Bug 706141

Summary: gfs2 mounts doubled up in mtab
Product: Red Hat Enterprise Linux 6 Reporter: Tim Wilkinson <twilkins>
Component: cluster Assignee: Abhijith Das <adas>
Status: CLOSED ERRATA QA Contact: Cluster QE <mspqa-list>
Severity: medium Docs Contact:
Priority: medium    
Version: 6.1 CC: adas, anprice, bmarson, bmarzins, ccaulfie, cluster-maint, jpayne, kzhang, lhh, perfbz, rpeterso, rwheeler, teigland, tmarshal
Target Milestone: beta   
Target Release: 6.2   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: cluster-3.0.12.1-7.el6 Doc Type: Bug Fix
Doc Text:
Cause: Incorrect logic in mount.gfs2 when trying to find mtab entries to delete; the exact device/mount paths were not being compared. Consequence: During remounts, the original entry is not found and hence not deleted, which results in double mtab entries. Fix: The patch uses realpath() on the device/mount paths so that they match up with what's in mtab. Result: The correct original mtab entry is deleted during a remount and a replacement entry with the new mount options is inserted in its place.
Story Points: ---
Clone Of:
Environment: Performance Engineering group cluster
Last Closed: 2011-12-06 14:52:03 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description              Flags
first bash at a patch    none
updated patch            none

Description Tim Wilkinson 2011-05-19 15:36:30 UTC
Description of problem:
----------------------
With two gfs2 mounts defined in /etc/fstab, the 'df' command shows each of these mounts twice after a system boot.



Component Version-Release:
-------------------------
kernel 2.6.32-125.el6.x86_64
gfs2-utils-3.0.12.35.el6 x86_64



How reproducible:
----------------
consistent



Steps to Reproduce:
------------------
1. define a gfs2 mount in /etc/fstab
2. start gfs2 service



Actual results:
--------------
df reports double mounts ...
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
 ...
/dev/mapper/vg_vol1-lv_oracleHome
                      83877056   5483600  78393456   7% /ora/db
/dev/mapper/vg_vol1-lv_iozone1
                      83877056    132436  83744620   1% /RHTSspareLUN1
/dev/mapper/vg_vol1-lv_oracleHome
                      83877056   5483600  78393456   7% /ora/db
/dev/mapper/vg_vol1-lv_iozone1
                      83877056    132436  83744620   1% /RHTSspareLUN1



Expected results:
----------------
df reports a single entry for each gfs2 mount ...
# df
Filesystem           1K-blocks      Used Available Use% Mounted on
 ...
/dev/mapper/vg_vol1-lv_oracleHome
                      83877056   5483600  78393456   7% /ora/db
/dev/mapper/vg_vol1-lv_iozone1
                      83877056    132436  83744620   1% /RHTSspareLUN1



Workaround:
----------
umount the mounts by hand twice. If umount is run only once, the remount attempt fails, reporting that the mounts are already present even though df does not show them. The second umount attempt then fails (only the stale mtab entry remains at that point), after which the mount succeeds and df shows a single entry for each gfs2 mount.



Additional info:
---------------
While /proc/mounts shows only a single entry for each of these mounts, /etc/mtab lists the double entries, one set of which includes the nobarrier option, which was NOT specified as an option in the fstab file.


# grep gfs2 /etc/fstab
/dev/mapper/vg_vol1-lv_oracleHome /ora/db  gfs2 defaults,noatime,nodiratime  0 0
/dev/mapper/vg_vol1-lv_iozone1 /RHTSspareLUN1   gfs2 defaults,noatime,nodiratime  0 0 


# grep gfs2 /proc/mounts
/dev/dm-23 /ora/db gfs2 rw,noatime,nodiratime,hostdata=jid=1,nobarrier 0 0
/dev/dm-20 /RHTSspareLUN1 gfs2 rw,noatime,nodiratime,localflocks,localcaching,nobarrier 0 0


# grep gfs2 /etc/mtab
/dev/mapper/vg_vol1-lv_oracleHome /ora/db gfs2 rw,noatime,nodiratime,hostdata=jid=1 0 0
/dev/mapper/vg_vol1-lv_iozone1 /RHTSspareLUN1 gfs2 rw,noatime,nodiratime,localflocks,localcaching 0 0
/dev/mapper/vg_vol1-lv_oracleHome /ora/db gfs2 rw,noatime,nodiratime,hostdata=jid=1,nobarrier 0 0
/dev/mapper/vg_vol1-lv_iozone1 /RHTSspareLUN1 gfs2 rw,noatime,nodiratime,localflocks,localcaching,nobarrier 0 0



tuned-adm active profile: enterprise-storage

Comment 2 RHEL Program Management 2011-05-26 16:29:37 UTC
This request was evaluated by Red Hat Product Management for inclusion
in a Red Hat Enterprise Linux maintenance release. Product Management has 
requested further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed 
products. This request is not yet committed for inclusion in an Update release.

Comment 3 Barry Marson 2011-06-14 14:00:42 UTC
This was discovered on the cluster when each node of the cluster had mutually exclusive gfs2 file systems in the fstab file.  At boot up, tuned was also installed and configured with enterprise-storage.

The 'mount -o remount' is the culprit.  /proc/mounts shows only a single entry, but the mtab file shows both the original mount and the tuned remount.

Barry

Comment 4 Steve Whitehouse 2011-06-14 14:09:08 UTC
Thanks for the info wrt remount - that is probably why I didn't see it when I tried to reproduce it. It should be reasonably easy to get fixed now.

Comment 6 Abhijith Das 2011-07-15 20:05:42 UTC
Created attachment 513440 [details]
first bash at a patch

Comment 7 Abhijith Das 2011-07-18 19:37:07 UTC
The patch in comment #6 fixes the issue.

Without the patch I see the following:

[root@smoke-02 ~]# mount -t gfs2 /dev/smoke_vg/100g /mnt/gfs2/

[root@smoke-02 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,relatime,hostdata=jid=0 0 0

[root@smoke-02 ~]# mount -o remount,noatime /mnt/gfs2
file system mounted on /mnt/gfs2 not found in mtab

[root@smoke-02 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,relatime,hostdata=jid=0 0 0
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,noatime,hostdata=jid=0 0 0

I unmounted twice to get rid of the mtab entries and tried on another node in the cluster that has the patch applied to mount.gfs2:

[root@smoke-01 ~]# mount -t gfs2 /dev/smoke_vg/100g /mnt/gfs2/

[root@smoke-01 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,relatime,hostdata=jid=0 0 0

[root@smoke-01 ~]# mount -o remount,noatime /mnt/gfs2

[root@smoke-01 ~]# cat /etc/mtab | grep gfs2
/dev/mapper/smoke_vg-100g /mnt/gfs2 gfs2 rw,noatime,hostdata=jid=0 0 0

Comment 8 Abhijith Das 2011-07-21 20:02:46 UTC
Created attachment 514298 [details]
updated patch

Comment 10 Andrew Price 2011-07-26 09:15:52 UTC
Setting the component to cluster as the patch is against the mount tool.

Comment 13 Abhijith Das 2011-10-27 14:38:48 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause: Incorrect logic in mount.gfs2 when trying to find mtab entries to delete; the exact device/mount paths were not being compared.

Consequence: During remounts, the original entry is not found and hence not deleted. This results in double mtab entries.

Fix: The patch uses realpath() on the device/mount paths so that they match up with what's in mtab.

Result: The correct original mtab entry is deleted during a remount and a replacement entry with the new mount options is inserted in its place.
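
As a rough illustration of the note above (a minimal sketch, not the patch attached to this bug; the program structure and helper names are assumptions for illustration), the idea is to canonicalize both the command-line device/mountpoint and the paths recorded in mtab with realpath() before comparing, so that, for example, /dev/smoke_vg/100g and its /dev/mapper/smoke_vg-100g alias resolve to the same path and the existing entry can be found and removed:

/* Sketch: look up an mtab entry by device and mountpoint, comparing
 * canonicalized paths so symlinked and device-mapper names line up.
 * Without the realpath() calls, a plain string comparison of the
 * command-line arguments against mtab misses the entry during remounts. */
#include <limits.h>
#include <mntent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return 1 if the mtab entry refers to the same device and mountpoint. */
static int entry_matches(const struct mntent *me,
                         const char *device, const char *mountpoint)
{
        char dev_real[PATH_MAX], dir_real[PATH_MAX];
        char ent_dev_real[PATH_MAX], ent_dir_real[PATH_MAX];

        if (!realpath(device, dev_real) || !realpath(mountpoint, dir_real))
                return 0;
        if (!realpath(me->mnt_fsname, ent_dev_real) ||
            !realpath(me->mnt_dir, ent_dir_real))
                return 0;

        return !strcmp(dev_real, ent_dev_real) &&
               !strcmp(dir_real, ent_dir_real);
}

int main(int argc, char **argv)
{
        FILE *mtab;
        struct mntent *me;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <device> <mountpoint>\n", argv[0]);
                return 2;
        }

        mtab = setmntent("/etc/mtab", "r");
        if (!mtab) {
                perror("setmntent");
                return 2;
        }

        while ((me = getmntent(mtab)) != NULL) {
                if (entry_matches(me, argv[1], argv[2])) {
                        printf("found: %s on %s (%s)\n",
                               me->mnt_fsname, me->mnt_dir, me->mnt_opts);
                        endmntent(mtab);
                        return 0;
                }
        }

        endmntent(mtab);
        fprintf(stderr, "file system mounted on %s not found in mtab\n", argv[2]);
        return 1;
}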

Comment 14 Justin Payne 2011-11-08 16:57:29 UTC
Verified in gfs2-utils-3.0.12.1-23.el6.x86_64

[root@dash-02 ~]# cat /etc/mtab |grep share0
/dev/mapper/dash-share0 /mnt/share0 gfs2 rw,seclabel,relatime,hostdata=jid=1 0 0

[root@dash-02 ~]# mount -o remount,noatime /mnt/share0

[root@dash-02 ~]# cat /etc/mtab |grep share0
/dev/mapper/dash-share0 /mnt/share0 gfs2 rw,seclabel,noatime,hostdata=jid=1 0 0

[root@dash-02 ~]# rpm -q gfs2-utils
gfs2-utils-3.0.12.1-23.el6.x86_64

Comment 15 errata-xmlrpc 2011-12-06 14:52:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1516.html