Bug 350571 - GFS: on a shared root it's impossible to run gfs_tool lockdump on the root fs
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: gfs
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Robert Peterson
QA Contact: GFS Bugs
Blocks: 359331
Reported: 2007-10-24 11:10 EDT by Josef Bacik
Modified: 2010-01-11 22:16 EST
CC List: 2 users

Doc Type: Bug Fix
Last Closed: 2009-01-20 14:39:38 EST


Attachments
patch to fix the problem. (694 bytes, patch)
2007-10-24 11:10 EDT, Josef Bacik
Description Josef Bacik 2007-10-24 11:10:44 EDT
On a shared root fs you can't get a lockdump by running "gfs_tool lockdump /"
because /proc/mounts looks like this:

rootfs on / type rootfs (rw)
none on /dev type tmpfs (rw)
/dev/vg_zhlr421_sr/lv_sharedroot on / type gfs (rw,noatime,nodiratime)


This patch resolves the problem.
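The failure mode can be sketched as follows. This is a hypothetical illustration, not the actual gfs_tool code: a naive scan of /proc/mounts that stops at the first entry whose mount point matches "/" finds the rootfs pseudo-filesystem, not the gfs mount listed later, so the tool concludes the root is not a GFS filesystem. Since later entries overmount earlier ones, scanning to the last match finds the gfs entry.

```python
# Hypothetical sketch of the lookup bug, using mount-table lines in
# /proc/mounts format (device, mount point, fstype, options, dump, pass).
SAMPLE = """\
rootfs / rootfs rw 0 0
none /dev tmpfs rw 0 0
/dev/vg_zhlr421_sr/lv_sharedroot / gfs rw,noatime,nodiratime 0 0
"""

def fstype_first_match(mounts, mountpoint):
    # Buggy behavior: return the fstype of the FIRST matching entry.
    for line in mounts.splitlines():
        dev, mnt, fstype = line.split()[:3]
        if mnt == mountpoint:
            return fstype
    return None

def fstype_last_match(mounts, mountpoint):
    # Fixed behavior: later entries overmount earlier ones, so keep
    # scanning and return the fstype of the LAST matching entry.
    result = None
    for line in mounts.splitlines():
        dev, mnt, fstype = line.split()[:3]
        if mnt == mountpoint:
            result = fstype
    return result

print(fstype_first_match(SAMPLE, "/"))  # rootfs -> lockdump would refuse
print(fstype_last_match(SAMPLE, "/"))   # gfs    -> lockdump would proceed
```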
Comment 1 Josef Bacik 2007-10-24 11:10:44 EDT
Created attachment 236261 [details]
patch to fix the problem.
Comment 2 Robert Peterson 2007-10-30 17:26:16 EDT
I'll take it.
Comment 3 Robert Peterson 2007-10-30 17:47:58 EDT
Deferring to 4.7 as per discussion with Josef.
Comment 4 Rob Kenna 2007-10-30 17:51:06 EDT
Same problem on RHEL5 with gfs{,2}_tool?
Comment 5 Robert Peterson 2007-10-30 18:19:41 EDT
This is applicable to RHEL5 GFS1 and I opened bug #359331 for it.
I don't know for sure if there is a similar problem with GFS2 but the
code is completely different.  I assume not, but it's worth testing.
I don't currently have a node set up with gfs2 as its root, so it's
hard to tell.  Perhaps Josef can try it?
Comment 6 Robert Peterson 2007-11-09 15:26:37 EST
Hey Josef: The output from your opening comments looks more like what
the mount command prints.  It doesn't look like /proc/mounts to me.
Can you cat /proc/mounts once and paste it in a comment?
I don't have a gfs shared root system, and I'm trying to understand
the differences more completely.
Comment 7 Josef Bacik 2007-11-19 11:05:18 EST
$ cat zhlr421a/proc/mounts
rootfs / rootfs rw 0 0
none /dev tmpfs rw 0 0
/dev/vg_zhlr421_sr/lv_sharedroot / gfs rw,noatime,nodiratime 0 0
/dev/vg_zhlr421_sr/lv_sharedroot /cdsl.local gfs rw,noatime,nodiratime 0 0
none /dev tmpfs rw 0 0
/proc /proc proc rw,nodiratime 0 0
/proc/bus/usb /proc/bus/usb usbfs rw 0 0
/sys /sys sysfs rw 0 0
none /dev/pts devpts rw 0 0
none /dev/shm tmpfs rw 0 0
/dev/vg_local/lv_tmp /tmp ext3 rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /cluster/shared/var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
/dev/vg_scratch/lv_scratch /scratch gfs rw,noatime,nodiratime 0 0
/dev/vg_products/lv_products /products gfs rw,noatime,nodiratime 0 0
/dev/vg_zhlr421_sr/lv_sharedroot /oracle gfs rw,noatime,nodiratime 0 0
none /tmp/fence_tool/dev/pts devpts rw 0 0
/dev/vg_P04user/lv_P04user /cluster/mount/vg_P04user/lv_P04user gfs rw,noatime,nodiratime 0 0
/dev/vg_P04user/lv_P04user /usr/sap/E04 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04user/lv_P04user /sapdb gfs rw,noatime,nodiratime 0 0
/dev/vg_P04user/lv_P04user /oracle/P04 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04sap1/lv_sap1 /cluster/mount/vg_P04sap1/lv_sap1 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04sap1/lv_sap1 /sapmnt/E04 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogA/lv_P04origlogA /oracle/P04/origlogA gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogB/lv_P04origlogB /oracle/P04/origlogB gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogC/lv_P04origlogC /oracle/P04/origlogC gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogD/lv_P04origlogD /oracle/P04/origlogD gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogA/lv_P04mirrlogA /oracle/P04/mirrlogA gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogB/lv_P04mirrlogB /oracle/P04/mirrlogB gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogC/lv_P04mirrlogC /oracle/P04/mirrlogC gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogD/lv_P04mirrlogD /oracle/P04/mirrlogD gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3log/lv_P04lc3log /sapdb/L03/log gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3data/lv_P04lc3data /sapdb/L03/data gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3arch/lv_P04lc3arch /sapdb/L03/arch gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3bck/lv_P04lc3bck /sapdb/L03/backup gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data1/lv_P04data1 /oracle/P04/sapdata1 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data2/lv_P04data2 /oracle/P04/sapdata2 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data3/lv_P04data3 /oracle/P04/sapdata3 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data4/lv_P04data4 /oracle/P04/sapdata4 gfs rw,noatime,nodiratime 0 0
10.226.3.20:/usr/sap/trans/EPS /usr/sap/E04/trans/EPS nfs rw,v3,rsize=32768,wsize=32768,hard,tcp,lock,proto=tcp,timeo=600,retrans=5,addr=10.226.3.20 0 0
10.226.3.20:/usr/sap/trans/ext_preprod_P04 /usr/sap/E04/trans/ext_preprod_P04 nfs rw,v3,rsize=32768,wsize=32768,hard,tcp,lock,proto=tcp,timeo=600,retrans=5,addr=10.226.3.20 0 0
Comment 8 Robert Peterson 2008-03-11 12:39:47 EDT
I believe that co-requisite bug #431945 and bug #421761 should solve this
issue.  My plan is to do the fixes for both of them and make sure that they
solve this problem as well.
Comment 9 Robert Peterson 2008-04-03 11:31:41 EDT
I'm still waiting to hear if the fixes referred to in comment #8 solve
this problem.  I spoke to Toure who said he was waiting for work from
the customer.  So I'm putting this into NEEDINFO and adding him to the
cc list.
Comment 10 Steve Whitehouse 2008-12-10 12:47:40 EST
I think this can be closed now?
Comment 11 Robert Peterson 2009-01-20 14:39:38 EST
Closing as INSUFFICIENT_DATA.  It's likely that the problem does not
exist anymore, due to the bug fixes mentioned in comment #8.
