On a shared root fs you can't get a lockdump by doing a gfs_tool lockdump /, because /proc/mounts looks like this:

rootfs on / type rootfs (rw)
none on /dev type tmpfs (rw)
/dev/vg_zhlr421_sr/lv_sharedroot on / type gfs (rw,noatime,nodiratime)

This patch resolves the problem.
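To illustrate the failure mode (this is a sketch, not the attached patch): on a shared root, `/` appears twice in /proc/mounts, first as the overmounted rootfs entry and then as the real gfs entry. A tool that takes the first match for the mountpoint sees fstype "rootfs" and refuses to do a lockdump; taking the last matching entry yields the filesystem actually visible at that mountpoint. The sample data and the `find_fs` helper below are illustrative names, not from gfs_tool.

```python
# Sketch of the /proc/mounts ambiguity on a GFS shared root.
# Entries appear in mount order, so the LAST entry for a given
# mountpoint is the filesystem actually visible there.

SAMPLE_PROC_MOUNTS = """\
rootfs / rootfs rw 0 0
none /dev tmpfs rw 0 0
/dev/vg_zhlr421_sr/lv_sharedroot / gfs rw,noatime,nodiratime 0 0
"""

def find_fs(proc_mounts, mountpoint):
    """Return (device, fstype) of the last entry for mountpoint."""
    result = None
    for line in proc_mounts.splitlines():
        device, mnt, fstype = line.split()[:3]
        if mnt == mountpoint:
            # Keep overwriting so later (over)mounts win.
            result = (device, fstype)
    if result is None:
        raise LookupError("%s not found in /proc/mounts" % mountpoint)
    return result

# A first-match scan for "/" would stop at ("rootfs", "rootfs");
# the last-match scan finds the real GFS device.
print(find_fs(SAMPLE_PROC_MOUNTS, "/"))
# -> ('/dev/vg_zhlr421_sr/lv_sharedroot', 'gfs')
```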
Created attachment 236261 [details] patch to fix the problem.
I'll take it.
Deferring to 4.7 as per discussion with Josef.
Same problem on RHEL5 with gfs{,2}_tool?
This is applicable to RHEL5 GFS1 and I opened bug #359331 for it. I don't know for sure if there is a similar problem with GFS2 but the code is completely different. I assume not, but it's worth testing. I don't currently have a node set up with gfs2 as its root, so it's hard to tell. Perhaps Josef can try it?
Hey Josef: The output from your opening comments looks more like what the mount command prints. It doesn't look like /proc/mounts to me. Can you cat /proc/mounts once and paste it in a comment? I don't have a gfs shared root system, and I'm trying to understand the differences more completely.
$ cat zhlr421a/proc/mounts
rootfs / rootfs rw 0 0
none /dev tmpfs rw 0 0
/dev/vg_zhlr421_sr/lv_sharedroot / gfs rw,noatime,nodiratime 0 0
/dev/vg_zhlr421_sr/lv_sharedroot /cdsl.local gfs rw,noatime,nodiratime 0 0
none /dev tmpfs rw 0 0
/proc /proc proc rw,nodiratime 0 0
/proc/bus/usb /proc/bus/usb usbfs rw 0 0
/sys /sys sysfs rw 0 0
none /dev/pts devpts rw 0 0
none /dev/shm tmpfs rw 0 0
/dev/vg_local/lv_tmp /tmp ext3 rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /cluster/shared/var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
/dev/vg_scratch/lv_scratch /scratch gfs rw,noatime,nodiratime 0 0
/dev/vg_products/lv_products /products gfs rw,noatime,nodiratime 0 0
/dev/vg_zhlr421_sr/lv_sharedroot /oracle gfs rw,noatime,nodiratime 0 0
none /tmp/fence_tool/dev/pts devpts rw 0 0
/dev/vg_P04user/lv_P04user /cluster/mount/vg_P04user/lv_P04user gfs rw,noatime,nodiratime 0 0
/dev/vg_P04user/lv_P04user /usr/sap/E04 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04user/lv_P04user /sapdb gfs rw,noatime,nodiratime 0 0
/dev/vg_P04user/lv_P04user /oracle/P04 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04sap1/lv_sap1 /cluster/mount/vg_P04sap1/lv_sap1 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04sap1/lv_sap1 /sapmnt/E04 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogA/lv_P04origlogA /oracle/P04/origlogA gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogB/lv_P04origlogB /oracle/P04/origlogB gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogC/lv_P04origlogC /oracle/P04/origlogC gfs rw,noatime,nodiratime 0 0
/dev/vg_P04origlogD/lv_P04origlogD /oracle/P04/origlogD gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogA/lv_P04mirrlogA /oracle/P04/mirrlogA gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogB/lv_P04mirrlogB /oracle/P04/mirrlogB gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogC/lv_P04mirrlogC /oracle/P04/mirrlogC gfs rw,noatime,nodiratime 0 0
/dev/vg_P04mirrlogD/lv_P04mirrlogD /oracle/P04/mirrlogD gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3log/lv_P04lc3log /sapdb/L03/log gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3data/lv_P04lc3data /sapdb/L03/data gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3arch/lv_P04lc3arch /sapdb/L03/arch gfs rw,noatime,nodiratime 0 0
/dev/vg_P04lc3bck/lv_P04lc3bck /sapdb/L03/backup gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data1/lv_P04data1 /oracle/P04/sapdata1 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data2/lv_P04data2 /oracle/P04/sapdata2 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data3/lv_P04data3 /oracle/P04/sapdata3 gfs rw,noatime,nodiratime 0 0
/dev/vg_P04data4/lv_P04data4 /oracle/P04/sapdata4 gfs rw,noatime,nodiratime 0 0
10.226.3.20:/usr/sap/trans/EPS /usr/sap/E04/trans/EPS nfs rw,v3,rsize=32768,wsize=32768,hard,tcp,lock,proto=tcp,timeo=600,retrans=5,addr=10.226.3.20 0 0
10.226.3.20:/usr/sap/trans/ext_preprod_P04 /usr/sap/E04/trans/ext_preprod_P04 nfs rw,v3,rsize=32768,wsize=32768,hard,tcp,lock,proto=tcp,timeo=600,retrans=5,addr=10.226.3.20 0 0
I believe that the fixes for co-requisite bug #431945 and bug #421761 should solve this issue. My plan is to do the fixes for both of them and make sure that they solve this problem as well.
I'm still waiting to hear if the fixes referred to in comment #8 solve this problem. I spoke to Toure who said he was waiting for work from the customer. So I'm putting this into NEEDINFO and adding him to the cc list.
I think this can be closed now?
Closing as INSUFFICIENT_DATA. It's likely that the problem does not exist anymore, due to the bug fixes mentioned in comment #8.