Escalated to Bugzilla from IssueTracker
Event posted on 06-22-2009 06:17am EDT by pkeloth

Description of problem:
=======================
sosreport gives the below warning even though fsid is specified in the 'fs' resource.

 * one or more nfs export do not have a fsid attribute set.

In the script I could see the below lines, which cause this message.

/usr/lib/python2.4/site-packages/sos/plugins/cluster.py
<<snip>>
        # check for fs exported via nfs without nfsid attribute
        if len(xpathContext.xpathEval("/cluster/rm/service//fs[not(@fsid)]/nfsexport")):
            self.addDiagnose("one or more nfs export do not have a fsid attribute set.")
<<snip>>

Here it is parsing through the tags in '/etc/cluster/cluster.conf' as below:

    /cluster/rm/service//fs[not(@fsid)]/nfsexport

cluster -> rm -> service -> fs -> then it checks for the fsid.

If you have 'fs' as a shared resource, the 'fsid' won't be in the 'service' tag; it will be in the 'resources' tag in cluster.conf. So if you have the service configured as below (which is also what our documentation recommends), it gives you this warning.

1. I have run sosreport with the below entry in the service section, and you can see the same warning message.

    <service autostart="1" name="Test-Service">
        <fs ref="Test-FS">
            <nfsexport ref="NFS-EXPORT">
                <nfsclient ref="NFS-Client"/>
            </nfsexport>
        </fs>
        <ip ref="10.65.7.177"/>
    </service>

This is the snip from when I run sosreport:

<<snip>>
One or more plugins have detected a problem in your configuration.
Please review the following messages:

cluster:
 * one or more nfs export do not have a fsid attribute set.

Are you sure you would like to continue (y/n) ? y

Please enter your first initial and last name [Cluster1]:
<</snip>>

2. I have made the below changes in the configuration, and then it doesn't show me that warning.

    <service autostart="1" name="Test-Service">
        <fs device="/dev/TestVG/TestLV" force_fsck="0" force_unmount="1" fsid="13388" fstype="ext3" mountpoint="/test" name="TestFS" options="rw" self_fence="0">
            <nfsexport ref="NFS-EXPORT">
                <nfsclient ref="NFS-Client"/>
            </nfsexport>
        </fs>
        <ip ref="10.65.7.177"/>
    </service>

<<snip>>
This process may take a while to complete.
No changes will be made to your system.

Press ENTER to continue, or CTRL-C to quit.

Please enter your first initial and last name [Cluster1]:
<</snip>>

How reproducible:
=================
Always

Steps to Reproduce:
===================
Use a shared fs resource in the service section.

Actual results:
===============
Gives the warning even though fsid is mentioned.

Expected results:
=================
Should not show this warning if fsid is mentioned.

This event sent from IssueTracker by sbradley  [Support Engineering Group]
 issue 309625
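To make the false positive easy to reproduce outside sosreport, here is a minimal standalone sketch. It assumes the libxml2 Python bindings the plugin itself uses; the trimmed-down cluster.conf is a hypothetical reduction of the shared-resource config above, not a real file.

    import libxml2

    # Hypothetical minimal cluster.conf with a shared fs resource: the
    # fsid lives on the <fs> under <resources>, while the service only
    # carries an <fs ref="..."/> stub with no attributes of its own.
    CLUSTER_CONF = """\
    <cluster>
      <rm>
        <resources>
          <fs device="/dev/TestVG/TestLV" fsid="13388" fstype="ext3"
              mountpoint="/test" name="Test-FS"/>
        </resources>
        <service autostart="1" name="Test-Service">
          <fs ref="Test-FS">
            <nfsexport ref="NFS-EXPORT">
              <nfsclient ref="NFS-Client"/>
            </nfsexport>
          </fs>
        </service>
      </rm>
    </cluster>
    """

    doc = libxml2.parseDoc(CLUSTER_CONF)
    ctx = doc.xpathNewContext()

    # The plugin's check: any <fs> under a service that lacks @fsid and
    # wraps an <nfsexport>. The ref stub matches even though the
    # referenced resource does define fsid="13388".
    hits = ctx.xpathEval("/cluster/rm/service//fs[not(@fsid)]/nfsexport")
    print "warning fires:", len(hits) > 0    # -> warning fires: True

    ctx.xpathFreeContext()
    doc.freeDoc()

Any fix therefore has to chase the ref into /cluster/rm/resources before concluding that the fsid is really missing.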
Event posted on 06-23-2009 01:40pm EDT by sbradley

File uploaded: cluster.py.patch

This event sent from IssueTracker by sbradley  [Support Engineering Group]
 issue 309625
it_file 231639
Event posted on 06-23-2009 01:40pm EDT by sbradley

Wrote a patch to resolve this issue; it contains the updated function for cluster.py. I will attach the patch to resolve this.

--sbradley

This event sent from IssueTracker by sbradley  [Support Engineering Group]
 issue 309625
Created attachment 349123 [details]
cluster.py patch with fix
~~ Attention Customers and Partners - RHEL 5.5 Beta is now available on RHN ~~

RHEL 5.5 Beta has been released! There should be a fix present in this release that addresses your request. Please test and report back results here, by March 3rd 2010 (2010-03-03) or sooner.

Upon successful verification of this request, post your results and update the Verified field in Bugzilla with the appropriate value.

If you encounter any issues while testing, please describe them and set this bug into NEED_INFO. If you encounter new defects or have additional patch(es) to request for inclusion, please clone this bug per each request and escalate through your support representative.
Hello Shane,

If possible I'll need an answer on this by EOB.

"""
Not sure if this actually is a bug, because I'm not able to get my hands on any real cluster.confs. But it seems that with the patch, sos lost the ability to detect a missing fsid in fs elements where there is no ref attribute.
"""

Thanks,
Adam
Yeah, just a problem with the patch. The current version does work for shared resources, but does not work with private resources: fs tags with a ref attribute will be checked for a missing fsid, but fs tags without a ref will not.

This fails:

    <service autostart="0" name="demo1nfs" recovery="disable">
        <ip address="192.168.1.55" monitor_link="1"/>
        <fs device="/dev/sda1" force_fsck="0" force_unmount="1" fstype="ext3" mountpoint="/media/demo1" name="demo1EXT3fs" options="" self_fence="0">
            <nfsexport name="nfsdemo1export">
                <nfsclient name="demo1client" options="rw" path="/nfs" target="192.168.1.0/24"/>
            </nfsexport>
        </fs>
    </service>

--sbradley
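The two shapes can be separated in XPath itself. A minimal sketch, again assuming the libxml2 Python bindings the plugin uses; the config is a hypothetical reduction of the failing private-resource example above:

    import libxml2

    # Trimmed-down private-resource config: the fs carries its
    # attributes inline and has no @ref (and here, no fsid either).
    PRIVATE_CONF = """\
    <cluster>
      <rm>
        <service autostart="0" name="demo1nfs" recovery="disable">
          <fs device="/dev/sda1" fstype="ext3"
              mountpoint="/media/demo1" name="demo1EXT3fs">
            <nfsexport name="nfsdemo1export"/>
          </fs>
        </service>
      </rm>
    </cluster>
    """

    doc = libxml2.parseDoc(PRIVATE_CONF)
    ctx = doc.xpathNewContext()

    # Shared resources are <fs ref="..."/> stubs; private resources have
    # no @ref. A check that only chases refs never sees the second set.
    print len(ctx.xpathEval("/cluster/rm/service//fs[not(@fsid)][@ref]"))       # 0
    print len(ctx.xpathEval("/cluster/rm/service//fs[not(@fsid)][not(@ref)]"))  # 1

    ctx.xpathFreeContext()
    doc.freeDoc()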
Here is the fix that works in all 4 possible cases:

1) ref with fsid tag (public resource)
2) ref without fsid tag (public resource)
3) no ref with fsid tag (private resource)
4) no ref without fsid tag (private resource)

        # check for fs exported via nfs without nfsid attribute
        if len(xpathContext.xpathEval("/cluster/rm/service//fs[not(@fsid)]/nfsexport")):
            for xmlNode in xpathContext.xpathEval("/cluster/rm/service//fs[not(@fsid)]"):
                fsRefAttribute = xmlNode.xpathEval("@ref")
                if len(fsRefAttribute) > 0:
                    fsRefName = fsRefAttribute[0].content
                    if len(xpathContext.xpathEval("/cluster/rm/resources/fs[@name='%s'][not(@fsid)]" % fsRefName)):
                        self.addDiagnose("one or more nfs export do not have a fsid attribute set.")
                        break
                else:
                    self.addDiagnose("one or more nfs export do not have a fsid attribute set.")
                    break
        # cluster.conf file version and the in-memory cluster configuration version matches
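As a sanity check, the fixed logic can be exercised against all four cases with a standalone harness like the sketch below. The missing_fsid() wrapper and the minimal configs are hypothetical stand-ins for the plugin method and real cluster.confs (libxml2 Python bindings assumed):

    import libxml2

    def missing_fsid(conf):
        """Return True if the fixed check would flag this cluster.conf text."""
        doc = libxml2.parseDoc(conf)
        ctx = doc.xpathNewContext()
        flagged = False
        if len(ctx.xpathEval("/cluster/rm/service//fs[not(@fsid)]/nfsexport")):
            for node in ctx.xpathEval("/cluster/rm/service//fs[not(@fsid)]"):
                ref = node.xpathEval("@ref")
                if len(ref) > 0:
                    # shared resource: chase the ref into <resources>
                    if len(ctx.xpathEval("/cluster/rm/resources/fs[@name='%s'][not(@fsid)]" % ref[0].content)):
                        flagged = True
                        break
                else:
                    # private resource with no fsid inline
                    flagged = True
                    break
        ctx.xpathFreeContext()
        doc.freeDoc()
        return flagged

    TEMPLATE = """<cluster><rm>
      <resources><fs name="shared" %s/></resources>
      <service name="s">%s</service>
    </rm></cluster>"""

    cases = {
        "1) ref, fsid on resource": TEMPLATE % ('fsid="1"',
            '<fs ref="shared"><nfsexport name="e"/></fs>'),
        "2) ref, no fsid anywhere": TEMPLATE % ('',
            '<fs ref="shared"><nfsexport name="e"/></fs>'),
        "3) no ref, fsid inline":   TEMPLATE % ('fsid="1"',
            '<fs name="p" fsid="2"><nfsexport name="e"/></fs>'),
        "4) no ref, no fsid":       TEMPLATE % ('fsid="1"',
            '<fs name="p"><nfsexport name="e"/></fs>'),
    }
    for label in sorted(cases):
        print label, "-> warn:", missing_fsid(cases[label])
    # Expected: cases 2 and 4 warn, cases 1 and 3 do not.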
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2010-0201.html