Bug 680930
Summary: | [RFE]: split relaxng schema on a per package base and allow autoupdates on live system | |||
---|---|---|---|---|
Product: | Red Hat Enterprise Linux 6 | Reporter: | Fabio Massimo Di Nitto <fdinitto> | |
Component: | cluster | Assignee: | Fabio Massimo Di Nitto <fdinitto> | |
Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> | |
Severity: | low | Docs Contact: | ||
Priority: | medium | |||
Version: | 6.1 | CC: | agk, ccaulfie, cluster-maint, djansa, lhh, rpeterso, teigland | |
Target Milestone: | rc | Keywords: | FutureFeature | |
Target Release: | --- | |||
Hardware: | Unspecified | |||
OS: | Unspecified | |||
Whiteboard: | ||||
Fixed In Version: | cluster-3.0.12.1-15.el6 | Doc Type: | Enhancement | |
Doc Text: |
The introduction of dynamic schema generation gives end users the flexibility to plug custom resource and fence agents into the Red Hat Enterprise Linux High Availability Add-on, while retaining the ability to validate their /etc/cluster.conf configuration file against those agents. It is a strict requirement that custom agents provide correct metadata output and that the agents be installed on all cluster nodes.
|
Story Points: | --- | |
Clone Of: | ||||
: | 917782 (view as bug list) | Environment: | ||
Last Closed: | 2011-12-06 14:50:59 UTC | Type: | --- | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | 719645 | |||
Bug Blocks: | 693781, 917782 |
Description
Fabio Massimo Di Nitto
2011-02-28 15:02:04 UTC
First draft on how to implement the split (names and paths are still up for discussion, but just to lay down the idea):

1) each package will provide a relaxng schema, no matter how small, in /usr/share/cluster/config_validation/$package.rng. The benefit is that each single upstream can take care of updating its schema independently of the cman package that now ships it all. As such, a developer that adds a cluster related option can simply update the schema to reflect that. Since the file doesn't require any code change, it is a non-intrusive change to make. Customers/users/ISVs can drop their snippets in there too, to allow custom bits to be validated.

2) some packages can generate their rng components at build time, others can't. This is out of scope for now. Each upstream should decide the best policy to provide the .rng file. Initially, we can simply re-use a split of the current schema and worry about how to generate it later on.

3) ccs_link_schema will basically ls *.rng in the defined directory to generate /var/cache/cluster/final_schema.rng, which will contain only <include ..> directives (preceded by at least XML validation of all .rng bits, to avoid totally broken output). To avoid breaking compatibility with possible external tools, the current relaxng schema could become a symlink to the autogenerated one. ccs_link_schema can be executed either manually, or by ccs_config_validate on each run, to make sure the latest and greatest schema is picked up all the time.

http://git.fedorahosted.org/git/?p=cluster.git;a=commitdiff;h=85abc322eb508542bf2548f5fd4313ad456d9ff2

Patch now available upstream. Unit test results will follow soon.

Additional testing revealed a corner case where the relaxng schema failed to compile if either ras or fas is not installed on the system. In a "packaged" environment, fas is a strict dependency for cman, but ras is not.
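The ccs_link_schema behaviour described in point 3 could look roughly like the sketch below. This is illustrative only: the function name and the xmllint pre-check are assumptions, not the actual upstream implementation.

```shell
# Sketch (not the real implementation) of the ccs_link_schema idea:
# collect every per-package RELAX NG snippet, keep only the ones that are
# well-formed XML, and emit a final schema made purely of <include> directives.
link_schema() {
    schema_dir=$1   # e.g. /usr/share/cluster/config_validation
    out=$2          # e.g. /var/cache/cluster/final_schema.rng
    {
        echo '<?xml version="1.0" encoding="UTF-8"?>'
        echo '<grammar xmlns="http://relaxng.org/ns/structure/1.0">'
        for rng in "$schema_dir"/*.rng; do
            [ -f "$rng" ] || continue
            # validate the snippet as XML first, when xmllint is available,
            # so one broken drop-in cannot wreck the whole schema
            if command -v xmllint >/dev/null 2>&1; then
                xmllint --noout "$rng" 2>/dev/null || continue
            fi
            echo "  <include href=\"$rng\"/>"
        done
        echo '</grammar>'
    } > "$out"
}
```

With this shape, a customer snippet dropped into the directory is picked up on the next run without any code change.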
This effectively exposes the issue. The patch http://git.fedorahosted.org/git/?p=cluster.git;a=commitdiff;h=cd2cab69e262fb9bb24df9ed77e456e651382266 creates STUBS for fas/ras in case they are not available.

One more amendment to handle the service.sh and vm.sh special case:
http://git.fedorahosted.org/git/?p=cluster.git;a=commitdiff;h=28b6e18ebcd5eed8768679dc218af21ee84aed7f

Unit test results: the attached testsuite is not exactly cluster aware. Run it on one node at a time if installed on shared storage, as it saves temp data in the current working directory. It needs to be executed on a node, or patched to use the correct cluster.rng.

pre-upgrade checks:

[root@rhel6-node2 ~]# ls -las /usr/share/cluster/cluster.rng
172 -rw-r--r-- 1 root root 174383 Jun 23 11:03 /usr/share/cluster/cluster.rng

normal plain text file

[root@rhel6-node2 ~]# yum install java-1.6.0-openjdk
(required for testsuite)

unpack testsuite

[root@rhel6-node2 seethe]# ./seethe.sh RHEL6
.... lots of output ....
101 files checked

install updates

[root@rhel6-node2 ~]# ls -las /usr/share/cluster/cluster.rng
0 lrwxrwxrwx 1 root root 28 Jul 12 08:48 /usr/share/cluster/cluster.rng -> /var/lib/cluster/cluster.rng
[root@rhel6-node2 ~]# ls -las /var/lib/cluster/cluster.rng
168 -rw-r--r-- 1 root root 168912 Jul 12 08:48 /var/lib/cluster/cluster.rng

note: the size of the file might differ since it is now generated dynamically. The compat symlink exists in /usr/share/cluster/cluster.rng, the new schema is generated in /var/lib/cluster/cluster.rng, and all tools (except ccs_update_schema) keep using /usr/share/cluster/cluster.rng.

[root@rhel6-node2 seethe]# ./seethe.sh RHEL6
PASS 000_cluster.conf
.... lots of output ....
98 files checked

3 configs will fail: those have fence_lpar parameters that are not validated/supported in RHEL. The previous schema, as shipped in the older package, did contain unsupported data.
Pre-runtime tests:

conditions:
- execution order is important to test caching
- a simple config is validated via ccs_config_validate in between each point to verify that the relaxng schema compiles correctly. Results from calls to ccs_config_validate have been omitted to keep the output reasonable.
- all tests are PASS.
- at any time it is possible to verify the cache age by checking file timestamps in /var/lib/cluster.

1) check that we have a hot cache after package upgrade/install

[root@rhel6-node2 ~]# time ccs_update_schema
real 0m0.259s
user 0m0.078s
sys 0m0.180s

2) force regeneration of the schema

[root@rhel6-node2 ~]# time ccs_update_schema -f
real 0m7.439s
user 0m2.346s
sys 0m5.091s

3) force regeneration of the schema and be verbose

[root@rhel6-node2 ~]# ccs_update_schema -vf
Generating resource-agents cache
ras: checking required files
ras: looking for agents
ras: generating hashes
ras: generating rng data
ras: generating rng data
ras: processing service.sh ras: processing ASEHAagent.sh ras: processing SAPDatabase ras: processing SAPInstance ras: processing apache.sh ras: processing checkquorum ras: processing clusterfs.sh ras: processing fence_scsi_check.pl ras: processing fs.sh ras: processing ip.sh ras: processing lvm.sh ras: processing lvm_by_lv.sh ras: processing lvm_by_vg.sh ras: processing mysql.sh ras: processing named.sh ras: processing netfs.sh ras: processing nfsclient.sh ras: processing nfsexport.sh ras: processing nfsserver.sh ras: processing ocf-shellfuncs ras: processing openldap.sh ras: processing oracledb.sh ras: processing orainstance.sh ras: processing oralistener.sh ras: processing postgres-8.sh ras: processing samba.sh ras: processing script.sh ras: processing svclib_nfslock ras: processing tomcat-6.sh ras: processing vm.sh
ras: generating ref data
ras: processing service.sh ras: processing ASEHAagent.sh ras: processing SAPDatabase ras: processing SAPInstance ras: processing apache.sh ras: processing checkquorum ras: processing clusterfs.sh ras: processing fence_scsi_check.pl ras: processing fs.sh ras: processing ip.sh ras: processing lvm.sh ras: processing lvm_by_lv.sh ras: processing lvm_by_vg.sh ras: processing mysql.sh ras: processing named.sh ras: processing netfs.sh ras: processing nfsclient.sh ras: processing nfsexport.sh ras: processing nfsserver.sh ras: processing ocf-shellfuncs ras: processing openldap.sh ras: processing oracledb.sh ras: processing orainstance.sh ras: processing oralistener.sh ras: processing postgres-8.sh ras: processing samba.sh ras: processing script.sh ras: processing svclib_nfslock ras: processing tomcat-6.sh ras: processing vm.sh
Generating fence-agents cache
fas: checking required files
fas: looking for agents
fas: generating hashes
fas: generating new cache
fas: processing fence_ack_manual fas: processing fence_apc fas: processing fence_apc_snmp fas: processing fence_bladecenter fas: processing fence_bladecenter_snmp fas: processing fence_brocade fas: processing fence_cisco_mds fas: processing fence_cisco_ucs fas: processing fence_drac fas: processing fence_drac5 fas: processing fence_egenera fas: processing fence_eps fas: processing fence_ibmblade fas: processing fence_ifmib fas: processing fence_ilo fas: processing fence_ilo_mp fas: processing fence_intelmodular fas: processing fence_ipmilan fas: processing fence_rhevm fas: processing fence_rsa fas: processing fence_rsb fas: processing fence_sanbox2 fas: processing fence_scsi fas: processing fence_virsh fas: processing fence_virt fas: processing fence_vmware fas: processing fence_wti fas: processing fence_xvm
Building final relaxng schema
Installing schema in /var/lib/cluster
all done. have a nice day!
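The "generating hashes" / "using local cache" steps visible in the verbose output suggest a hash-based cache check. A minimal sketch follows; the helper names and the md5sum-based hashing are assumptions for illustration, not the actual ccs_update_schema logic.

```shell
# Hypothetical sketch of hash-based cache-change detection: regenerate the
# rng fragment only when the set or content of installed agents has changed.
refresh_hash() {
    # store one checksum per agent file, sorted for a stable comparison
    find "$1" -type f -exec md5sum {} \; | sort > "$2"
}
cache_is_hot() {
    agents_dir=$1    # e.g. /usr/share/cluster
    hash_file=$2     # e.g. /var/lib/cluster/resources.rng.hash
    [ -f "$hash_file" ] &&
        [ "$(find "$agents_dir" -type f -exec md5sum {} \; | sort)" = "$(cat "$hash_file")" ]
}
```

Adding, removing, or modifying any agent changes the computed hash, which is consistent with the cache regeneration seen in tests 5 through 8 below.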
[root@rhel6-node2 ~]#

4) be verbose with a hot cache

[root@rhel6-node2 ~]# ccs_update_schema -v
Generating resource-agents cache
ras: checking required files
ras: looking for agents
ras: generating hashes
ras: using local cache
Generating fence-agents cache
fas: checking required files
fas: looking for agents
fas: generating hashes
fas: using local cache
Building final relaxng schema
Installing schema in /var/lib/cluster
all done. have a nice day!

5) check ras cache change detection

[root@rhel6-node2 ~]# mv /usr/share/cluster/ASEHAagent.sh /root/
[root@rhel6-node2 ~]# ccs_update_schema -v
Generating resource-agents cache
ras: checking required files
ras: looking for agents
ras: generating hashes
ras: generating rng data
ras: generating rng data
ras: processing service.sh ras: processing SAPDatabase ras: processing SAPInstance ras: processing apache.sh ras: processing checkquorum ras: processing clusterfs.sh ras: processing fence_scsi_check.pl ras: processing fs.sh ras: processing ip.sh ras: processing lvm.sh ras: processing lvm_by_lv.sh ras: processing lvm_by_vg.sh ras: processing mysql.sh ras: processing named.sh ras: processing netfs.sh ras: processing nfsclient.sh ras: processing nfsexport.sh ras: processing nfsserver.sh ras: processing ocf-shellfuncs ras: processing openldap.sh ras: processing oracledb.sh ras: processing orainstance.sh ras: processing oralistener.sh ras: processing postgres-8.sh ras: processing samba.sh ras: processing script.sh ras: processing svclib_nfslock ras: processing tomcat-6.sh ras: processing vm.sh
ras: generating ref data
ras: processing service.sh ras: processing SAPDatabase ras: processing SAPInstance ras: processing apache.sh ras: processing checkquorum ras: processing clusterfs.sh ras: processing fence_scsi_check.pl ras: processing fs.sh ras: processing ip.sh ras: processing lvm.sh ras: processing lvm_by_lv.sh ras: processing lvm_by_vg.sh ras: processing mysql.sh ras: processing named.sh ras: processing netfs.sh ras: processing nfsclient.sh ras: processing nfsexport.sh ras: processing nfsserver.sh ras: processing ocf-shellfuncs ras: processing openldap.sh ras: processing oracledb.sh ras: processing orainstance.sh ras: processing oralistener.sh ras: processing postgres-8.sh ras: processing samba.sh ras: processing script.sh ras: processing svclib_nfslock ras: processing tomcat-6.sh ras: processing vm.sh
Generating fence-agents cache
fas: checking required files
fas: looking for agents
fas: generating hashes
fas: using local cache
Building final relaxng schema
Installing schema in /var/lib/cluster
all done. have a nice day!
[root@rhel6-node2 ~]# grep ASEHA /usr/share/cluster/cluster.rng
[root@rhel6-node2 ~]#

6) simulate installing a custom resource agent (depends on 5 being completed)

[root@rhel6-node2 ~]# mv ASEHAagent.sh /usr/share/cluster/
[root@rhel6-node2 ~]# ccs_update_schema
<define name="ASEHAAGENT">
<element name="ASEHAagent" rha:description="Sybase ASE Failover Instance">
<attribute name="ref" rha:description="Reference to existing ASEHAagent resource in the resources section."/>
<ref name="ASEHAAGENT"/>

7) check fas cache change detection

[root@rhel6-node2 ~]# mv /usr/sbin/fence_wti /root/
[root@rhel6-node2 ~]# ccs_update_schema -v
Generating resource-agents cache
ras: checking required files
ras: looking for agents
ras: generating hashes
ras: using local cache
Generating fence-agents cache
fas: checking required files
fas: looking for agents
fas: generating hashes
fas: generating new cache
fas: processing fence_ack_manual fas: processing fence_apc fas: processing fence_apc_snmp fas: processing fence_bladecenter fas: processing fence_bladecenter_snmp fas: processing fence_brocade fas: processing fence_cisco_mds fas: processing fence_cisco_ucs fas: processing fence_drac fas: processing fence_drac5 fas: processing fence_egenera fas: processing fence_eps fas: processing fence_ibmblade fas: processing fence_ifmib fas: processing fence_ilo fas: processing fence_ilo_mp fas: processing fence_intelmodular fas: processing fence_ipmilan fas: processing fence_rhevm fas: processing fence_rsa fas: processing fence_rsb fas: processing fence_sanbox2 fas: processing fence_scsi fas: processing fence_virsh fas: processing fence_virt fas: processing fence_vmware fas: processing fence_xvm
Building final relaxng schema
Installing schema in /var/lib/cluster
all done. have a nice day!
[root@rhel6-node2 ~]# grep wti /usr/share/cluster/cluster.rng
[root@rhel6-node2 ~]#

8) simulate install of a custom agent (depends on 7 being completed)

[root@rhel6-node2 ~]# mv fence_wti /usr/sbin/
[root@rhel6-node2 ~]# ccs_update_schema -v
Generating resource-agents cache
ras: checking required files
ras: looking for agents
ras: generating hashes
ras: using local cache
Generating fence-agents cache
fas: checking required files
fas: looking for agents
fas: generating hashes
fas: generating new cache
fas: processing fence_ack_manual fas: processing fence_apc fas: processing fence_apc_snmp fas: processing fence_bladecenter fas: processing fence_bladecenter_snmp fas: processing fence_brocade fas: processing fence_cisco_mds fas: processing fence_cisco_ucs fas: processing fence_drac fas: processing fence_drac5 fas: processing fence_egenera fas: processing fence_eps fas: processing fence_ibmblade fas: processing fence_ifmib fas: processing fence_ilo fas: processing fence_ilo_mp fas: processing fence_intelmodular fas: processing fence_ipmilan fas: processing fence_rhevm fas: processing fence_rsa fas: processing fence_rsb fas: processing fence_sanbox2 fas: processing fence_scsi fas: processing fence_virsh fas: processing fence_virt fas: processing fence_vmware fas: processing fence_wti fas: processing fence_xvm
Building final relaxng schema
Installing schema in /var/lib/cluster
all done. have a nice day!
[root@rhel6-node2 ~]# grep wti /usr/share/cluster/cluster.rng
<!-- fence_wti -->

9) forcefully destroy the cache

[root@rhel6-node2 ~]# rm -rf /var/lib/cluster/*
[root@rhel6-node2 ~]# ccs_config_validate
Configuration validates
[root@rhel6-node2 ~]# ls -als /var/lib/cluster/
total 316
4 drwxr-xr-x. 2 root root 4096 Jul 12 09:13 .
4 drwxr-xr-x. 25 root root 4096 Jul 12 08:30 ..
168 -rw-r--r-- 1 root root 168912 Jul 12 09:13 cluster.rng
76 -rw-r--r-- 1 root root 73735 Jul 12 09:13 fence_agents.rng.cache
4 -rw-r--r-- 1 root root 1968 Jul 12 09:13 fence_agents.rng.hash
56 -rw-r--r-- 1 root root 55987 Jul 12 09:13 resources.rng.cache
4 -rw-r--r-- 1 root root 2457 Jul 12 09:13 resources.rng.hash
[root@rhel6-node2 ~]#

cache and schema are regenerated and the config validates

10) verify that the man page matches the tool's options

ccs_update_schema.8:
[snip]
SYNOPSIS
ccs_update_schema [OPTION]..
[snip]
OPTIONS
-h Help. Print out the usage.
-V Print the version information.
-v Be verbose. Mostly for debugging purposes.
-f Ignore local stored cache and regenerate a fresh schema.

[root@rhel6-node2 ~]# ccs_update_schema -h
Usage: ccs_update_schema [options]
Options:
-h Print this help, then exit
-V Print program version information, then exit
-v Produce verbose output
-f Force schema regeneration and ignore cache
[root@rhel6-node2 ~]# ccs_update_schema -V
ccs_update_schema version 3.0.12.1

-v and -f have been verified above.

use an unknown option:

[root@rhel6-node2 ~]# ccs_update_schema -g
getopt: invalid option -- 'g'
Usage:
[snip]

11) verify the change to the ccs_config_validate -u option (harsh test, but effective ;))

from the ccs_config_validate man page:
Advanced options:
-u Do not update relaxng schema (see ccs_update_schema.8)

[root@rhel6-node2 ~]# rm -f /var/lib/cluster/*
[root@rhel6-node2 ~]# ccs_config_validate -u
Unable to verify a configuration without relaxng schema

12) verify ras stub entries

cman does not require resource-agents to run. It is not a strict dependency.
If resource-agents is not installed on the system, ccs_update_schema should continue to work without breaking the relaxng schema.

[root@rhel6-node2 ~]# rpm -e --nodeps resource-agents
(quick and dirty, but effective ;))
[root@rhel6-node2 ~]# ccs_update_schema -v
Generating resource-agents cache
ras: checking required files
ras: cannot find rng files. Creating stubs
Generating fence-agents cache
fas: checking required files
fas: looking for agents
fas: generating hashes
fas: using local cache
Building final relaxng schema
Installing schema in /var/lib/cluster
all done. have a nice day!
[root@rhel6-node2 ~]#

(assuming a config without <rm>)
[root@rhel6-node2 ~]# ccs_config_validate
Configuration validates
[root@rhel6-node2 ~]# yum install resource-agents

13) verify fas stub entries (optional test)

cman requires fence-agents in packaged format (rpm Requires: fence-agents). This test is entirely optional and verifies that fence stub entries are installed if fas is missing.

[root@rhel6-node2 ~]# rpm -e --nodeps fence-agents fence-virt
[root@rhel6-node2 ~]# ccs_update_schema -v
Generating resource-agents cache
[snip]
Generating fence-agents cache
fas: checking required files
fas: cannot find rng files. Creating stubs
Building final relaxng schema
Installing schema in /var/lib/cluster
all done. have a nice day!
[root@rhel6-node2 ~]#

-----

Runtime tests:

1) "ideal" start

[root@rhel6-node2 ~]# ccs_update_schema
(this step is unnecessary)
[root@rhel6-node2 ~]# /etc/init.d/cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Starting qdiskd... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
[root@rhel6-node2 ~]#

2) "raw" start

[root@rhel6-node2 ~]# rm -f /var/lib/cluster/*
[root@rhel6-node2 ~]# /etc/init.d/cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Starting qdiskd... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
[root@rhel6-node2 ~]#

3) simulate an error by removing the fence agent configured in cluster.conf, then try to reload the config at runtime (bump config_version)

[root@rhel6-node2 ~]# mv /usr/sbin/fence_xvm /usr/sbin/fence_virt /root/
[root@rhel6-node2 ~]# cman_tool version -S -r
Relax-NG validity error : Extra element fence in interleave
tempfile:4: element clusternodes: Relax-NG validity error : Element clusternode failed to validate content
tempfile:5: element clusternode: Relax-NG validity error : Element clusternodes has extra content: clusternode
Configuration fails to validate
cman_tool: Not reloading, configuration is not valid
[root@rhel6-node2 ~]#

4) install the agent and reload the config (depends on 3)

[root@rhel6-node2 ~]# mv fence_* /usr/sbin/
[root@rhel6-node2 ~]# cman_tool version -S -r
[root@rhel6-node2 ~]# cman_tool version
6.2.0 config 2
[root@rhel6-node2 ~]#

Patch verification from the official brew build https://brewweb.devel.redhat.com/taskinfo?taskID=3481017

+ echo 'Patch #11 (bz680930-1-ccs_add_dynamic_relaxng_schema_generation.patch):'
Patch #11 (bz680930-1-ccs_add_dynamic_relaxng_schema_generation.patch):
+ /bin/cat /builddir/build/SOURCES/bz680930-1-ccs_add_dynamic_relaxng_schema_generation.patch
+ /usr/bin/patch -s -p1 -b --suffix .bz680930-1-ccs_add_dynamic_relaxng_schema_generation --fuzz=0
+ echo 'Patch #12 (bz680930-2-ccs_relax_requirements_on_fas_and_ras.patch):'
Patch #12 (bz680930-2-ccs_relax_requirements_on_fas_and_ras.patch):
+ /bin/cat /builddir/build/SOURCES/bz680930-2-ccs_relax_requirements_on_fas_and_ras.patch
+ /usr/bin/patch -s -p1 -b --suffix .bz680930-2-ccs_relax_requirements_on_fas_and_ras --fuzz=0
+ echo 'Patch #13 (bz680930-3-ccs_special_case_service_and_vm_relaxng.patch):'
Patch #13 (bz680930-3-ccs_special_case_service_and_vm_relaxng.patch):
+ /bin/cat /builddir/build/SOURCES/bz680930-3-ccs_special_case_service_and_vm_relaxng.patch
+ /usr/bin/patch -s -p1 -b --suffix .bz680930-3-ccs_special_case_service_and_vm_relaxng --fuzz=0

Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
The introduction of dynamic schema generation provides a lot of flexibility for end users to plug into RHEL HA Add-on custom resource and fence agents, and still retain the possibility to validate their cluster.conf against those agents. It is a strict requirement that custom agents will provide correct metadata output and the agents must be installed on all cluster nodes.

Setting back to ASSIGNED: I found some other issues while debugging bug #733424. Specifically, the temp directory used to generate the schema is never removed from the system, filling up /tmp. The unit tests described in comment#10 are unchanged.
http://git.fedorahosted.org/git/?p=cluster.git;a=commitdiff;h=04d7eb349f6c0625748a47b4749562cf945c78d0

Extra unit test results from this patch change:

pre-patch:
1) make sure no tmp* files are in /tmp
2) ccs_update_schema -f (force it for fun and profit)
3) ls -las /tmp/tmp.* will show a temp directory containing relaxng info

[root@clusternet-node2 tmp]# ls
ks-script-4jT855 ks-script-4jT855.log yum.log
[root@clusternet-node2 tmp]# ccs_update_schema -f
[root@clusternet-node2 tmp]# ls
ks-script-4jT855 ks-script-4jT855.log tmp.2UrSnlgzJ8 tmp.WVk5iG2PGR yum.log

post-patch:
1) make sure no tmp* files are in /tmp
2) ccs_update_schema -f (force it for fun and profit)
3) ls -las /tmp/tmp.* is now clean

[root@clusternet-node2 tmp]# ls
ks-script-4jT855 ks-script-4jT855.log yum.log
[root@clusternet-node2 tmp]# ccs_update_schema -f
[root@clusternet-node2 tmp]# ls
ks-script-4jT855 ks-script-4jT855.log yum.log

Technical note updated. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

Diffed Contents:
@@ -1 +1 @@
-The introduction of dynamic schema generation provides a lot of flexibility for end users to plug into RHEL HA Add-on custom resource and fence agents, and still retain the possibility to validate their cluster.conf against those agents. It is a strict requirement that custom agents will provide correct metadata output and the agents must be installed on all cluster nodes.
+The introduction of dynamic schema generation provides a lot of flexibility for end users to plug into Red Hat Enterprise Linux High Availability Add-on custom resource and fence agents, and still retain the possibility to validate their /etc/cluster.conf configuration file against those agents. It is a strict requirement that custom agents provide correct metadata output and that the agents must be installed on all cluster nodes.
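The /tmp cleanup verified in the pre/post-patch test above comes down to tying the scratch directory's lifetime to the process, typically with a trap on exit. A hedged sketch follows; with_scratch_dir is a made-up helper name, and the actual fix lives in the commit linked above.

```shell
# Run a command with a throwaway scratch directory that is guaranteed to
# be cleaned up, even on error -- so repeated runs cannot fill up /tmp.
with_scratch_dir() {
    (
        tmpdir=$(mktemp -d) || exit 1
        # remove the directory no matter how the subshell exits
        trap 'rm -rf "$tmpdir"' EXIT INT TERM
        "$@" "$tmpdir"    # the command receives the scratch path as its last argument
    )
}
```

Running the generation step through a wrapper like this would leave /tmp in the same clean state the post-patch listing shows.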
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1516.html