Bug 156865
Summary: | another case where the GUI doesn't validate its own output | ||
---|---|---|---|
Product: | [Retired] Red Hat Cluster Suite | Reporter: | Corey Marthaler <cmarthal> |
Component: | redhat-config-cluster | Assignee: | Jim Parsons <jparsons> |
Status: | CLOSED NEXTRELEASE | QA Contact: | Cluster QE <mspqa-list> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 4 | CC: | cluster-maint |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2005-06-13 22:10:37 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Corey Marthaler
2005-05-04 19:25:30 UTC
Fixed in 0.9.49

jbrassow found another case where there's a validity-checking error in versions -57 and -58:

```
/etc/cluster/cluster.conf:6: element fence: Relax-NG validity error : Element clusternode has extra content: fence
/etc/cluster/cluster.conf:5: element clusternode: Relax-NG validity error : Element clusternodes has extra content: clusternode
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Expecting an element gulm, got nothing
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Element cluster failed to validate content
/etc/cluster/cluster.conf:8: element device: validity error : IDREF attribute name references an unknown ID "APC"
/etc/cluster/cluster.conf fails to validate
```

File in question:

```xml
<?xml version="1.0" ?>
<cluster config_version="2" name="alpha_cluster">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="tng1-1" votes="1">
      <fence>
        <method name="1">
          <device name="APC" port="0" switch="0"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="tng1-2" votes="1"/>
    <clusternode name="tng1-3" votes="1"/>
  </clusternodes>
  <cman/>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="tng1-apc" login="apc" name="APC" passwd="aqpc"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
```

Fixed in 0.9.60; fix verified in -60.
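The "IDREF attribute name references an unknown ID" error above is the Relax-NG validator's cross-reference check: every `<device name="...">` under a node's `<fence>` must match a defined `<fencedevice name="...">`, and the match is case-sensitive. A minimal stdlib sketch of that check, using a toy config (hypothetical names, not taken from the report) with a deliberate `APC`/`apc` case mismatch; the real validation is of course done against the Relax-NG schema, not by hand:

```python
import xml.etree.ElementTree as ET

# Toy config, illustrative only: references "APC" but defines "apc".
CONF = """<?xml version="1.0"?>
<cluster config_version="1" name="toy_cluster">
  <clusternodes>
    <clusternode name="node-1" votes="1">
      <fence>
        <method name="1">
          <device name="APC" port="0" switch="0"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="toy-apc" login="apc" name="apc" passwd="apc"/>
  </fencedevices>
</cluster>"""

def unknown_device_refs(conf_xml):
    """Return device names referenced under <fence> that have no
    matching <fencedevice name="..."> definition (case-sensitive)."""
    root = ET.fromstring(conf_xml)
    defined = {fd.get("name") for fd in root.iter("fencedevice")}
    referenced = {d.get("name") for d in root.iter("device")}
    return sorted(referenced - defined)

print(unknown_device_refs(CONF))  # ['APC']
```

Running the same check over a config where the names agree exactly returns an empty list.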
Another case in -62:

```
/etc/cluster/cluster.conf:5: element fence: Relax-NG validity error : Element clusternode has extra content: fence
/etc/cluster/cluster.conf:4: element clusternode: Relax-NG validity error : Element clusternodes has extra content: clusternode
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Expecting an element gulm, got nothing
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Invalid sequence in interleave
/etc/cluster/cluster.conf:2: element cluster: Relax-NG validity error : Element cluster failed to validate content
/etc/cluster/cluster.conf:7: element device: validity error : IDREF attribute name references an unknown ID "apc"
/etc/cluster/cluster.conf fails to validate
```

```
[root@morph-04 tmp]# cat /etc/cluster/cluster.conf
```

```xml
<?xml version="1.0"?>
<cluster config_version="12" name="morph-GULM-cluster">
  <clusternodes>
    <clusternode name="morph-04.lab.msp.redhat.com" votes="1">
      <fence>
        <method name="single">
          <device name="apc" port="4" switch="1"/>
        </method>
      </fence>
      <multicast addr="88.88.888.888" interface="eth0"/>
    </clusternode>
    <clusternode name="morph-05.lab.msp.redhat.com" votes="1">
      <fence>
        <method name="single">
          <device name="apc" port="5" switch="1"/>
        </method>
      </fence>
      <multicast addr="88.88.888.888" interface="eth0"/>
    </clusternode>
    <clusternode name="jmhmhjm" votes="1">
      <multicast addr="88.88.888.888" interface="eth0"/>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="morph-apc" login="apc" name="apc" passwd="apc"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="sdjfhsdklfjsd" ordered="0" restricted="0">
        <failoverdomainnode name="morph-04.lab.msp.redhat.com" priority="1"/>
        <failoverdomainnode name="morph-05.lab.msp.redhat.com" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <clusterfs device="dfgdfgdfgdf" fstype="gfs" mountpoint="gdfg" name="dfgd" options="dfgdfg"/>
    </resources>
    <service domain="sdjfhsdklfjsd" exclusive="1" name="dkjfslkserviceskdfl">
      <clusterfs device="vbcbvc" fstype="gfs" mountpoint="cvbc" name="cdbc" options="vbcv">
        <clusterfs device="bcvbc" fstype="gfs" mountpoint="cvbcv" name="cvbcb" options="vb"/>
      </clusterfs>
    </service>
    <service domain="sdjfhsdklfjsd" exclusive="1" name="xcvxc">
      <clusterfs device="vxcvx" fstype="gfs" mountpoint="xcvxc" name="xcv" options="cv"/>
    </service>
    <service domain="sdjfhsdklfjsd" name="SERVICE">
      <clusterfs device="fgdfgdfgdfgdfg" fstype="gfs" mountpoint="dfgd" name="gffgdfg" options="dfgd"/>
    </service>
  </rm>
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <cman>
    <multicast addr="88.88.888.888"/>
  </cman>
</cluster>
```

Fixed in 0.9.64

Another case:

```
Relax-NG validity error : Extra element rm in interleave
/etc/cluster/cluster.conf:34: element rm: Relax-NG validity error : Element cluster failed to validate content
/etc/cluster/cluster.conf fails to validate
```

Config file:

```
[root@morph-01 tmp]# cat /etc/cluster/cluster.conf
```

```xml
<?xml version="1.0" ?>
<cluster config_version="6" name="alpha_cluster">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="ngnghn" votes="1">
      <fence>
        <method name="1"/>
        <method name="2"/>
        <method name="3"/>
        <method name="4"/>
        <method name="5"/>
        <method name="6">
          <device name="ghngh" port="ryj" switch="ryujryuj"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="ghngng" votes="1"/>
    <clusternode name="gnghng" votes="1">
      <fence>
        <method name="1"/>
        <method name="2"/>
        <method name="3">
          <device name="ghngh" port="ryujryu" switch="yur"/>
        </method>
        <method name="4"/>
      </fence>
    </clusternode>
  </clusternodes>
  <cman/>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="gnhng" login="nghngh" name="ghngh" passwd="ngh"/>
    <fencedevice agent="fence_apc" ipaddr="___" login="___" name="___" passwd="___"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="nghnghn" ordered="0" restricted="0"/>
      <failoverdomain name="m,fhj" ordered="0" restricted="0">
        <failoverdomainnode name="ngnghn" priority="1"/>
        <failoverdomainnode name="ghngng" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <clusterfs device="hnghngh" fstype="gfs" mountpoint="nghng" name="ghngh" options="nghn"/>
    </resources>
    <service name="nghnghng"/>
    <service domain="m,fhj" name="fjkryujrk">
      <clusterfs ref="ghngh"/>
      <clusterfs device="jryujr" fstype="gfs" mountpoint="yujryu" name="yujr" options="r">
        <nfsexport name="ryujrryurjy">
          <clusterfs device="ujryujryuj" fstype="gfs" mountpoint="yujry" name="ryjur" options=""/>
        </nfsexport>
      </clusterfs>
      <clusterfs ref="ghngh"/>
    </service>
  </rm>
</cluster>
```

Fixed in 0.9.70-1.0; fix verified.
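The recurring "Expecting an element gulm, got nothing" error in the logs above reflects the schema requiring a lock-manager element (`<cman>` or `<gulm>`) directly under `<cluster>`; once other structural errors derail the interleave, the validator reports it as missing. A small stdlib sketch of just that presence check, on toy configs (illustrative only; real validation is done against the Relax-NG schema):

```python
import xml.etree.ElementTree as ET

def lock_manager(conf_xml):
    """Return the lock-manager child of <cluster> ('cman' or 'gulm'),
    or None if neither is present."""
    root = ET.fromstring(conf_xml)
    for tag in ("cman", "gulm"):
        if root.find(tag) is not None:
            return tag
    return None

# Hypothetical minimal configs, not taken from the report:
with_cman = '<cluster config_version="1" name="c"><cman/></cluster>'
without   = '<cluster config_version="1" name="c"/>'

print(lock_manager(with_cman))  # cman
print(lock_manager(without))    # None
```

A config that fails this check would produce exactly the "Expecting an element gulm" class of error when validated, assuming the schema's choice between the two lock managers works as the messages suggest.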