Bug 869735 - CMAN: "Relax-NG validity error" on valid cluster.conf file
Summary: CMAN: "Relax-NG validity error" on valid cluster.conf file
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: cluster
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Fabio Massimo Di Nitto
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-10-24 16:48 UTC by Jonathan Earl Brassow
Modified: 2012-12-07 10:11 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-25 06:26:24 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
  System: Red Hat Bugzilla
  ID: 871603
  Private: no
  Priority: low
  Status: CLOSED
  Summary: ccs_tool: wrong parameter(s) to fence device in "ccs_tool create" help
  Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 871603

Description Jonathan Earl Brassow 2012-10-24 16:48:29 UTC
The cluster still works, but the error messages are really annoying...

Steps to reproduce:
1) Create a cluster with a fencing device.  In fact, use the EXACT example given when you issue a 'ccs_tool create' command with no cluster name, as follows:
[root@bp-01 ~]# ccs_tool create
Usage: ccs_tool create [-2] <clustername>
<snip/>
eg:
  ccs_tool create MyCluster
  ccs_tool addfence apc fence_apc ipaddr=apc.domain.net user=apc password=apc
  ccs_tool addnode node1 -n 1 -f apc port=1
  ccs_tool addnode node2 -n 2 -f apc port=2
  ccs_tool addnode node3 -n 3 -f apc port=3
  ccs_tool addnode node4 -n 4 -f apc port=4
<snip/>

2) Then attempt to start the cluster:
[root@bp-01 ~]# service cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman... Relax-NG validity error : Extra element fencedevices in interleave
tempfile:33: element fencedevices: Relax-NG validity error : Element cluster failed to validate content
Configuration fails to validate
                                                           [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Starting gfs_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
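
For reference, the cluster.conf that the ccs_tool commands above generate should look roughly like this sketch (node names, method name, and config_version are illustrative); the user= and password= attributes on the fencedevice element turn out to be the cause of the validation error (see comment 4):

<?xml version="1.0"?>
<cluster name="MyCluster" config_version="1">
  <clusternodes>
    <clusternode name="node1" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <!-- references the fence device defined below -->
          <device name="apc" port="1"/>
        </method>
      </fence>
    </clusternode>
    <!-- node2 through node4 are analogous -->
  </clusternodes>
  <fencedevices>
    <!-- user= and password= are not valid fence_apc attributes in the schema -->
    <fencedevice name="apc" agent="fence_apc" ipaddr="apc.domain.net" user="apc" password="apc"/>
  </fencedevices>
</cluster>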

Comment 1 Fabio Massimo Di Nitto 2012-10-24 16:50:51 UTC
Can you please attach the generated cluster.conf so we can identify what goes wrong?

Comment 4 Fabio Massimo Di Nitto 2012-10-25 06:26:24 UTC
  ccs_tool addfence apc fence_apc ipaddr=apc.domain.net user=apc password=apc

is incorrect.

fence_apc does not recognize user/password as options. They should be login/passwd and the configuration will validate.
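
In other words, the fix is just renaming those two attributes on the fencedevice element (a sketch, with the other attributes left as generated):

<!-- rejected by the schema: user=/password= are not recognized fence_apc options -->
<fencedevice name="apc" agent="fence_apc" ipaddr="apc.domain.net" user="apc" password="apc"/>

<!-- validates: login=/passwd= are the option names fence_apc expects -->
<fencedevice name="apc" agent="fence_apc" ipaddr="apc.domain.net" login="apc" passwd="apc"/>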

I left an example on bp-01:/root/cluster.conf.test:

[root@bp-01 ~]# ccs_config_validate -l cluster.conf.test 
Configuration validates
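
To pick up a corrected file on a node, something along these lines should work (paths as used in this bug; adjust to your setup):

[root@bp-01 ~]# cp /root/cluster.conf.test /etc/cluster/cluster.conf
[root@bp-01 ~]# ccs_config_validate
Configuration validates
[root@bp-01 ~]# service cman restart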

