Bug 1300014 - validation failure in pcs2pcscmd due to newer schema of the CIB
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: clufter
Version: 6.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jan Pokorný
QA Contact: cluster-qe@redhat.com
Depends On:
Blocks: 1264795 1269964 1343661
Reported: 2016-01-19 13:35 EST by Miroslav Lisik
Modified: 2016-10-11 15:36 EDT (History)
2 users

See Also:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
With the {cib,pcs}2pcscmd* commands, clufter no longer fails (unless --nocheck is provided) when the source CIB file declares a newer "validate-with" schema version than the only one supported so far (pacemaker-1.2.rng), or uses a syntax not compatible with it; schema versions 2.0, 2.3, 2.4, and 2.5 are now supported as well.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-03-16 11:43:54 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
configuration file cluster.conf for CMAN+pacemaker cluster stack (834 bytes, text/plain)
2016-01-19 13:37 EST, Miroslav Lisik

Description Miroslav Lisik 2016-01-19 13:35:08 EST
Description of problem:
After executing the `clufter pcs2pcscmd` command, CIB validation fails and clufter produces warnings, exiting with error status 1.


Version-Release number of selected component (if applicable):
clufter-cli-0.55.0-3.el6.noarch
python-clufter-0.55.0-3.el6.x86_64
pacemaker-1.1.14-0.4_rc5.el6.x86_64

How reproducible:
always


Steps to Reproduce:
1. Create testing configuration files: cluster.conf and cib.xml (see the attachment)
2. Run clufter command:

[root@virt-176 ~]# clufter pcs2pcscmd --ccs="cluster.conf" --cib="cib.xml"
[ccspcmk2pcscmd     ] xslt: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing `pcs cluster enable --all`
WARNING:clufter.format:Invalid as per RNG file `/usr/lib/python2.6/site-packages/clufter/formats/cib/pacemaker-1.2.rng'
WARNING:clufter.format:None of the validation attempts succeeded with validator spec `('/usr/lib/python2.6/site-packages/clufter/formats/cib/pacemaker-1.2.rng',)' 
cib: Validation: 1:0:Element cib failed to validate content

Actual results:
Clufter exited with error exit status (1), and produced warning messages instead of a list of pcs commands.

Expected results:
Clufter exited with exit status 0 and produced a list of pcs commands.


Additional info:
Comment 1 Miroslav Lisik 2016-01-19 13:37 EST
Created attachment 1116313 [details]
configuration file cluster.conf for CMAN+pacemaker cluster stack
Comment 2 Miroslav Lisik 2016-01-19 13:38 EST
Created attachment 1116314 [details]
cib.xml
Comment 3 Jan Pokorný 2016-01-21 11:51:23 EST
This is an issue with clufter carrying just a single schema, which is only
compatible up to/at the pacemaker-1.2 version declared in the "validate-with"
attribute of the top-level cib tag.

I have to check how to deal with this (perhaps higher-bound constrained)
forward compatibility.
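The gist of the fix can be sketched as follows: read the "validate-with"
attribute from the CIB's top-level tag and match it against a set of shipped
schemas rather than a single hard-coded one. This is only an illustrative
sketch (the function and schema list here are hypothetical, not clufter's
actual API):

```python
import xml.etree.ElementTree as ET

# Hypothetical list of schema versions shipped alongside the tool,
# mirroring the versions mentioned in the fix (1.2, 2.0, 2.3, 2.4, 2.5).
SUPPORTED_SCHEMAS = ["pacemaker-1.2", "pacemaker-2.0", "pacemaker-2.3",
                     "pacemaker-2.4", "pacemaker-2.5"]

def pick_schema(cib_xml):
    """Return the schema version the CIB declares via its top-level
    "validate-with" attribute, or None if it is not among the supported
    ones (in which case validation would have to be skipped, as with
    --nocheck)."""
    root = ET.fromstring(cib_xml)
    declared = root.get("validate-with")
    return declared if declared in SUPPORTED_SCHEMAS else None

cib = '<cib validate-with="pacemaker-2.4" epoch="1" num_updates="0"/>'
print(pick_schema(cib))  # pacemaker-2.4
```

With only pacemaker-1.2.rng available, any CIB declaring a newer schema would
fall through to the "unsupported" case, which matches the failure reported
here.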
Comment 4 Jan Pokorný 2016-02-09 13:20:04 EST
A rather simplified approach (for now) is now present in the "next" branch.

Proper upstream release + rebase to follow.
Comment 5 Jan Pokorný 2016-02-09 13:43:24 EST
Note that previously, one could use a workaround like this (explicitly
skipping the validation):

  clufter pcs2pcscmd --nocheck --ccs="cluster.conf" --cib="cib.xml"
                     ^^^^^^^^^
Comment 7 Jan Pokorný 2016-03-16 11:43:54 EDT
This is already fixed in clufter-0.56.1-1.el6.
Comment 8 Miroslav Lisik 2016-03-16 11:59:54 EDT
Tested with version clufter-0.56.1-1.el6 and the attached files.

[root@virt-010 clufter-1116313]# clufter pcs2pcscmd -i cib.xml -c cluster.conf -s -g
[ccspcmk2pcscmd     ] xslt: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing `pcs cluster enable --all`
[cib2pcscmd         ] xslt: WARNING: dropping non-whitelisted cluster property: `dc-version`
[cib2pcscmd         ] xslt: WARNING: dropping non-whitelisted cluster property: `cluster-infrastructure`
[cib2pcscmd         ] xslt: WARNING: dropping non-whitelisted cluster property: `last-lrm-refresh`
pcs cluster auth virt-176 virt-177 virt-178
pcs cluster setup --start --name STSRHTS25475 \
  virt-176 virt-177 virt-178 --token 3000
sleep 60
pcs cluster cib tmp-cib.xml --config
pcs -f tmp-cib.xml stonith create fence-virt-176 fence_xvm \
  action=reboot debug=1 pcmk_host_check=static-list \
  pcmk_host_list=virt-176 \
  pcmk_host_map=virt-176:virt-176.cluster-qe.lab.eng.brq.redhat.com \
  op monitor id=fence-virt-176-monitor-interval-60s interval=60s \
  name=monitor
pcs -f tmp-cib.xml stonith create fence-virt-177 fence_xvm \
  action=reboot debug=1 pcmk_host_check=static-list \
  pcmk_host_list=virt-177 \
  pcmk_host_map=virt-177:virt-177.cluster-qe.lab.eng.brq.redhat.com \
  op monitor id=fence-virt-177-monitor-interval-60s interval=60s \
  name=monitor
pcs -f tmp-cib.xml stonith create fence-virt-178 fence_xvm \
  action=reboot debug=1 pcmk_host_check=static-list \
  pcmk_host_list=virt-178 \
  pcmk_host_map=virt-178:virt-178.cluster-qe.lab.eng.brq.redhat.com \
  op monitor id=fence-virt-178-monitor-interval-60s interval=60s \
  name=monitor
pcs -f tmp-cib.xml resource create ip ocf:heartbeat:IPaddr2 \
  ip=10.34.70.74 cidr_netmask=23 \
  op start interval=0s timeout=20s stop interval=0s timeout=20s \
  monitor interval=10s timeout=20s
pcs -f tmp-cib.xml resource create apache ocf:heartbeat:apache \
  op start interval=0s timeout=40s stop interval=0s timeout=60s monitor \
  interval=10 timeout=20s
pcs -f tmp-cib.xml resource group add webserver ip apache
pcs cluster cib-push tmp-cib.xml --config
[cmd-wrap           ] output: <stdout>
[root@virt-010 clufter-1116313]# echo $?
0

> List of pcs commands is produced and exit status is 0 as expected.
Comment 9 Jan Pokorný 2016-10-10 09:11:44 EDT
Note this is addressed in RHEL 7.3 through rebase ([bug 1343661]).
Comment 10 Jan Pokorný 2016-10-10 09:37:49 EDT
Note this is, even earlier, addressed in RHEL 6.8 through rebase
([bug 1269964]).
