Bug 1264795
| Summary: | better integration with standalone (unbundled) clufter package for cluster configuration conversion | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Miroslav Lisik <mlisik> |
| Component: | pcs | Assignee: | Tomas Jelinek <tojeline> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | medium | Docs Contact: | Milan Navratil <mnavrati> |
| Priority: | high | | |
| Version: | 6.8 | CC: | cluster-maint, cluster-qe, djansa, fdinitto, idevat, jharriga, jpokorny, jruemker, mnavrati, rbinkhor, rsteiger, snagar, tojeline |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pcs-0.9.148-3.el6 | Doc Type: | Release Note |
| Doc Text: | *pcs* now supports exporting a cluster configuration to a list of "pcs" commands. With this update, the "pcs config export" command can be used to export a cluster configuration to a list of "pcs" commands. Also, the "pcs config import-cman" command, which converts a CMAN cluster configuration to a Pacemaker cluster configuration, can now output a list of "pcs" commands that can be used to create the Pacemaker cluster configuration file. As a result, the user can determine what commands can be used to set up a cluster based on its configuration files. | | |
| Story Points: | --- | | |
| Clone Of: | 1212904 | Environment: | |
| Last Closed: | 2016-05-10 19:26:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1133897, 1212904, 1212909, 1269964, 1300014 | | |
| Bug Blocks: | 596327 | | |
| Attachments: | | | |
Description
Miroslav Lisik
2015-09-21 08:53:02 UTC
Items 2 and 3 from the checklist did not make it to 7.2.

Created attachment 1082443 [details]
proposed fix
The 'pcs config import-cman' command now supports two new output formats, pcs-commands and pcs-commands-verbose, which produce a list of pcs commands and save it to a specified file.

A new command, 'pcs config export', has also been implemented; it produces a list of pcs commands from the currently running cluster configuration. Both brief and verbose modes are available.

Note that the actual conversion of configuration files to pcs commands is performed by clufter; pcs merely calls clufter, passing arguments to and from it.
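The two invocation forms described above can be sketched as follows. This is a hedged illustration, not part of the fix itself: the output file names (export.cmds, converted.cmds) and the cluster.conf path are illustrative, and the block only invokes pcs when it is actually installed.

```shell
# Illustrative sketch of the two new invocation forms.
# File names below are examples, not fixed defaults.
if command -v pcs >/dev/null 2>&1; then
    # Dump the currently running cluster's configuration as pcs commands:
    pcs config export pcs-commands output=export.cmds
    # Convert a CMAN cluster.conf into the equivalent pcs commands:
    pcs config import-cman input=/etc/cluster/cluster.conf \
        output=converted.cmds output-format=pcs-commands
else
    echo "pcs not available: invocation forms shown for illustration only"
fi
```

Either way the resulting file is a plain list of pcs commands (generated by clufter under the hood) that can be reviewed and replayed to recreate the configuration.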
Before Fix:

[root@rh67-node1 ~]# rpm -q pcs
pcs-0.9.139-9.el6.x86_64

pcs does not have the ability to export a cluster configuration to a list of pcs commands.

After Fix:

[root@rh67-node1:~]# rpm -q pcs
pcs-0.9.145-1.el6.x86_64

[root@rh67-node1:~]# pcs config
Cluster Name: cluster67
Corosync Nodes:
 rh67-node1 rh67-node2
Pacemaker Nodes:
 rh67-node1 rh67-node2
Resources:
 Resource: dummy (class=ocf provider=heartbeat type=Dummy)
  Operations: start interval=0s timeout=20 (dummy-start-interval-0s)
              stop interval=0s timeout=20 (dummy-stop-interval-0s)
              monitor interval=10 timeout=20 (dummy-monitor-interval-10)
Stonith Devices:
 Resource: xvmNode1 (class=stonith type=fence_xvm)
  Attributes: port=rh67-node1 pcmk_host_list=rh67-node1
  Operations: monitor interval=60s (xvmNode1-monitor-interval-60s)
 Resource: xvmNode2 (class=stonith type=fence_xvm)
  Attributes: port=rh67-node2 pcmk_host_list=rh67-node2
  Operations: monitor interval=60s (xvmNode2-monitor-interval-60s)
Fencing Levels:
Location Constraints:
  Resource: dummy
    Enabled on: rh67-node1 (score:100) (id:location-dummy-rh67-node1-100)
Ordering Constraints:
Colocation Constraints:
Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set
Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.11-97629de

[root@rh67-node1:~]# pcs config export pcs-commands output=export
[ccspcmk2pcscmd ] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all

[root@rh67-node1:~]# cat export
pcs cluster auth rh67-node1 rh67-node2
pcs cluster setup --start --name cluster67 rh67-node1 rh67-node2 --transport udp
sleep 60
pcs cluster cib tmp-cib.xml --config
pcs -f tmp-cib.xml property set 'dc-version=1.1.11-97629de'
pcs -f tmp-cib.xml property set 'cluster-infrastructure=cman'
pcs -f tmp-cib.xml stonith create xvmNode1 fence_xvm 'port=rh67-node1' 'pcmk_host_list=rh67-node1' op monitor 'id=xvmNode1-monitor-interval-60s' 'interval=60s' 'name=monitor'
pcs -f tmp-cib.xml stonith create xvmNode2 fence_xvm 'port=rh67-node2' 'pcmk_host_list=rh67-node2' op monitor 'id=xvmNode2-monitor-interval-60s' 'interval=60s' 'name=monitor'
pcs -f tmp-cib.xml resource create dummy ocf:heartbeat:Dummy op start 'id=dummy-start-interval-0s' 'interval=0s' 'name=start' 'timeout=20' stop 'id=dummy-stop-interval-0s' 'interval=0s' 'name=stop' 'timeout=20' monitor 'id=dummy-monitor-interval-10' 'interval=10' 'name=monitor' 'timeout=20'
pcs -f tmp-cib.xml constraint location dummy prefers rh67-node1=100
pcs cluster cib-push tmp-cib.xml --config

[root@rh67-node1:~]# pcs config export pcs-commands-verbose output=export-verbose
[ccspcmk2pcscmd ] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all

[root@rh67-node1:~]# cat export-verbose
echo ':: auth cluster: cluster67'
pcs cluster auth rh67-node1 rh67-node2
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: check cluster includes local machine: cluster67'
for l in $(comm -12 <(python -m json.tool /var/lib/pcsd/pcs_users.conf | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort) <(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort)) @SENTINEL@; do
  grep -Eq "$(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"\([^\"]\+\)\":\s*\"${l}\".*|\1|1p")" - <<<"
rh67-node1
rh67-node2" && break
  false
done || {
  echo "WARNING: cluster being created ought to include this very local machine"
  read -p "Do you want to continue [yN] (60s timeout): " -t 60 || :
  test "${REPLY}" = "y" || kill -INT $$
}
:
echo ':: new cluster: cluster67'
pcs cluster setup --start --name cluster67 rh67-node1 rh67-node2 --transport udp
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: waiting for cluster to come up: cluster67 seconds'
sleep 60
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: get initial/working CIB: tmp-cib.xml'
pcs cluster cib tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new singleton property set: dc-version'
pcs -f tmp-cib.xml property set 'dc-version=1.1.11-97629de'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new singleton property set: cluster-infrastructure'
pcs -f tmp-cib.xml property set 'cluster-infrastructure=cman'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new stonith: xvmNode1'
pcs -f tmp-cib.xml stonith create xvmNode1 fence_xvm 'port=rh67-node1' 'pcmk_host_list=rh67-node1' op monitor 'id=xvmNode1-monitor-interval-60s' 'interval=60s' 'name=monitor'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new stonith: xvmNode2'
pcs -f tmp-cib.xml stonith create xvmNode2 fence_xvm 'port=rh67-node2' 'pcmk_host_list=rh67-node2' op monitor 'id=xvmNode2-monitor-interval-60s' 'interval=60s' 'name=monitor'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new resource: dummy'
pcs -f tmp-cib.xml resource create dummy ocf:heartbeat:Dummy op start 'id=dummy-start-interval-0s' 'interval=0s' 'name=start' 'timeout=20' stop 'id=dummy-stop-interval-0s' 'interval=0s' 'name=stop' 'timeout=20' monitor 'id=dummy-monitor-interval-10' 'interval=10' 'name=monitor' 'timeout=20'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
pcs -f tmp-cib.xml constraint location dummy prefers rh67-node1=100
echo ':: push CIB: tmp-cib.xml'
pcs cluster cib-push tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:

[root@rh67-node1:~]# cat /root/devel/cluster.conf
<?xml version="1.0"?>
<cluster name="test" config_version="1">
  <clusternodes>
    <clusternode nodeid="1" name="node1" />
    <clusternode nodeid="2" name="node2" />
  </clusternodes>
  <cman two_node="1" expected_votes="2"/>
  <totem consensus="200" join="100" token="5000" token_retransmits_before_loss_const="4">
    <interface ttl="3"/>
  </totem>
  <logging>
    <logging_daemon debug="on" name="corosync" subsys="CONFDB"/>
  </logging>
  <fencedevices>
    <fencedevice name="foo" passwd="mysecret" testarg="testarg"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>

[root@rh67-node1:~]# pcs config import-cman output=converted input=/root/devel/cluster.conf output-format=pcs-commands
[ccspcmk2pcscmd] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all
[cibcompact2cib] XSLT: NOTE: no fencing is configured hence stonith is disabled; please note, however, that this is suboptimal, especially in shared storage scenarios

[root@rh67-node1:~]# cat converted
pcs cluster auth node1 node2
pcs cluster setup --start --name test node1 node2 --consensus 200 --join 100 \
  --token 5000
sleep 60
pcs cluster cib tmp-cib.xml --config
pcs -f tmp-cib.xml property set stonith-enabled false
pcs cluster cib-push tmp-cib.xml --config

[root@rh67-node1:~]# pcs config import-cman output=converted-verbose input=/root/devel/cluster.conf output-format=pcs-commands-verbose
[ccspcmk2pcscmd] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all
[cibcompact2cib] XSLT: NOTE: no fencing is configured hence stonith is disabled; please note, however, that this is suboptimal, especially in shared storage scenarios

[root@rh67-node1:~]# cat converted-verbose
echo ':: auth cluster: test'
pcs cluster auth node1 node2
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: check cluster includes local machine: test'
for l in $(comm -12 <(python -m json.tool /var/lib/pcsd/pcs_users.conf | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort) <(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort)) @SENTINEL@; do
  grep -Eq "$(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"\([^\"]\+\)\":\s*\"${l}\".*|\1|1p")" - <<<"
node1
node2" && break
  false
done || {
  echo "WARNING: cluster being created ought to include this very local machine"
  read -p "Do you want to continue [yN] (60s timeout): " -t 60 || :
  test "${REPLY}" = "y" || kill -INT $$
}
:
echo ':: new cluster: test'
pcs cluster setup --start --name test node1 node2 --consensus 200 --join 100 --token 5000
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: waiting for cluster to come up: test seconds'
sleep 60
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: get initial/working CIB: tmp-cib.xml'
pcs cluster cib tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new singleton property set: stonith-enabled'
pcs -f tmp-cib.xml property set stonith-enabled false
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: push CIB: tmp-cib.xml'
pcs cluster cib-push tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:

Note that the actual conversion of configuration files to pcs commands is performed by clufter; pcs merely calls clufter, passing arguments to and from it.

The dependency on python-clufter should have a lower bound when relying on its *2pcscmd* commands:

- Requires: python-clufter
+ Requires: python-clufter >= 0.55.0

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2016-0739.html