Bug 1264795 - better integration with standalone (unbundled) clufter package for cluster configuration conversion
Summary: better integration with standalone (unbundled) clufter package for cluster co...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: pcs
Version: 6.8
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
Docs Contact: Milan Navratil
URL:
Whiteboard:
Depends On: 1133897 1212904 1212909 1269964 1300014
Blocks: 596327
 
Reported: 2015-09-21 08:53 UTC by Miroslav Lisik
Modified: 2016-06-07 15:38 UTC (History)
CC List: 13 users

Fixed In Version: pcs-0.9.148-3.el6
Doc Type: Release Note
Doc Text:
*pcs* now supports exporting a cluster configuration to a list of "pcs" commands. With this update, the "pcs config export" command can be used to export a cluster configuration to a list of "pcs" commands. Also, the "pcs config import-cman" command, which converts a CMAN cluster configuration to a Pacemaker cluster configuration, can now output a list of "pcs" commands that can be used to create the Pacemaker cluster configuration file. As a result, the user can determine what commands can be used to set up a cluster based on its configuration files.
Clone Of: 1212904
Environment:
Last Closed: 2016-05-10 19:26:56 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed fix (12.38 KB, patch)
2015-10-13 13:43 UTC, Tomas Jelinek
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0739 0 normal SHIPPED_LIVE pcs bug fix update 2016-05-10 22:29:32 UTC

Description Miroslav Lisik 2015-09-21 08:53:02 UTC
+++ This bug was initially created as a clone of Bug #1212904 +++

+++ This bug was initially created as a clone of Bug #1133897 +++

In the mentioned 7.1 bug, the aim was to provide an initial/TechPreview
feature allowing users of the old (non-pcs-managed) cluster stack
to migrate to the new (pcs-friendly) one while preserving the original
configuration as completely as possible during the conversion.
This functionality is provided by the clufter project, whose code base
was, for reasons of time pressure, bundled as a subpackage of pcs.

As clufter is usable on its own (via the "clufter" command it provides)
and its relationship to pcs is not tight enough to justify the burden of
co-maintaining a single SRPM, an unbundling/split into a separate
component is planned (as already happened in the RHEL 6.7 timeframe,
where a standalone clufter package was included).
 
Hence this bug serves to track the relevant tasks on the pcs side:

1. get rid of clufter bundling (i.e., restore the state prior to RHEL 7.1,
   where clufter was first introduced in the 7.* line)

2. possibly teach pcs to facilitate "conversion to pcs commands"
   (starting from either the old stack's configuration or the actual
   file-level configuration of the new stack); see [bug 1171312 comment 13]

--- Additional comment from Jan Pokorný on 2015-04-17 13:36:26 EDT ---

Inclusion of clufter as a standalone package is tracked as [bug 1212909]
and is a (partial - point 1. from above) prerequisite for this one.

--- Additional comment from Radek Steiger on 2015-04-20 08:27:43 EDT ---

Pretty much the same as bug 1171312 in 6.7, plus the conversion to pcs commands.

--- Additional comment from Jan Pokorný on 2015-05-11 15:30:21 EDT ---

Siddharth, please note that clufter does not depend on python-smbc
(as per the dependency noted in [bug 1148838]).

--- Additional comment from Tomas Jelinek on 2015-06-04 10:50:30 EDT ---

Clufter has been moved to a standalone package.
Conversion of a cluster configuration to a sequence of pcs commands is not ready yet on clufter's side.
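
For reference, a quick way to verify the unbundling on an installed system
(a suggested check, not part of the original comment; the package name
python-clufter is taken from comment 6 below):

# pcs and clufter should now be separate packages
rpm -q pcs python-clufter
# pcs itself should no longer ship bundled clufter files
rpm -ql pcs | grep -i clufter || echo "no bundled clufter files in pcs"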

--- Additional comment from Jan Pokorný on 2015-06-05 05:33:10 EDT ---

re [comment 0]:
> Hence this bug serves to track the relevant tasks on the pcs side:

TODO checklist revmap:
1. get rid of clufter bundling -- DONE
2. possibly teach pcs to facilitate "conversion to pcs commands"
3. propagate information about whether to use colorization of
   clufter library output when run from within pcs
   - in that case, clufter may have incorrect information about whether
     it is outputting to a terminal (it can be run in a pseudo-terminal),
     so that piece of information, detected directly by pcs, should be
     passed down the call chain (a minimal sketch of such a terminal
     check follows below)
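
A minimal sketch of the terminal check mentioned in item 3 (generic shell,
not pcs or clufter code; the USE_COLOR variable name is made up for
illustration):

# stdout is a tty -> colorized output is safe; otherwise avoid escape sequences
if [ -t 1 ]; then
    USE_COLOR=yes
else
    USE_COLOR=no
fi
# the caller would then pass this decision explicitly to the invoked tool
# instead of letting the tool guess from its (possibly pseudo-) terminal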

Comment 1 Miroslav Lisik 2015-09-21 09:05:55 UTC
Items 2 and 3 from the checklist did not make it to 7.2.

Comment 2 Tomas Jelinek 2015-10-13 13:43:00 UTC
Created attachment 1082443 [details]
proposed fix

The 'pcs config import-cman' command now supports two new output formats, pcs-commands and pcs-commands-verbose, which produce a list of pcs commands and save it to the specified file.

Also, a new command, 'pcs config export', has been implemented, which produces a list of pcs commands from the currently running cluster configuration. Both brief and verbose modes are available.

Note that the actual conversion of configuration files to pcs commands is performed by clufter; pcs merely calls clufter, passing arguments to and from it.
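
For quick reference, a minimal usage sketch of the new forms (option values
taken from the examples in comment 3 below; the output file names are
arbitrary, and replaying the generated file with a shell is a suggestion
rather than a documented workflow):

# export the running cluster's configuration as a list of pcs commands
pcs config export pcs-commands output=export

# convert an existing CMAN cluster.conf into pcs commands instead of a CIB
pcs config import-cman output=converted input=/root/devel/cluster.conf output-format=pcs-commands

# review the generated commands, then replay them with a shell if desired
less export
sh ./export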

Comment 3 Tomas Jelinek 2015-11-04 11:58:03 UTC
Before Fix:
[root@rh67-node1 ~]# rpm -q pcs
pcs-0.9.139-9.el6.x86_64
Pcs does not have the ability to export a cluster configuration to a list of pcs commands.



After Fix:
[root@rh67-node1:~]# rpm -q pcs
pcs-0.9.145-1.el6.x86_64


[root@rh67-node1:~]# pcs config
Cluster Name: cluster67
Corosync Nodes:
 rh67-node1 rh67-node2 
Pacemaker Nodes:
 rh67-node1 rh67-node2 

Resources: 
 Resource: dummy (class=ocf provider=heartbeat type=Dummy)
  Operations: start interval=0s timeout=20 (dummy-start-interval-0s)
              stop interval=0s timeout=20 (dummy-stop-interval-0s)
              monitor interval=10 timeout=20 (dummy-monitor-interval-10)

Stonith Devices: 
 Resource: xvmNode1 (class=stonith type=fence_xvm)
  Attributes: port=rh67-node1 pcmk_host_list=rh67-node1 
  Operations: monitor interval=60s (xvmNode1-monitor-interval-60s)
 Resource: xvmNode2 (class=stonith type=fence_xvm)
  Attributes: port=rh67-node2 pcmk_host_list=rh67-node2 
  Operations: monitor interval=60s (xvmNode2-monitor-interval-60s)
Fencing Levels: 

Location Constraints:
  Resource: dummy
    Enabled on: rh67-node1 (score:100) (id:location-dummy-rh67-node1-100)
Ordering Constraints:
Colocation Constraints:

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: cman
 dc-version: 1.1.11-97629de


[root@rh67-node1:~]# pcs config export pcs-commands output=export
[ccspcmk2pcscmd     ] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all
[root@rh67-node1:~]# cat export
pcs cluster auth rh67-node1 rh67-node2
pcs cluster setup --start --name cluster67 rh67-node1 rh67-node2 --transport udp
sleep 60
pcs cluster cib tmp-cib.xml --config
pcs -f tmp-cib.xml property set 'dc-version=1.1.11-97629de'
pcs -f tmp-cib.xml property set 'cluster-infrastructure=cman'
pcs -f tmp-cib.xml stonith create xvmNode1 fence_xvm 'port=rh67-node1' 'pcmk_host_list=rh67-node1' op monitor 'id=xvmNode1-monitor-interval-60s' 'interval=60s' 'name=monitor'
pcs -f tmp-cib.xml stonith create xvmNode2 fence_xvm 'port=rh67-node2' 'pcmk_host_list=rh67-node2' op monitor 'id=xvmNode2-monitor-interval-60s' 'interval=60s' 'name=monitor'
pcs -f tmp-cib.xml resource create dummy ocf:heartbeat:Dummy op start 'id=dummy-start-interval-0s' 'interval=0s' 'name=start' 'timeout=20' stop 'id=dummy-stop-interval-0s' 'interval=0s' 'name=stop' 'timeout=20' monitor 'id=dummy-monitor-interval-10' 'interval=10' 'name=monitor' 'timeout=20'
pcs -f tmp-cib.xml constraint location dummy prefers rh67-node1=100
pcs cluster cib-push tmp-cib.xml --config


[root@rh67-node1:~]# pcs config export pcs-commands-verbose output=export-verbose
[ccspcmk2pcscmd     ] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all
[root@rh67-node1:~]# cat export-verbose
echo ':: auth cluster: cluster67'
pcs cluster auth rh67-node1 rh67-node2
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: check cluster includes local machine: cluster67'
for l in $(comm -12 <(python -m json.tool /var/lib/pcsd/pcs_users.conf | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort) <(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort)) @SENTINEL@; do
grep -Eq "$(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"\([^\"]\+\)\":\s*\"${l}\".*|\1|1p")" - <<<" rh67-node1 rh67-node2" && break
false
done || {
echo "WARNING: cluster being created ought to include this very local machine"
read -p "Do you want to continue [yN] (60s timeout): " -t 60 || :
test "${REPLY}" = "y" || kill -INT $$
}
:
echo ':: new cluster: cluster67'
pcs cluster setup --start --name cluster67 rh67-node1 rh67-node2 --transport udp
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: waiting for cluster to come up: cluster67 seconds'
sleep 60
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: get initial/working CIB: tmp-cib.xml'
pcs cluster cib tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new singleton property set: dc-version'
pcs -f tmp-cib.xml property set 'dc-version=1.1.11-97629de'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new singleton property set: cluster-infrastructure'
pcs -f tmp-cib.xml property set 'cluster-infrastructure=cman'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new stonith: xvmNode1'
pcs -f tmp-cib.xml stonith create xvmNode1 fence_xvm 'port=rh67-node1' 'pcmk_host_list=rh67-node1' op monitor 'id=xvmNode1-monitor-interval-60s' 'interval=60s' 'name=monitor'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new stonith: xvmNode2'
pcs -f tmp-cib.xml stonith create xvmNode2 fence_xvm 'port=rh67-node2' 'pcmk_host_list=rh67-node2' op monitor 'id=xvmNode2-monitor-interval-60s' 'interval=60s' 'name=monitor'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new resource: dummy'
pcs -f tmp-cib.xml resource create dummy ocf:heartbeat:Dummy op start 'id=dummy-start-interval-0s' 'interval=0s' 'name=start' 'timeout=20' stop 'id=dummy-stop-interval-0s' 'interval=0s' 'name=stop' 'timeout=20' monitor 'id=dummy-monitor-interval-10' 'interval=10' 'name=monitor' 'timeout=20'
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
pcs -f tmp-cib.xml constraint location dummy prefers rh67-node1=100
echo ':: push CIB: tmp-cib.xml'
pcs cluster cib-push tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:


[root@rh67-node1:~]# cat /root/devel/cluster.conf
<?xml version="1.0"?>
<cluster name="test" config_version="1">
  <clusternodes>
    <clusternode nodeid="1" name="node1" />
    <clusternode nodeid="2" name="node2" />
  </clusternodes>
  <cman two_node="1" expected_votes="2"/>
  <totem consensus="200" join="100" token="5000" token_retransmits_before_loss_const="4">
    <interface ttl="3"/>
  </totem>
  <logging>
    <logging_daemon debug="on" name="corosync" subsys="CONFDB"/>
  </logging>
  <fencedevices>
    <fencedevice name="foo" passwd="mysecret" testarg="testarg"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>


[root@rh67-node1:~]# pcs config import-cman output=converted input=/root/devel/cluster.conf output-format=pcs-commands
[ccspcmk2pcscmd] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all
[cibcompact2cib] XSLT: NOTE: no fencing is configured hence stonith is disabled; please note, however, that this is suboptimal, especially in shared storage scenarios
[root@rh67-node1:~]# cat converted
pcs cluster auth node1 node2
pcs cluster setup --start --name test node1 node2 --consensus 200 --join 100 \
  --token 5000
sleep 60
pcs cluster cib tmp-cib.xml --config
pcs -f tmp-cib.xml property set stonith-enabled false
pcs cluster cib-push tmp-cib.xml --config


[root@rh67-node1:~]# pcs config import-cman output=converted-verbose input=/root/devel/cluster.conf output-format=pcs-commands-verbose
[ccspcmk2pcscmd] XSLT: NOTE: cluster infrastructure services not enabled at this point, which can be changed any time by issuing: pcs cluster enable --all
[cibcompact2cib] XSLT: NOTE: no fencing is configured hence stonith is disabled; please note, however, that this is suboptimal, especially in shared storage scenarios
[root@rh67-node1:~]# cat converted-verbose
echo ':: auth cluster: test'
pcs cluster auth node1 node2
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: check cluster includes local machine: test'
for l in $(comm -12 <(python -m json.tool /var/lib/pcsd/pcs_users.conf | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort) <(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"[^\"]\+\":\s*\"\([0-9a-f-]\+\)\".*|\1|1p" | sort)) @SENTINEL@; do
grep -Eq "$(python -m json.tool /var/lib/pcsd/tokens | sed -n "s|^\s*\"\([^\"]\+\)\":\s*\"${l}\".*|\1|1p")" - <<<" node1 node2" && break
false
done || {
echo 'WARNING: cluster being created ought to include this very local machine'
read -p 'Do you want to continue [yN] (60s timeout): ' -t 60 || :
test "${REPLY}" = "y" || kill -INT $$
}
:
echo ':: new cluster: test'
pcs cluster setup --start --name test node1 node2 --consensus 200 --join 100 --token 5000
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: waiting for cluster to come up: test seconds'
sleep 60
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: get initial/working CIB: tmp-cib.xml'
pcs cluster cib tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: new singleton property set: stonith-enabled'
pcs -f tmp-cib.xml property set stonith-enabled false
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:
echo ':: push CIB: tmp-cib.xml'
pcs cluster cib-push tmp-cib.xml --config
test $? -eq 0 && echo ':: OK' || echo ':: FAILURE'
:


Note that the actual conversion of configuration files to pcs commands is performed by clufter; pcs merely calls clufter, passing arguments to and from it.

Comment 6 Jan Pokorný [poki] 2016-01-19 19:20:48 UTC
Dependency on python-clufter should have a lower bound:

- Requires: python-clufter
+ Requires: python-clufter >= 0.55.0

when relying on its *2pcscmd* commands.
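
As a quick sanity check (a suggested verification, not part of the original
comment), the dependency declared by an installed pcs build can be inspected
to confirm the lower bound is in place:

# list the dependencies of the installed pcs package and filter for clufter
rpm -q --requires pcs | grep clufter
# expected to include something like: python-clufter >= 0.55.0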

Comment 12 errata-xmlrpc 2016-05-10 19:26:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0739.html

