Bug 1024492 - pcs should handle full cluster config backup/restore
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: Unspecified
Hardware: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Tomas Jelinek
QA Contact: Cluster QE
Depends On:
Blocks: 1111381 1113520 1281408 1129859 1138242
Reported: 2013-10-29 15:03 EDT by Fabio Massimo Di Nitto
Modified: 2016-10-26 11:07 EDT (History)
CC List: 6 users

See Also:
Fixed In Version: pcs-0.9.126-1.el7
Doc Type: Enhancement
Doc Text:
Feature: Provide an easy way to back up and restore the cluster configuration across the cluster.
Reason: Backing up and restoring the cluster configuration files on all nodes is a non-trivial task.
Result: The user is able to back up and restore the cluster configuration files easily.
Story Points: ---
Clone Of:
Cloned To: 1129859 1138242
Last Closed: 2015-03-05 04:18:26 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
proposed fix (23.81 KB, patch)
2014-09-03 08:46 EDT, Tomas Jelinek

Description Fabio Massimo Di Nitto 2013-10-29 15:03:33 EDT
In several different cases, it should be possible to back up and restore a full cluster configuration without having to re-enter everything manually.

pcs cluster config

does show the configuration, but not in a format that can be reused, e.g. via copy/paste or re-import.

pcs cluster backup <file>

should generate a tarball containing corosync.conf from all nodes, the CIB, and a version.txt recording the format version of the tarball (in case we later need to extend what the tarball contains).
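The tarball layout described above can be sketched as follows. The member names (version.txt, corosync.conf, cib.xml) and the format-version value are assumptions for illustration, not necessarily the exact layout pcs ended up using:

```python
import io
import tarfile

def write_backup(path, corosync_conf: bytes, cib: bytes, version: int = 1):
    """Write a bzip2-compressed tarball with the proposed backup members."""
    with tarfile.open(path, "w:bz2") as tar:
        for name, data in [
            ("version.txt", str(version).encode()),  # format version marker
            ("corosync.conf", corosync_conf),        # corosync configuration
            ("cib.xml", cib),                        # pacemaker CIB
        ]:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
```

Keeping version.txt as the first member makes it cheap for a restore tool to read the format version before touching the rest of the archive.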

pcs cluster restore <file>

should work only if the cluster is not running.
If we are restoring from a format version older than the one we support, warn.
If we are restoring from a format version newer than the one we support, fail.
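The restore compatibility rule above can be sketched like this; SUPPORTED_FORMAT_VERSION, the warning mechanism, and the exception type are assumptions for illustration:

```python
import warnings

SUPPORTED_FORMAT_VERSION = 1  # assumed current format version

def check_backup_format(backup_version: int) -> None:
    """Warn on an older backup format, refuse a newer one."""
    if backup_version > SUPPORTED_FORMAT_VERSION:
        # Newer than we understand: fail rather than restore garbage.
        raise ValueError(
            f"backup format {backup_version} is newer than supported "
            f"format {SUPPORTED_FORMAT_VERSION}, refusing to restore"
        )
    if backup_version < SUPPORTED_FORMAT_VERSION:
        # Older format: restorable, but tell the user.
        warnings.warn(f"restoring from older backup format {backup_version}")
```

The asymmetry is deliberate: an older archive is a subset a newer tool can still interpret, while a newer archive may contain members the tool does not know how to restore.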
Comment 5 Tomas Jelinek 2014-09-03 08:46:04 EDT
Created attachment 934080 [details]
proposed fix


1. check cluster status
pcs status
2. backup the config files
pcs config backup mybackup.tar.bz2
3. destroy the cluster
pcs cluster destroy --all
4. restore the config files
pcs config restore mybackup.tar.bz2
5. start the cluster
pcs cluster start --all
6. check cluster status and verify it is the same as it was before cluster destruction
pcs status
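Before step 4, the backup archive can be sanity-checked offline; a sketch under the assumption that the backup is a readable bzip2 tarball (check_backup is a hypothetical helper, not a pcs command):

```shell
#!/bin/sh
# Hypothetical pre-restore check: verify the backup file exists and is a
# readable bzip2 tarball before running "pcs config restore".
check_backup() {
    b="$1"
    [ -f "$b" ] || { echo "missing backup: $b" >&2; return 1; }
    # List the members without extracting; fails on a corrupt archive.
    tar -tjf "$b" > /dev/null 2>&1 || { echo "unreadable archive: $b" >&2; return 1; }
    echo "ok: $b"
}
```

Catching a missing or corrupt archive here is cheaper than discovering it mid-restore, after the cluster has already been destroyed.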
Comment 7 Tomas Jelinek 2014-09-11 08:59:46 EDT
Before Fix:
[root@rh70-node1 ~]# rpm -q pcs
[root@rh70-node1 ~]# pcs config backup
{prints cluster configuration}
[root@rh70-node1 ~]# pcs config restore
{prints cluster configuration}

After Fix:
[root@rh70-node1 ~]# rpm -q pcs
[root@rh70-node1 ~]# pcs status nodes both
Corosync Nodes:
 Online: rh70-node1 rh70-node2 
Pacemaker Nodes:
 Online: rh70-node1 rh70-node2 
[root@rh70-node1 ~]# pcs status resources
 dummy  (ocf::heartbeat:Dummy): Started 
[root@rh70-node1 ~]# pcs config backup test.tar.bz2
[root@rh70-node1 ~]# pcs cluster destroy --all
rh70-node1: Successfully destroyed cluster
rh70-node2: Successfully destroyed cluster
[root@rh70-node1 ~]# pcs cluster start --all
Error: Unable to read /etc/corosync/corosync.conf: No such file or directory
[root@rh70-node1 ~]# pcs config restore test.tar.bz2
rh70-node1: Succeeded
rh70-node2: Succeeded
[root@rh70-node1 ~]# pcs cluster start --all
rh70-node1: Starting Cluster...
rh70-node2: Starting Cluster...
[root@rh70-node1 ~]# pcs status nodes both
Corosync Nodes:
 Online: rh70-node1 rh70-node2 
Pacemaker Nodes:
 Online: rh70-node1 rh70-node2 
[root@rh70-node1 ~]# pcs status resources
 dummy  (ocf::heartbeat:Dummy): Started
Comment 11 errata-xmlrpc 2015-03-05 04:18:26 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

