Bug 1465377 - [RFE] Recovery of cns from Openshift backup and restore option
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: CNS-deployment
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Ramakrishna Reddy Yekulla
Reported: 2017-06-27 06:36 EDT by Jaspreet Kaur
Modified: 2017-11-15 07:18 EST (History)

Type: Bug

Attachments: None
Description Jaspreet Kaur 2017-06-27 06:36:07 EDT
Description of problem: The procedure consists of the following steps:

1. Stop OCP services (api, controllers, node)
2. Restore etcd data
3. Restore the OCP master configuration
4. Redeploy certificates
5. Start OCP services (api, controllers, node)
6. Delete node objects so nodes re-register with the current machine ID (cloud provider)
7. Relabel nodes (master -> unschedulable, gluster -> storagenode=glusterfs)
8. Reboot nodes
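The steps above can be sketched roughly as follows. This is a hypothetical outline, not the exact procedure used: service names assume an OpenShift 3.x RPM install, and all hostnames, backup paths, and the playbook location are placeholder assumptions to be adapted to the environment.

```shell
# Hypothetical restore sequence sketch (OpenShift 3.x RPM install).
# All paths and node names below are placeholders.

# 1. Stop OCP services on the master
systemctl stop atomic-openshift-master-api atomic-openshift-master-controllers atomic-openshift-node

# 2. Restore etcd data from a v3 snapshot backup
etcdctl snapshot restore /backup/etcd-snapshot.db --data-dir /var/lib/etcd

# 3. Restore the master configuration from backup
cp -a /backup/origin-master/. /etc/origin/master/

# 4. Redeploy certificates (playbook path varies by openshift-ansible version)
ansible-playbook -i /etc/ansible/hosts \
  /usr/share/ansible/openshift-ansible/playbooks/redeploy-certificates.yml

# 5. Start OCP services again
systemctl start atomic-openshift-master-api atomic-openshift-master-controllers atomic-openshift-node

# 6. Delete node objects so nodes re-register with the current machine ID
oc delete node node1.example.com

# 7. Relabel nodes: masters unschedulable, gluster nodes tagged for storage
oc adm manage-node master1.example.com --schedulable=false
oc label node node1.example.com storagenode=glusterfs

# 8. Reboot nodes
ssh node1.example.com reboot
```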

Note: the old Gluster directories and service accounts are still in place. After this point, deployments, builds, scaling, dynamic provisioning of Cinder storage, and the SDN all work.

Gluster is the only part that does not come back as intended: the gluster pods are in the Running state, but the heketi pod continuously enters a crash loop (CrashLoopBackOff).
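The failure mode described above can be confirmed with standard `oc` commands. A minimal sketch, assuming the CNS project is named `glusterfs` (the project name and pod names are assumptions, not taken from this report):

```shell
# Hypothetical diagnostic commands; the project name is an assumption.
oc project glusterfs

# Gluster pods should show Running; the heketi pod shows CrashLoopBackOff
oc get pods -o wide

# Inspect the previous (crashed) container's logs for the heketi pod;
# substitute the actual pod name from the listing above
oc logs <heketi-pod-name> --previous

# Events in the describe output often show why the restarts keep failing
oc describe pod <heketi-pod-name>
```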

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results: the heketi pod does not reach a stable Running state; it repeatedly crash-loops.

Expected results: the heketi pod should come back up after the restore.

Additional info:
