Bug 1466872 - [DOCS] [RFE] Document manual process for installing/upgrading metrics stack [NEEDINFO]
Status: CLOSED WONTFIX
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assigned To: Brandi
QA Contact: Junqi Zhao
Docs Contact: Vikram Goyal
Whiteboard: 3.10-release-plan
Keywords:
Depends On:
Blocks:
Reported: 2017-06-30 11:30 EDT by Sergi Jimenez Romero
Modified: 2018-06-15 18:14 EDT
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-06-05 13:27:58 EDT
Type: Bug
Regression: ---
Flags: jcantril: needinfo? (jsanda)


Attachments: None
Description Sergi Jimenez Romero 2017-06-30 11:30:44 EDT
> 3. What is the nature and description of the request?  

In the current documentation, the process for installing or upgrading metrics relies on Ansible playbooks, which turns the process into a black box.
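
For reference, the playbook-driven flow the documentation currently points to looks roughly like the following. This is a sketch only: the playbook path and inventory variable names changed between openshift-ansible releases (e.g. openshift_hosted_metrics_deploy in earlier releases vs. openshift_metrics_install_metrics later), so check the version matching the cluster.

  # inventory ([OSEv3:vars] section), illustrative values
  openshift_metrics_install_metrics=true
  openshift_metrics_cassandra_storage_type=pv

  # run the metrics playbook against that inventory
  ansible-playbook -i /etc/ansible/hosts \
      playbooks/byo/openshift-cluster/openshift-metrics.yml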

> 4. Why does the customer need this? (List the business requirements here)  

To gain a better understanding of what the playbooks do, especially when troubleshooting issues, and to have the alternative of performing the installation/upgrade manually.

> 5. How would the customer like to achieve this? (List the functional requirements here)  

Add a subsection to the current documentation.

> 6. For each functional requirement listed, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.  

The results of performing the process manually and of running the playbooks should be the same.
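
One way to compare the two outcomes (assuming metrics is deployed in the default openshift-infra project) is to inspect the resulting objects and pod status after each run:

  oc get pods -n openshift-infra
  oc get rc,svc,routes,secrets -n openshift-infra
  oc get rc heapster -n openshift-infra -o yaml    # compare definitions object by object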


> 10. List any affected packages or components.

Documentation
Comment 3 Matt Wringe 2017-07-14 10:49:58 EDT
A high-level perspective of what happens during the update process is as follows (a rough sketch of the same steps as plain oc commands follows the list):

- the running pods are brought down

- the object definitions for those pods, services, roles, etc. are updated to the new versions

- the pods are then brought back up again
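
Very roughly, the same three steps expressed as plain oc commands. This is a sketch only, assuming the default component names and the openshift-infra project; new-metrics-objects.yaml is a hypothetical file holding the updated definitions, not something shipped by the installer.

  # 1. bring the running pods down
  oc scale rc heapster hawkular-metrics hawkular-cassandra-1 --replicas=0 -n openshift-infra

  # 2. replace the object definitions (pods, services, roles, ...) with the new versions
  oc replace -f new-metrics-objects.yaml -n openshift-infra

  # 3. bring the pods back up again
  oc scale rc hawkular-cassandra-1 --replicas=1 -n openshift-infra
  oc scale rc hawkular-metrics --replicas=1 -n openshift-infra
  oc scale rc heapster --replicas=1 -n openshift-infra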

Heapster is stateless and does not need to go through a migration step during this process.

Cassandra is stateful and it may need to go through a migration process when updating to a newer version. The Cassandra pod will automatically detect and perform this migration if necessary. Cassandra's migration process happens in the background and all operations are available while this is happening.
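
If you want to confirm whether a migration ran (and when it finished), the Cassandra pod logs are the place to look; a sketch, assuming the default pod naming:

  oc get pods -n openshift-infra | grep cassandra
  oc logs -f <hawkular-cassandra-1 pod name> -n openshift-infra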

Hawkular Metrics itself doesn't have any persistent storage, but it does depend on how things are configured in Cassandra. When the Hawkular Metrics pod is started, it will detect if its schema needs to be updated in Cassandra or not and will perform this operation if necessary.
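
The schema check/update can likewise be followed in the hawkular-metrics pod log, and the service reports its state on its status endpoint once startup completes (the hostname below is an example route, not a fixed value):

  oc logs <hawkular-metrics pod name> -n openshift-infra | grep -i schema
  curl -k https://hawkular-metrics.example.com/hawkular/metrics/status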

A user should not have to manually do any migration steps.

Note that currently we bring everything down and then back up again. Since the metrics gathering is not happening during the update, you may encounter a gap in the metrics graphs.
