Bug 1550364 - [Doc RFE] Document information regarding how to upgrade from RHHI 1.x to RHHI 2.0.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Maintaining_RHHI
Version: rhhiv-1.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHHI-V 1.5
Assignee: Laura Bailey
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: 1531908
 
Reported: 2018-03-01 05:31 UTC by Anjana Suparna Sriram
Modified: 2019-02-15 10:11 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-15 10:11:05 UTC
Embargoed:



Description Anjana Suparna Sriram 2018-03-01 05:31:06 UTC
Use case overview: Provide customers with task information regarding how to upgrade from RHHI 1.x to RHHI 2.0.

Comment 16 SATHEESARAN 2018-07-27 16:45:59 UTC
1. Please remove the content below from the section on updating the RHV hosts under Chapter 8.

Remove the following content:

"Normally, upgrading Red Hat Virtualization includes updates to Red Hat Gluster Storage packages. However, for Red Hat Hyperconverged Infrastructure for Virtualization version 2.0.0, there is no update to Red Hat Gluster Storage. Red Hat Gluster Storage updates are expected to coincide with the release of Red Hat Hyperconverged Infrastructure for Virtualization version 2.0.1."

2. The Hosted Engine VM should be put into 'Global' maintenance.
Step-1 under section 8.4.1 - "Disable high-availability for all hosted engine nodes." - needs to be changed to the following:

Log in to the Cockpit UI (https://<host>:9090), go to 'Virtualization' -> 'HostedEngine', and under 'Status of this host' select 'Put this cluster into global maintenance'.
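
For reference, the same maintenance state can also be set from the command line on any hosted engine node. This is only a sketch assuming the standard hosted-engine CLI shipped with RHV, not part of the requested doc text:

    # hosted-engine --set-maintenance --mode=global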

3. Point-3 under section 8.4.1 has a step to update the packages using 'yum update'. If there is a kernel update, we should instruct users to reboot the hosted engine VM with the following steps (a way to check whether a kernel update was applied is sketched after these steps):

    a. Reboot the hosted engine VM
       # reboot
    b. Start the hosted engine VM from one of the hosts
       # hosted-engine --vm-start
    c. Wait for a few minutes and check the hosted engine VM status
       # hosted-engine --vm-status
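
One way to confirm whether a kernel update was actually installed, using standard RHEL tools (a sketch, not part of the requested doc text; if the two kernel versions differ, a reboot is needed):

    # uname -r                        # kernel currently running
    # rpm -q --last kernel | head -1  # most recently installed kernel package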

4. Change Point-4 under the sub-topic "Re-enable high-availability agents on all self-hosted engine nodes" so that it moves the hosted engine out of global maintenance. This step should be done from the Cockpit UI, using the same navigation as step 1:

Virtualization -> HostedEngine -> 'Remove this host from maintenance'
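
As with step 1, the same change can also be made from the command line on a hosted engine node (a sketch assuming the standard hosted-engine CLI, not part of the requested doc text):

    # hosted-engine --set-maintenance --mode=none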

5. Section 8.4.2 - "Disable high-availability for all hosted engine nodes."
Point-1 is not required and should be removed.
The second point - "Upgrade one virt host at a time" - becomes the first step.
Point-3 is also not required. Instead, add a step to restart glusterd on each host after that host comes back up from the upgrade (a sketch of the commands is below).
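
A sketch of what the added step might look like, run on each host after it comes back up (assuming systemd-managed glusterd; the verification command is illustrative and not part of the requested doc text):

    # systemctl restart glusterd
    # gluster volume status        # verify that bricks on this host are back online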

Comment 18 SATHEESARAN 2018-07-30 06:58:28 UTC
Verified the content.

