Bug 1721448 - [Test-Only] Test datacenter upgrade from 4.3.7 + RHGS 3.4 to 4.3.8 + RHGS 3.5
Summary: [Test-Only] Test datacenter upgrade from 4.3.7 + RHGS 3.4 to 4.3.8 + RHGS 3.5
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.7
Assignee: SATHEESARAN
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1718217
Blocks:
 
Reported: 2019-06-18 10:17 UTC by SATHEESARAN
Modified: 2020-02-13 15:57 UTC
CC: 11 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1718217
Environment:
Last Closed: 2020-02-13 15:57:20 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:0508 0 None None None 2020-02-13 15:57:34 UTC

Description SATHEESARAN 2019-06-18 10:17:36 UTC
RHGS-3.4.z is based on glusterfs-3.12.2
RHGS-3.5 is based on glusterfs-6.0

We are currently targeting RHV 4.3.5 for 2019-Aug-06,
while RHGS 3.5 is planned for 2019-Sep-12.

So RHV-H 4.3.5 will be released downstream still on RHGS 3.4, while upstream we want to switch to glusterfs 6 earlier in order to get community feedback.
In the meantime we need to start testing this downstream, assuming we'll have either 4.3.6 around Sep 12, or RHGS 3.5 based on RHV 4.3.5.

Comment 4 Gobinda Das 2019-11-27 10:32:39 UTC
As this is test-only, moving this to ON_QA.

Comment 5 SATHEESARAN 2020-02-01 05:21:40 UTC
Verified the upgrade from RHV 4.3.7 to RHV 4.3.8.

Steps involved are:
1. RHHI-V deployment (self-hosted-engine deployment) with 3 nodes.
2. Created 10 RHEL 7.7 VMs, all continuously running I/O with a kernel-untar workload.
Note: the kernel-untar workload downloads a kernel tarball, untars it, and computes the sha256sum of all the extracted files.

3. Enabled the local repo containing the RHVH 4.3.8 redhat-virtualization-host-image-update package.
4. Enabled global maintenance for the Hosted Engine VM.
5. Updated RHV Manager 4.3.7 to RHV Manager 4.3.8, updated all software packages, and rebooted.
6. Started the HE VM and moved it out of global maintenance.
7. Once the RHV Manager UI was up, logged in and upgraded the hosts one after the other from the UI.
8. Once all the hosts were upgraded, edited the 'Default' cluster, went to 'General' -> 'Compatibility Version', and updated it from '4.2' to '4.3'.
Existing VMs need to be powered off and restarted after updating the compatibility version.
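The kernel-untar workload in step 2 can be sketched as a small shell helper (the helper name and layout are assumptions; the actual test script is not shown in this bug):

```shell
# Hedged sketch of the kernel-untar workload: given a tarball, extract it
# and compute the sha256sum of every extracted file.
untar_and_hash() {
    tarball=$1
    workdir=$(mktemp -d)
    tar -xf "$tarball" -C "$workdir"
    # Write the checksum list outside workdir so it is not hashed itself
    find "$workdir" -type f -exec sha256sum {} + > "${workdir}.sums"
    wc -l < "${workdir}.sums"   # number of files hashed
}
```

In the actual workload the tarball would be a downloaded kernel source archive, and the function would run in a loop inside each VM to generate continuous I/O.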

Known issues:
1. Sometimes, in the two-network configuration, the gluster network never came up; setting 'BOOTPROTO=dhcp' and bringing up the network resolved it.
2. Because of 1, the gluster bricks did not come up, leading to pending self-heals; once the network was brought up, healing started.
3. After all healing is completed, it takes some time (not more than 5 minutes) for the heal status to be reflected in the RHV Manager UI.
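The workaround for known issue 1 amounts to switching the gluster network interface to DHCP in its ifcfg file and bringing it up. A minimal sketch, assuming the gluster network rides on 'eth1' (the interface name is not given in this bug):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1  (interface name is an assumption)
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=yes
```

After editing the file, 'ifup eth1' brings the interface up, and 'gluster volume heal <volname> info' shows whether the pending self-heals have started draining.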

Comment 7 errata-xmlrpc 2020-02-13 15:57:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0508

