Bug 1898817 - cluster upgrade from RHV Admin portal fails
Summary: cluster upgrade from RHV Admin portal fails
Keywords:
Status: CLOSED DUPLICATE of bug 1958116
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.8
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-18 07:11 UTC by SATHEESARAN
Modified: 2021-07-09 06:56 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-09 06:56:35 UTC
Embargoed:


Attachments
ansible-logs-from-engine (155.74 KB, application/octet-stream), 2021-05-03 05:20 UTC, SATHEESARAN

Description SATHEESARAN 2020-11-18 07:11:56 UTC
Description of problem:
-------------------------
When the cluster upgrade is initiated from the RHV Admin portal, the ansible-runner service starts the upgrade procedure and upgrades host1, but after that upgrade the process fails because it cannot find the /var/imgbased/.image_updated file.
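
For context, a minimal sketch (not taken from the product code) of the kind of post-upgrade marker check described above, assuming the /var/imgbased/.image_updated path from this report; the task names and register variable are illustrative:

# Hypothetical sketch only: checks the marker that, per this report, the
# cluster-upgrade flow expects after a successful RHVH image update.
- name: Stat the imgbased update marker on the upgraded host
  ansible.builtin.stat:
    path: /var/imgbased/.image_updated   # path taken from the bug description
  register: image_updated_marker         # illustrative variable name

- name: Fail if the marker is missing
  ansible.builtin.fail:
    msg: "/var/imgbased/.image_updated not found; the RHVH image update did not complete"
  when: not image_updated_marker.stat.exists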

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHV 4.4.3 ( 4.4.3.12-0.1.el8ev )
RHVH 4.4.2

How reproducible:
-----------------
Tried only once

Steps to Reproduce:
-------------------
1. Create RHHI-V 1.8 Update1 deployment with RHV 4.4.2
2. Upgrade the engine (RHV Manager) to RHV 4.4.3
3. Add the repo for 'redhat-virtualization-host-image-update' for RHV 4.4.3 (see the repo sketch after this list)
4. From RHV 4.4.3 Admin portal, upgrade the cluster
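
A hedged sketch of step 3, assuming a plain yum repository that serves the RHV 4.4.3 image-update content; the repo id and baseurl below are placeholders, not the repository actually used in this report:

# Illustrative only: repo id and baseurl are placeholders.
- name: Add the repo that provides redhat-virtualization-host-image-update
  ansible.builtin.yum_repository:
    name: rhvh-image-update-4-4-3            # placeholder repo id
    description: RHVH 4.4.3 image update (placeholder)
    baseurl: http://example.com/rhvh-4.4.3/  # placeholder URL
    gpgcheck: false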

Actual results:
----------------
Upgrade of the RHVH node fails because the /var/imgbased/.image_updated file is missing.


Expected results:
-----------------
Upgrade of the RHVH node should succeed, with the /var/imgbased/.image_updated file present.


Additional info:

Comment 1 SATHEESARAN 2021-05-03 05:09:06 UTC
Upgrading all the hosts from the Administration portal using the 'Cluster Upgrade' feature fails.

<snip>
2021-05-03 04:59:04 UTC - TASK [ovirt.ovirt.cluster_upgrade : Upgrade host] ******************************
2021-05-03 05:04:11 UTC - An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: Error message: ['Failed to upgrade Host rhsqa-grafton10-nic2.lab.eng.blr.redhat.com (User: admin@internal-authz).', 'Invalid status on Data Center Default. Setting status to Non Responsive.']
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error message: ['Failed to upgrade Host rhsqa-grafton10-nic2.lab.eng.blr.redhat.com (User: admin@internal-authz).', 'Invalid status on Data Center Default. Setting status to Non Responsive.']"}
2021-05-03 05:04:11 UTC - {
  "status" : "OK",
  "msg" : "",
  "data" : {
    "uuid" : "62b34513-1579-4ad7-b71c-a373a201b329",
    "counter" : 118,
    "stdout" : "An exception occurred during task execution. To see the full traceback, use -vvv. The error was: Exception: Error message: ['Failed to upgrade Host rhsqa-grafton10-nic2.lab.e
ng.blr.redhat.com (User: admin@internal-authz).', 'Invalid status on Data Center Default. Setting status to Non Responsive.']\r\nfatal: [localhost]: FAILED! => {\"changed\": false, \"msg\": 
\"Error message: ['Failed to upgrade Host rhsqa-grafton10-nic2.lab.eng.blr.redhat.com (User: admin@internal-authz).', 'Invalid status on Data Center Default. Setting status to Non Responsive
.']\"}",
    "start_line" : 111,
    "end_line" : 113,
    "runner_ident" : "3bf78c6c-abcc-11eb-92cd-004855204901",
    "event" : "runner_on_failed",
    "pid" : 15107,
    "created" : "2021-05-03T05:04:08.980856",
    "parent_uuid" : "00485520-4901-a264-c08f-000000000140",
    "event_data" : {
      "playbook" : "ovirt-cluster-upgrade.yml",
      "playbook_uuid" : "22b34434-df39-4e76-830f-d8f1b9fd190a",
      "play" : "oVirt cluster upgrade wizard target",
</snip>
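
For anyone trying to reproduce this outside the Administration portal, a minimal sketch of invoking the same collection role the portal's ansible-runner job runs (the event data above shows playbook ovirt-cluster-upgrade.yml and role ovirt.ovirt.cluster_upgrade); the engine URL, credentials and variable names below are assumptions based on the ovirt.ovirt collection's conventions, not values from this environment:

# Sketch only: variable values are placeholders; variable names follow the
# ovirt.ovirt collection's documented conventions.
- name: oVirt cluster upgrade (manual reproduction attempt)
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    engine_url: https://engine.example.com/ovirt-engine/api   # placeholder
    engine_user: admin@internal
    engine_password: "{{ vault_engine_password }}"            # placeholder
    cluster_name: Default
  roles:
    - ovirt.ovirt.cluster_upgrade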

Comment 2 SATHEESARAN 2021-05-03 05:20:46 UTC
Created attachment 1778829 [details]
ansible-logs-from-engine

Comment 6 Gobinda Das 2021-07-09 06:56:35 UTC
This issue is fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1958116, so closing this as a duplicate.

*** This bug has been marked as a duplicate of bug 1958116 ***
