Bug 2162306

Summary: test_upgrade_mcg_io is failing after upgrading ODF 4.11.4 to 4.12.0 on IBM-Z
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: Sujoy Batabyal <sbatabya>
Component: Multi-Cloud Object Gateway    Assignee: Nimrod Becker <nbecker>
Status: CLOSED INSUFFICIENT_DATA QA Contact: krishnaram Karthick <kramdoss>
Severity: high Docs Contact:
Priority: unspecified    
Version: 4.12CC: muagarwa, nbecker, nobody, ocs-bugs, odf-bz-bot, sheggodu
Target Milestone: ---    Flags: nbecker: needinfo? (sbatabya)
sheggodu: needinfo? (nobody)
Target Release: ---   
Hardware: s390x   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-04-04 08:54:35 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
tests/ecosystem/upgrade/test_noobaa.py::test_upgrade_mcg_io is failing after upgrading ODF 4.11.4 to 4.12.0 on IBM-Z none

Description Sujoy Batabyal 2023-01-19 10:02:30 UTC
Created attachment 1939116 [details]
tests/ecosystem/upgrade/test_noobaa.py::test_upgrade_mcg_io is failing after upgrading ODF 4.11.4 to 4.12.0 on IBM-Z

Description of problem (please be as detailed as possible and provide log
snippets): tests/ecosystem/upgrade/test_noobaa.py::test_upgrade_mcg_io is failing on IBM Z with the following error.

_____________________________ test_upgrade_mcg_io ______________________________

mcg_workload_job = <ocs_ci.ocs.resources.ocs.OCS object at 0x3ff87183e80>

    @post_upgrade
    @skipif_managed_service
    @pytest.mark.polarion_id("OCS-2207")
    @bugzilla("1874243")
    @red_squad
    def test_upgrade_mcg_io(mcg_workload_job):
        """
        Confirm that there is MCG workload job running after upgrade.
        """
>       assert wait_for_active_pods(
            mcg_workload_job, 1
        ), f"Job {mcg_workload_job.name} doesn't have any running pod"
E       AssertionError: Job mcg-workload doesn't have any running pod
E       assert False
E        +  where False = wait_for_active_pods(<ocs_ci.ocs.resources.ocs.OCS object at 0x3ff87183e80>, 1)

tests/ecosystem/upgrade/test_noobaa.py:181: AssertionError
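For context, the assertion above checks that the `mcg-workload` Job still has a running pod after the upgrade. A minimal sketch of that kind of check (this is NOT the ocs-ci `wait_for_active_pods` implementation; the helper name and logic below are illustrative only): a Kubernetes Job reports its running pod count in `.status.active`, so the test effectively compares that field against the expected count.

```python
# Illustrative sketch only, not the ocs-ci implementation.
# A Kubernetes Job's .status.active field holds the number of
# currently running pods; Kubernetes omits the field entirely
# when no pods are active.

def job_has_active_pods(job_status, expected=1):
    """Return True if the Job status reports at least `expected` active pods."""
    return job_status.get("active", 0) >= expected


# Example Job statuses, as `oc get job mcg-workload -o json` might report:
healthy = {"active": 1, "startTime": "2023-01-19T09:00:00Z"}
no_pods = {"failed": 1, "startTime": "2023-01-19T09:00:00Z"}

print(job_has_active_pods(healthy))  # True
print(job_has_active_pods(no_pods))  # False -> the AssertionError above
```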

Version of all relevant components (if applicable): 
Upgrade from OCS 4.11 to OCS 4.12


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
4

Can this issue reproducible?
Yes

Can this issue reproduce from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:
 ocs_ci.ocs.exceptions.CommandFailed: Error during execution of command: oc -n openshift-storage rsh rook-ceph-tools-5d548df797-8kllf ceph health detail.
  Error is error: Internal error occurred: error executing command in container: container is not created or running
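The `ceph health detail` command failed here because the exec target (`rook-ceph-tools-5d548df797-8kllf`) was not in a running state. A hedged sketch of a defensive pattern for this situation (illustrative only, not part of ocs-ci): before exec'ing, select a tools pod whose phase is actually `Running` instead of reusing a cached pod name.

```python
# Illustrative sketch, not the ocs-ci implementation: exec into a pod
# that is not Running fails with "container is not created or running",
# so pick a live rook-ceph-tools pod from a fresh pod listing first.

def pick_running_pod(pods, name_prefix="rook-ceph-tools"):
    """Return the name of a Running pod whose name matches the prefix, or None."""
    for pod in pods:
        name = pod["metadata"]["name"]
        if name.startswith(name_prefix) and pod["status"]["phase"] == "Running":
            return name
    return None


# Example pod listing, as `oc -n openshift-storage get pods -o json` might report:
pods = [
    {"metadata": {"name": "rook-ceph-tools-5d548df797-8kllf"},
     "status": {"phase": "Terminating"}},   # stale pod from before the upgrade
    {"metadata": {"name": "rook-ceph-tools-6f9b7c4d21-x2mqp"},
     "status": {"phase": "Running"}},       # replacement pod
]

print(pick_running_pod(pods))  # rook-ceph-tools-6f9b7c4d21-x2mqp
```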

Expected results:
Confirm that there is MCG workload job running after upgrade. 

Additional info: