Red Hat Bugzilla – Attachment 1944664 Details for Bug 2159757: After shutting down a worker node, some of the rook ceph pods are stuck in a Terminating state
Attaching log file for the testcase
test-noobaa-sts-node-failure-noobaa-db-pg-true.log (text/plain), 126.22 KB, created by Aaruni Aggarwal on 2023-02-17 05:58:34 UTC
Description: Attaching log file for the testcase
Filename: test-noobaa-sts-node-failure-noobaa-db-pg-true.log
MIME Type: text/plain
Creator: Aaruni Aggarwal
Created: 2023-02-17 05:58:34 UTC
Size: 126.22 KB
>nohup: ignoring input >Failed to fetch auth.yaml from ocs-ci-data >============================= test session starts ============================== >platform linux -- Python 3.8.13, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 >rootdir: /root/ocs-upi-kvm/src/ocs-ci, configfile: pytest.ini >plugins: flaky-3.7.0, repeat-0.9.1, ordering-0.6, metadata-1.11.0, logger-0.5.1, marker-bugzilla-0.9.4, html-3.1.1 >collected 1 item > >tests/manage/mcg/test_host_node_failure.py::TestNoobaaSTSHostNodeFailure::test_noobaa_sts_host_node_failure[noobaa-db-pg-True] >-------------------------------- live log setup -------------------------------- >23:50:29 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: OCS4-12-Downstream-OCP4-12-POWERVS-UPI-1AZ-RHCOS-LSO-3M-3W-tier4b >23:50:29 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc version -o json >23:50:29 - MainThread - ocs_ci.utility.utils - INFO - Retrieving the authentication config dictionary >23:50:29 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pods -o name >23:50:29 - MainThread - ocs_ci.ocs.utils - INFO - pod name match found appending rook-ceph-tools-65ffd8b86d-pn8wk >23:50:29 - MainThread - ocs_ci.ocs.utils - INFO - Ceph toolbox already exists, skipping >23:50:29 - MainThread - tests.conftest - INFO - All logs located at /root/logs-ocs-ci/4.12/ocs-ci-logs-1676609418 >23:50:29 - MainThread - tests.conftest - INFO - Skipping client download >23:50:29 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get csv -n openshift-storage -o yaml >23:50:31 - MainThread - ocs_ci.ocs.version - INFO - collecting ocp version >23:50:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get clusterversion version -o yaml >23:50:31 - MainThread - ocs_ci.ocs.version - INFO - collecting ocs version >23:50:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get namespace -o yaml >23:50:31 - MainThread - 
ocs_ci.ocs.version - INFO - found storage namespaces ['openshift-cluster-storage-operator', 'openshift-kube-storage-version-migrator', 'openshift-kube-storage-version-migrator-operator', 'openshift-local-storage', 'openshift-storage'] >23:50:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-cluster-storage-operator get pod -n openshift-cluster-storage-operator -o yaml >23:50:31 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-kube-storage-version-migrator get pod -n openshift-kube-storage-version-migrator -o yaml >23:50:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-kube-storage-version-migrator-operator get pod -n openshift-kube-storage-version-migrator-operator -o yaml >23:50:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-local-storage get pod -n openshift-local-storage -o yaml >23:50:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pod -n openshift-storage -o yaml >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .spec.channel: stable-4.12 >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .status.desired.version: 4.12.2 >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - ClusterVersion .status.desired.image: quay.io/openshift-release-dev/ocp-release@sha256:78b979f9f93d636d2a1650ae0f38e758c5bb2bccd9e0485cd4adc9a7acc5a38a >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-cluster-storage-operator >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85b7fba9e21150b719c00c9db7f41bf316de7cf72bb12658222f0c64d5e4e0cc {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85b7fba9e21150b719c00c9db7f41bf316de7cf72bb12658222f0c64d5e4e0cc'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image 
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af59b0b88f5166381bae428823e50d1b9c35aa3bababf2f1cb1ca1095198d207 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af59b0b88f5166381bae428823e50d1b9c35aa3bababf2f1cb1ca1095198d207'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a721b13a7637053bfab6b662733e5c96522726f5db682158bc2ab205831eecb6 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a721b13a7637053bfab6b662733e5c96522726f5db682158bc2ab205831eecb6'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452e34e505ea44cb4a72f11fae1429fa66dbf1814354268de1566d3d767f8557 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:452e34e505ea44cb4a72f11fae1429fa66dbf1814354268de1566d3d767f8557'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-kube-storage-version-migrator >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52eddfdb2ed25210c51d45df671b620eac44c88cff725ef8b60eb5fb66797b57 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52eddfdb2ed25210c51d45df671b620eac44c88cff725ef8b60eb5fb66797b57'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-kube-storage-version-migrator-operator >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f06ee470ae2c5a4833e3c12d992d67353ebfbb38e5dd57ad97d6944b4d97ad {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f06ee470ae2c5a4833e3c12d992d67353ebfbb38e5dd57ad97d6944b4d97ad'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-local-storage >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-local-storage-diskmaker@sha256:ba2b9250c4e033032429b0f57e4123daad23b6d55d1ddb15e1f7b2ffbb41e5b6 
{'registry.redhat.io/openshift4/ose-local-storage-diskmaker@sha256:ba2b9250c4e033032429b0f57e4123daad23b6d55d1ddb15e1f7b2ffbb41e5b6'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:cb836456974e510eb4bccbffadbc6d99d5f57c36caec54c767a158ffd8a025d5 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07709594c6a022711704ea48198f2b673803123f72c374d8a0ff67d4991b4365'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image registry.redhat.io/openshift4/ose-local-storage-operator@sha256:609b274214f1ca2883e7fa1f85707cad24871e7c05d8045f50c769ff67cd740a {'registry.redhat.io/openshift4/ose-local-storage-operator@sha256:0bf55a9885934a616f28ae9cb3f23a3c1989e4693de2a131dbd52c1658a47d63'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - storage namespace openshift-storage >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/openshift-ose-kube-rbac-proxy@sha256:cb836456974e510eb4bccbffadbc6d99d5f57c36caec54c767a158ffd8a025d5 {'quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07709594c6a022711704ea48198f2b673803123f72c374d8a0ff67d4991b4365'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-odf-csi-addons-rhel8-operator@sha256:07aecb7e66cc274baea123a91189c37cc99d0acaab54fb1bba2db6b97951e35b {'quay.io/rhceph-dev/odf4-odf-csi-addons-rhel8-operator@sha256:07aecb7e66cc274baea123a91189c37cc99d0acaab54fb1bba2db6b97951e35b'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-cephcsi-rhel8@sha256:1b902f6481a4a8d6ece15e34c4e64aae3e0497c046d6d7905d14b8a1e2a2c8f5 {'quay.io/rhceph-dev/odf4-cephcsi-rhel8@sha256:1b902f6481a4a8d6ece15e34c4e64aae3e0497c046d6d7905d14b8a1e2a2c8f5'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/openshift-ose-csi-node-driver-registrar@sha256:df29101fc9f8e47ee64d641a25dd527aa700240f6d97fdd79c4544ae0aa84553 
{'quay.io/rhceph-dev/openshift-ose-csi-node-driver-registrar@sha256:15a9e21abc14b693f5912953be5242e5a348735902c47853670ad8513db294b0'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/openshift-ose-csi-external-attacher@sha256:d3a3f2005f5f0c58c8879f5d45048788edd50b31c0b4fc73605099b8a86b9967 {'quay.io/rhceph-dev/openshift-ose-csi-external-attacher@sha256:2114e5317978c2c1477227815d67a9644be932f539aae3d87febeb93bbe1bb05'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/openshift-ose-csi-external-provisioner@sha256:2b373ee7206b65636332e8c747847a9abed6a5d042b65a0febc52b6c167673c2 {'quay.io/rhceph-dev/openshift-ose-csi-external-provisioner@sha256:2b373ee7206b65636332e8c747847a9abed6a5d042b65a0febc52b6c167673c2'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/openshift-ose-csi-external-resizer@sha256:66bd5d06f5b452cccbf2fa18fd92aa90c5050d0bc2531cb1a7a66ec5ae6d7b28 {'quay.io/rhceph-dev/openshift-ose-csi-external-resizer@sha256:33f868b7334ef6d42446969b89db81d5aa5054506a47afdeb92af2ae9edf06d3'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/openshift-ose-csi-external-snapshotter@sha256:9ebf3c04583b5a477c8911e8305a23a049a79824962bd4087e86152c0c700582 {'quay.io/rhceph-dev/openshift-ose-csi-external-snapshotter@sha256:9ebf3c04583b5a477c8911e8305a23a049a79824962bd4087e86152c0c700582'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-odf-csi-addons-sidecar-rhel8@sha256:f5477d4842ac16690cecdd40eb9a7d5d08ad4d468e5a03c9de68eed2ba98cb4d {'quay.io/rhceph-dev/odf4-odf-csi-addons-sidecar-rhel8@sha256:e4928dffe9bb3bc0e6a5ba53f3d46699b6caec1397c74a7a5176af232fb81d36'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:d4c5835811499197cc171ac81b0f2ed3d7d732773c8bdeba49598513d75a93b6 
{'quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:98ad2d2c44b70eb20a2a3c9a1d15c8d94e7907fb00fa62f92f5a3c9283a1a683'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/rhel8-postgresql-12@sha256:3d805540d777b09b4da6df99e7cddf9598d5ece4af9f6851721a9961df40f5a1 {'quay.io/rhceph-dev/rhel8-postgresql-12@sha256:33a9c6024b8181604b1b78e7e2ff191cf03014b72987124d54871c5767be18f8'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-mcg-rhel8-operator@sha256:16ef22df996c4f77e63761a96b843a74da35023af2c33ec1e0b72f16d7c81fd8 {'quay.io/rhceph-dev/odf4-mcg-rhel8-operator@sha256:16ef22df996c4f77e63761a96b843a74da35023af2c33ec1e0b72f16d7c81fd8'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-ocs-metrics-exporter-rhel8@sha256:bde51010f4dcf857e15748885f3fa0cae9efcefe68d2625505c7c38d892d1cd0 {'quay.io/rhceph-dev/odf4-ocs-metrics-exporter-rhel8@sha256:bde51010f4dcf857e15748885f3fa0cae9efcefe68d2625505c7c38d892d1cd0'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-ocs-rhel8-operator@sha256:dd19da7d420008f3d7544934b1b12abc1ec7dcd971edc006ebcdbe9e14c0a76b {'quay.io/rhceph-dev/odf4-ocs-rhel8-operator@sha256:dd19da7d420008f3d7544934b1b12abc1ec7dcd971edc006ebcdbe9e14c0a76b'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-odf-console-rhel8@sha256:da464fedc636cd39ea8b2a306427401ebc2b7ba31854fc50f0036ecd55fc8be7 {'quay.io/rhceph-dev/odf4-odf-console-rhel8@sha256:ccd81628688198098872cf64928865a2a7950f2675bca044ce5b1ea52d1e4de0'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-odf-rhel8-operator@sha256:64ad5456e46e04551488c0e0a3c10f02d6d1bef9a4e0273347b82acab9e8c101 {'quay.io/rhceph-dev/odf4-odf-rhel8-operator@sha256:1ddaf003e4169450676cdaf4fbf9810a4b72438a676a6d09c968e7a28790df4c'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image 
quay.io/rhceph-dev/rhceph@sha256:957294824e1cbf89ca24a1a2aa2a8e8acd567cfb5a25535e2624989ad1046a60 {'quay.io/rhceph-dev/rhceph@sha256:46350a10b847f2304b9a4c763b8319e541ae367437893e597de227105e429558'} >23:50:35 - MainThread - ocs_ci.ocs.version - INFO - image quay.io/rhceph-dev/odf4-rook-ceph-rhel8-operator@sha256:2c02ee954580490f052c4f1fcad1d1de40ea706c61f200c48dd1365f8bd78968 {'quay.io/rhceph-dev/odf4-rook-ceph-rhel8-operator@sha256:2c02ee954580490f052c4f1fcad1d1de40ea706c61f200c48dd1365f8bd78968'} >23:50:35 - MainThread - tests.conftest - INFO - human readable ocs version info written into /root/openstack-upi/ocs_version.2023-02-16T23:50:35.301669 >23:50:35 - MainThread - tests.conftest - INFO - PagerDuty service is not created because platform from ['openshiftdedicated', 'rosa'] is not used >23:50:35 - MainThread - ocs_ci.utility.utils - INFO - testrun_name: OCS4-12-Downstream-OCP4-12-POWERVS-UPI-1AZ-RHCOS-LSO-3M-3W-tier4b >23:50:35 - MainThread - tests.conftest - INFO - Looking for RGW service to expose >23:50:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Service -n openshift-storage --selector=app=rook-ceph-rgw -o yaml >23:50:35 - MainThread - tests.conftest - INFO - Service rook-ceph-rgw-ocs-storagecluster-cephobjectstore found and will be exposed >23:50:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Route noobaa-mgmt -n openshift-storage -o yaml >23:50:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage expose service/rook-ceph-rgw-ocs-storagecluster-cephobjectstore --hostname rgw.router-default.apps.rdr-tier-test.ibm.com >23:50:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Route rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o yaml >23:50:35 - MainThread - /root/ocs-upi-kvm/src/ocs-ci/ocs_ci/ocs/resources/cloud_manager.py - INFO - Trying to load 
credentials from ocs-ci-data. This flow is only relevant when running under OCS-QE environments. >23:50:35 - MainThread - /root/ocs-upi-kvm/src/ocs-ci/ocs_ci/utility/aws.py - INFO - Fetching authentication credentials from ocs-ci-data >23:50:37 - MainThread - /root/ocs-upi-kvm/src/ocs-ci/ocs_ci/utility/aws.py - WARNING - Failed to fetch auth.yaml from ocs-ci-data >23:50:37 - MainThread - /root/ocs-upi-kvm/src/ocs-ci/ocs_ci/ocs/resources/cloud_manager.py - WARNING - Failed to load credentials from ocs-ci-data. >Your local AWS credentials might be misconfigured. >Trying to load credentials from local auth.yaml instead >23:50:37 - MainThread - ocs_ci.utility.utils - INFO - Retrieving the authentication config dictionary >23:50:37 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Adding Secret with name secret-cldmgr-ibm-cos-aed5ec3272d04daab7 >23:50:37 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: v1 >data: > IBM_COS_ACCESS_KEY_ID: '*****' > IBM_COS_SECRET_ACCESS_KEY: '*****' >kind: Secret >metadata: > name: secret-cldmgr-ibm-cos-aed5ec3272d04daab7 > namespace: openshift-storage >type: Opaque > >23:50:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage create -f /tmp/Secret2vcdir0t -o yaml >23:50:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Secret secret-cldmgr-ibm-cos-aed5ec3272d04daab7 -n openshift-storage -o yaml >23:50:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get StorageCluster -n openshift-storage -o yaml >23:50:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get storageclass ocs-storagecluster-ceph-rgw -o yaml >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get storageclass ocs-storagecluster-ceph-rgw -o yaml >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get StorageCluster -n openshift-storage -o yaml 
>23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Route rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o yaml >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Secret rook-ceph-object-user-ocs-storagecluster-cephobjectstore-noobaa-ceph-objectstore-user -n openshift-storage -o yaml >23:50:38 - MainThread - ocs_ci.ocs.resources.ocs - INFO - Adding Secret with name secret-cldmgr-rgw-d954a5deaf974c9db0aebc >23:50:38 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: v1 >data: > AWS_ACCESS_KEY_ID: '*****' > AWS_SECRET_ACCESS_KEY: '*****' >kind: Secret >metadata: > name: secret-cldmgr-rgw-d954a5deaf974c9db0aebc > namespace: openshift-storage >type: Opaque > >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage create -f /tmp/Secrete3nnbu48 -o yaml >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Secret secret-cldmgr-rgw-d954a5deaf974c9db0aebc -n openshift-storage -o yaml >23:50:38 - MainThread - tests.conftest - INFO - Checking for Ceph Health OK >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-tools -n openshift-storage --timeout=300s >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pod -l 'app=rook-ceph-tools' -o jsonpath='{.items[0].metadata.name}' >23:50:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage exec rook-ceph-tools-65ffd8b86d-pn8wk -- ceph health >23:50:39 - MainThread - ocs_ci.utility.utils - INFO - Ceph cluster health is HEALTH_OK. 
>23:50:39 - MainThread - tests.conftest - INFO - Ceph health check passed at setup >23:50:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ['oc', 'login', '-u', 'kubeadmin', '-p', '*****'] >23:50:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-monitoring whoami --show-token >23:50:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-monitoring get Route prometheus-k8s -n openshift-monitoring -o yaml >23:50:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get StorageCluster -n openshift-storage -o yaml >23:50:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get CephCluster -n openshift-storage -o yaml >23:50:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get CephFilesystem -n openshift-storage -o yaml >23:50:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get CephBlockPool -n openshift-storage -o yaml >23:50:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml >23:50:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-mon -o yaml >23:50:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod rook-ceph-mon-a-85bddf5f88-8jxkf -n openshift-storage >23:50:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml >23:50:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod rook-ceph-mon-b-565949775b-zx7pl -n openshift-storage >23:50:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml >23:50:49 - MainThread - 
ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod rook-ceph-mon-c-6df78cddcd-kzbm8 -n openshift-storage >23:50:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml >23:50:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-mds -o yaml >23:50:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-mgr -o yaml >23:50:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-osd -o yaml >23:50:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=noobaa -o yaml >23:50:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-rgw -o yaml >23:50:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-tools -o yaml >23:50:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=app=rook-ceph-tools >23:50:53 - MainThread - ocs_ci.ocs.cluster - INFO - port=3300 >23:50:53 - MainThread - ocs_ci.ocs.cluster - INFO - port=3300 >23:50:53 - MainThread - ocs_ci.ocs.cluster - INFO - port=3300 >23:50:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get CephCluster ocs-storagecluster-cephcluster -n openshift-storage -o yaml >23:50:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get CephFilesystem ocs-storagecluster-cephfilesystem -n openshift-storage -o yaml >23:50:54 - MainThread - ocs_ci.ocs.cluster - 
INFO - Number of mons = 3 >23:50:54 - MainThread - ocs_ci.ocs.cluster - INFO - Number of mds = 2 >23:50:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-operator=deployment -o yaml >23:50:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-core=noobaa -o yaml >23:50:54 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind Pod identified by name 'noobaa-operator-5d8bc99c6c-m6k2f' using selector None at column name STATUS to reach desired condition Running >23:50:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-5d8bc99c6c-m6k2f -n openshift-storage -o yaml >23:50:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-5d8bc99c6c-m6k2f -n openshift-storage >23:50:54 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml >23:50:57 - MainThread - ocs_ci.ocs.ocp - INFO - status of noobaa-operator-5d8bc99c6c-m6k2f at STATUS reached condition! 
>23:50:57 - MainThread - ocs_ci.helpers.helpers - INFO - Pod noobaa-operator-5d8bc99c6c-m6k2f reached state Running >23:50:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-5d8bc99c6c-m6k2f -n openshift-storage -o yaml >23:50:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-operator-5d8bc99c6c-m6k2f bash -c "md5sum /usr/local/bin/noobaa-operator" >23:50:58 - MainThread - ocs_ci.ocs.resources.pod - INFO - md5sum of file /usr/local/bin/noobaa-operator: c75f752ebd98ec4ec0ac864d24a2e7e1 >23:50:58 - MainThread - /root/ocs-upi-kvm/src/ocs-ci/ocs_ci/ocs/resources/mcg.py - INFO - Remote noobaa cli md5 hash: c75f752ebd98ec4ec0ac864d24a2e7e1 >23:50:58 - MainThread - /root/ocs-upi-kvm/src/ocs-ci/ocs_ci/ocs/resources/mcg.py - INFO - Local noobaa cli md5 hash: c75f752ebd98ec4ec0ac864d24a2e7e1 >23:50:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-ingress-operator get secret router-ca -n openshift-ingress-operator -o yaml >23:50:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get noobaa -n openshift-storage -o yaml >23:50:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get secret noobaa-admin -n openshift-storage -o yaml >23:50:59 - MainThread - /root/ocs-upi-kvm/src/ocs-ci/ocs_ci/ocs/resources/mcg.py - INFO - Sending MCG RPC query: >auth_api create_auth {'role': 'admin', 'system': 'noobaa', 'email': 'admin@noobaa.io', 'password': 'FzDMqc8PxVfJ9Fity7HcwA=='} >23:50:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: sudo /usr/sbin/lsmod >23:50:59 - MainThread - ocs_ci.utility.powernodes - INFO - iskvm check: False >23:50:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node --selector=node-role.kubernetes.io/master -o yaml >23:50:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get 
Node --selector=node-role.kubernetes.io/worker -o yaml >23:50:59 - MainThread - ocs_ci.framework - INFO - Switched to cluster: ocstest >23:50:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: sudo /usr/sbin/lsmod >23:50:59 - MainThread - ocs_ci.utility.powernodes - INFO - iskvm check: False >23:50:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node --selector=node-role.kubernetes.io/master -o yaml >23:50:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node --selector=node-role.kubernetes.io/worker -o yaml >23:51:00 - MainThread - ocs_ci.framework - INFO - Switched to cluster: ocstest >23:51:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >-------------------------------- live log call --------------------------------- >23:51:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml >23:51:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage -o yaml >23:51:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >23:51:01 - MainThread - tests.manage.mcg.test_host_node_failure - INFO - noobaa-db-pg-0 is running on lon06-worker-0.rdr-tier-test.ibm.com >23:51:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-operator=deployment -o yaml >23:51:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-5d8bc99c6c-m6k2f -n openshift-storage -o yaml >23:51:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >23:51:01 - MainThread - tests.manage.mcg.test_host_node_failure - INFO - noobaa-operator-5d8bc99c6c-m6k2f is running on lon06-worker-1.rdr-tier-test.ibm.com >23:51:01 - 
MainThread - tests.manage.mcg.test_host_node_failure - INFO - noobaa-db-pg-0 and noobaa-operator-5d8bc99c6c-m6k2f are running on different node. >23:51:01 - MainThread - tests.manage.mcg.test_host_node_failure - INFO - Stopping lon06-worker-0.rdr-tier-test.ibm.com where noobaa-db-pg-0 is hosted >23:51:01 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@127.0.0.1 ssh core@192.168.0.210 sudo systemctl stop kubelet.service -f >23:51:01 - MainThread - ocs_ci.ocs.node - INFO - Waiting for nodes ['lon06-worker-0.rdr-tier-test.ibm.com'] to reach status NotReady >23:51:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >23:51:02 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com >23:51:02 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >23:51:02 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration >23:51:02 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - WARNING - Command stderr: Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts. > >23:51:02 - ThreadPoolExecutor-6_0 - ocs_ci.utility.service - INFO - Result of shutdown CompletedProcess(args=['ssh', '-o', 'StrictHostKeyChecking=no', '-o', 'UserKnownHostsFile=/dev/null', 'root@127.0.0.1', 'ssh', 'core@192.168.0.210', 'sudo', 'systemctl', 'stop', 'kubelet.service', '-f'], returncode=0, stdout=b'', stderr=b"Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts.\r\n"). Checking if service kubelet went down. 
>23:51:02 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@127.0.0.1 ssh core@192.168.0.210 sudo systemctl is-active kubelet.service
>23:51:03 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - WARNING - Command stderr: Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts.
>
>23:51:03 - ThreadPoolExecutor-6_0 - ocs_ci.utility.service - INFO - Action succeeded.
>23:51:03 - ThreadPoolExecutor-6_0 - ocs_ci.ocs.node - INFO - Waiting for nodes ['lon06-worker-0.rdr-tier-test.ibm.com'] to reach status NotReady
>23:51:03 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:03 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:03 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:03 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:06 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:06 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:07 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:07 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:07 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:10 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:10 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:11 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:11 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:11 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:13 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:14 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:14 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:14 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:14 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:15 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:17 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:18 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:18 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:18 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:19 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:20 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:21 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:21 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:21 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:22 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:22 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:22 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:22 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:24 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:25 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:25 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:25 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:25 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:26 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:26 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:26 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:28 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:28 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:28 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:29 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:29 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:30 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:30 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:30 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:32 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:33 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:33 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:33 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:34 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:34 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:36 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:37 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:37 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:37 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:38 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:40 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration
>23:51:41 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:41 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:41 - ThreadPoolExecutor-6_0 - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:42 - ThreadPoolExecutor-6_0 - ocs_ci.ocs.node - INFO - Node lon06-worker-0.rdr-tier-test.ibm.com reached status NotReady
>23:51:42 - ThreadPoolExecutor-6_0 - ocs_ci.ocs.node - INFO - The following nodes reached status NotReady: ['lon06-worker-0.rdr-tier-test.ibm.com']
>23:51:42 - ThreadPoolExecutor-6_0 - ocs_ci.utility.powernodes - INFO - Waiting for 360 seconds
>23:51:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:51:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>23:51:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>23:51:44 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-worker-0.rdr-tier-test.ibm.com reached status NotReady
>23:51:44 - MainThread - ocs_ci.ocs.node - INFO - The following nodes reached status NotReady: ['lon06-worker-0.rdr-tier-test.ibm.com']
>23:51:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage delete Pod noobaa-operator-5d8bc99c6c-m6k2f --grace-period=0 --force
>23:51:44 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
>
>23:57:42 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind Pod identified by name '' using selector noobaa-operator=deployment at column name STATUS to reach desired condition Running
>23:57:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-operator=deployment -o yaml
>23:57:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-operator-5d8bc99c6c-hm4ch -n openshift-storage
>23:57:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage -o yaml
>23:57:45 - MainThread - ocs_ci.ocs.ocp - INFO - 1 resources already reached condition!
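Editor's note: the NotReady wait above is a plain poll-and-sleep loop (query node status, sleep 3 seconds, repeat until a timeout). A minimal sketch of that pattern in Python; the names are hypothetical illustrations, not the actual ocs-ci helpers:

```python
import time

def wait_for_status(get_status, desired, timeout=120.0, sleep=3.0):
    """Poll get_status() until it returns `desired` or `timeout` seconds pass.

    Mirrors the loop in the log: run the query, then
    "Going to sleep for 3 seconds before next iteration".
    Returns True when the desired status is seen, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == desired:
            return True
        time.sleep(sleep)
    return False

# Fake status source for illustration: reports Ready twice, then NotReady.
answers = iter(["Ready", "Ready", "NotReady"])
reached = wait_for_status(lambda: next(answers), "NotReady", timeout=5, sleep=0.01)
print(reached)  # True
```

Two threads (MainThread and a ThreadPoolExecutor worker) run this loop concurrently in the log, which is why the same `oc get Node` queries interleave.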
>23:57:45 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>23:57:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage -o yaml
>23:57:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>23:57:46 - MainThread - tests.manage.mcg.test_host_node_failure - INFO - noobaa-db-pg-0 has been rescheduled on lon06-worker-2.rdr-tier-test.ibm.com
>23:57:46 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for a resource(s) of kind Pod identified by name '' using selector noobaa-db=postgres at column name STATUS to reach desired condition Running
>23:57:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>23:57:46 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>23:57:46 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>23:57:46 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>23:58:16 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>23:58:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>23:58:17 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>23:58:17 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>23:58:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>23:58:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>23:58:47 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>23:58:47 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>23:59:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>23:59:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>23:59:17 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>23:59:17 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>23:59:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>23:59:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>23:59:47 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>23:59:47 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:00:17 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:00:18 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:00:18 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:00:18 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:00:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:00:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:00:48 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:00:48 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:01:18 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:01:18 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:01:18 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:01:18 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:01:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:01:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:01:48 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:01:48 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:02:18 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:02:19 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:02:19 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:02:19 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:02:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:02:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:02:49 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:02:49 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:03:19 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:03:19 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:03:19 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:03:19 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:03:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:03:49 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:03:49 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:03:49 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:04:19 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:04:20 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:04:20 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:04:20 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:04:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:04:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:04:50 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:04:50 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:05:20 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:05:20 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:05:20 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:05:20 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:05:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:05:50 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:05:50 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:05:50 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:06:21 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:06:21 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:06:21 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:06:21 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:06:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:06:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:06:51 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:06:51 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:07:21 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:07:21 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:07:21 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:07:21 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:07:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:07:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:07:51 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:07:51 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:08:22 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:08:22 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:08:22 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:08:22 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:08:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:08:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:08:52 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:08:52 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:09:22 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:09:22 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:09:22 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:09:22 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:09:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:09:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:09:53 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:09:53 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:10:23 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:10:23 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:10:23 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:10:23 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:10:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:10:53 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod noobaa-db-pg-0 -n openshift-storage
>00:10:53 - MainThread - ocs_ci.ocs.ocp - INFO - status of at column STATUS - item(s) were ['Init:0/2'], but we were waiting for all 1 of them to be Running
>00:10:53 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 30 seconds before next iteration
>00:11:23 - MainThread - ocs_ci.ocs.ocp - ERROR - timeout expired: Timed out after 800s running get("", True, "noobaa-db=postgres")
>00:11:23 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage describe Pod --selector=noobaa-db=postgres
>00:11:23 - MainThread - ocs_ci.ocs.ocp - WARNING - Description of the resource(s) we were waiting for:
>Name: noobaa-db-pg-0
>Namespace: openshift-storage
>Priority: 0
>Service Account: noobaa-db
>Node: lon06-worker-2.rdr-tier-test.ibm.com/192.168.0.191
>Start Time: Thu, 16 Feb 2023 23:51:41 -0500
>Labels: app=noobaa
> controller-revision-hash=noobaa-db-pg-7497d7cc46
> noobaa-db=postgres
> statefulset.kubernetes.io/pod-name=noobaa-db-pg-0
>Annotations: k8s.ovn.org/pod-networks:
> {"default":{"ip_addresses":["10.128.2.31/23"],"mac_address":"0a:58:0a:80:02:1f","gateway_ips":["10.128.2.1"],"ip_address":"10.128.2.31/23"...
> openshift.io/scc: noobaa-db
>Status: Pending
>IP:
>IPs: <none>
>Controlled By: StatefulSet/noobaa-db-pg
>Init Containers:
> init:
> Container ID:
> Image: quay.io/rhceph-dev/odf4-mcg-core-rhel8@sha256:d4c5835811499197cc171ac81b0f2ed3d7d732773c8bdeba49598513d75a93b6
> Image ID:
> Port: <none>
> Host Port: <none>
> Command:
> /noobaa_init_files/noobaa_init.sh
> init_postgres
> State: Waiting
> Reason: PodInitializing
> Ready: False
> Restart Count: 0
> Limits:
> cpu: 500m
> memory: 500Mi
> Requests:
> cpu: 500m
> memory: 500Mi
> Environment: <none>
> Mounts:
> /var/lib/pgsql from db (rw)
> /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mnf6p (ro)
> initialize-database:
> Container ID:
> Image: quay.io/rhceph-dev/rhel8-postgresql-12@sha256:3d805540d777b09b4da6df99e7cddf9598d5ece4af9f6851721a9961df40f5a1
> Image ID:
> Port: <none>
> Host Port: <none>
> Command:
> sh
> -x
> /init/initdb.sh
> State: Waiting
> Reason: PodInitializing
> Ready: False
> Restart Count: 0
> Limits:
> cpu: 500m
> memory: 500Mi
> Requests:
> cpu: 500m
> memory: 500Mi
> Environment:
> POSTGRESQL_DATABASE: nbcore
> LC_COLLATE: C
> POSTGRESQL_USER: <set to the key 'user' in secret 'noobaa-db'> Optional: false
> POSTGRESQL_PASSWORD: <set to the key 'password' in secret 'noobaa-db'> Optional: false
> Mounts:
> /init from noobaa-postgres-initdb-sh-volume (rw)
> /var/lib/pgsql from db (rw)
> /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mnf6p (ro)
>Containers:
> db:
> Container ID:
> Image: quay.io/rhceph-dev/rhel8-postgresql-12@sha256:3d805540d777b09b4da6df99e7cddf9598d5ece4af9f6851721a9961df40f5a1
> Image ID:
> Port: 5432/TCP
> Host Port: 0/TCP
> State: Waiting
> Reason: PodInitializing
> Ready: False
> Restart Count: 0
> Limits:
> cpu: 500m
> memory: 4Gi
> Requests:
> cpu: 500m
> memory: 4Gi
> Environment:
> POSTGRESQL_DATABASE: nbcore
> LC_COLLATE: C
> POSTGRESQL_USER: <set to the key 'user' in secret 'noobaa-db'> Optional: false
> POSTGRESQL_PASSWORD: <set to the key 'password' in secret 'noobaa-db'> Optional: false
> Mounts:
> /init from noobaa-postgres-initdb-sh-volume (rw)
> /opt/app-root/src/postgresql-cfg from noobaa-postgres-config-volume (rw)
> /var/lib/pgsql from db (rw)
> /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mnf6p (ro)
>Conditions:
> Type Status
> Initialized False
> Ready False
> ContainersReady False
> PodScheduled True
>Volumes:
> db:
> Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
> ClaimName: db-noobaa-db-pg-0
> ReadOnly: false
> noobaa-postgres-config-volume:
> Type: ConfigMap (a volume populated by a ConfigMap)
> Name: noobaa-postgres-config
> Optional: false
> noobaa-postgres-initdb-sh-volume:
> Type: ConfigMap (a volume populated by a ConfigMap)
> Name: noobaa-postgres-initdb-sh
> Optional: false
> kube-api-access-mnf6p:
> Type: Projected (a volume that contains injected data from multiple sources)
> TokenExpirationSeconds: 3607
> ConfigMapName: kube-root-ca.crt
> ConfigMapOptional: <nil>
> DownwardAPI: true
> ConfigMapName: openshift-service-ca.crt
> ConfigMapOptional: <nil>
>QoS Class: Guaranteed
>Node-Selectors: <none>
>Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
> node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
> node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
> node.ocs.openshift.io/storage=true:NoSchedule
>Events:
> Type Reason Age From Message
> ---- ------ ---- ---- -------
> Normal Scheduled 19m default-scheduler Successfully assigned openshift-storage/noobaa-db-pg-0 to lon06-worker-2.rdr-tier-test.ibm.com
> Warning FailedAttachVolume 19m attachdetach-controller Multi-Attach error for volume "pvc-a6e32e33-cd14-41cf-a2cd-f74bcadcb71d" Volume is already exclusively attached to one node and can't be attached to another
> Normal SuccessfulAttachVolume 13m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-a6e32e33-cd14-41cf-a2cd-f74bcadcb71d"
> Warning FailedMount 11m (x2 over 13m) kubelet Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[kube-api-access-mnf6p noobaa-postgres-initdb-sh-volume noobaa-postgres-config-volume db]: timed out waiting for the condition
> Warning FailedMount 3m15s (x5 over 17m) kubelet Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[db kube-api-access-mnf6p noobaa-postgres-initdb-sh-volume noobaa-postgres-config-volume]: timed out waiting for the condition
> Warning FailedMount 77s (x10 over 12m) kubelet MountVolume.MountDevice failed for volume "pvc-a6e32e33-cd14-41cf-a2cd-f74bcadcb71d" : rpc error: code = Internal desc = rbd image ocs-storagecluster-cephblockpool/csi-vol-79a70d59-fbdf-40b5-9759-2d3386c4a885 is still being used
> Warning FailedMount 71s (x2 over 7m22s) kubelet Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[noobaa-postgres-config-volume db kube-api-access-mnf6p noobaa-postgres-initdb-sh-volume]: timed out waiting for the condition
>
>00:11:23 - MainThread - ocs_ci.ocs.ocp - ERROR - Wait for Pod resource at column STATUS to reach desired condition Running failed, last actual status was ['Init:0/2']
>00:11:23 - MainThread - ocs_ci.ocs.utils - INFO - Must gather image: quay.io/rhceph-dev/ocs-must-gather:latest-4.12 will be used.
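Editor's note: on timeout the framework dumps `oc describe` for the pod, and the Warning events in that dump (Multi-Attach, FailedMount, "rbd image ... is still being used") carry the actual diagnosis. A small sketch of pulling the Warning rows out of such a dump; the helper name and regex are illustrative, not part of ocs-ci:

```python
import re

# Matches Warning rows of a `describe` Events table:
#   Type  Reason  Age  From  Message   (Age may carry "(xN over M)")
EVENT_ROW = re.compile(
    r"\s*Warning\s+(\S+)\s+\S+(?:\s+\(x\d+ over [^)]+\))?\s+\S+\s+(.+)"
)

def warning_events(describe_text):
    """Return (reason, message) for each Warning row in a describe dump."""
    return [m.groups() for line in describe_text.splitlines()
            if (m := EVENT_ROW.match(line))]

sample = """\
Events:
  Type     Reason              Age                 From                     Message
  ----     ------              ----                ----                     -------
  Normal   Scheduled           19m                 default-scheduler        Successfully assigned openshift-storage/noobaa-db-pg-0 to lon06-worker-2
  Warning  FailedAttachVolume  19m                 attachdetach-controller  Multi-Attach error for volume "pvc-a6e32e33"
  Warning  FailedMount         77s (x10 over 12m)  kubelet                  MountVolume.MountDevice failed: rbd image is still being used
"""
for reason, message in warning_events(sample):
    print(reason, "->", message)
```

Here the pattern is the classic RWO block-volume conflict: the powered-off node still holds the RBD attachment, so the replacement pod on the new node cannot mount `db` until the old attachment is cleaned up.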
>00:11:23 - MainThread - ocs_ci.ocs.utils - INFO - OCS logs will be placed in location /root/logs-ocs-ci/4.12/failed_testcase_ocs_logs_1676609418/test_noobaa_sts_host_node_failure[noobaa-db-pg-True]_ocs_logs/ocs_must_gather
>00:11:23 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.12 --dest-dir=/root/logs-ocs-ci/4.12/failed_testcase_ocs_logs_1676609418/test_noobaa_sts_host_node_failure[noobaa-db-pg-True]_ocs_logs/ocs_must_gather
>00:18:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:18:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz"
>00:18:05 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:18:05 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz".
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:18:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:18:35 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz"
>00:18:35 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:18:35 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz".
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:19:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:19:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz"
>00:19:06 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:19:06 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz".
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:19:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:19:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz"
>00:19:36 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:19:36 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz".
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:20:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml
>00:20:06 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz"
>00:20:06 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>00:20:06 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz".
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init)
>error: unable to upgrade connection: container not found ("db")
>
>FAILED
>------------------------------ live log teardown -------------------------------
>00:20:36 - MainThread - ocs_ci.framework - INFO - Switched to cluster: ocstest
>00:20:36 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml
>00:20:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-master-0.rdr-tier-test.ibm.com
>00:20:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>00:20:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-master-1.rdr-tier-test.ibm.com
>00:20:37 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>00:20:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-master-2.rdr-tier-test.ibm.com
>00:20:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>00:20:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com
>00:20:38 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>00:20:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-1.rdr-tier-test.ibm.com
>00:20:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>00:20:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-2.rdr-tier-test.ibm.com
>00:20:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml
>00:20:39 - MainThread - ocs_ci.ocs.platform_nodes - INFO - The following PowerNodes are powered off: [<ocs_ci.ocs.resources.ocs.OCS object at 0x7fff7fba3460>]
>00:20:39 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ssh -o StrictHostKeyChecking=no -o
UserKnownHostsFile=/dev/null root@127.0.0.1 ssh core@192.168.0.210 sudo systemctl start kubelet.service >00:20:43 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts. > >00:20:43 - MainThread - ocs_ci.utility.service - INFO - Result of start of service kubelet is CompletedProcess(args=['ssh', '-o', 'StrictHostKeyChecking=no', '-o', 'UserKnownHostsFile=/dev/null', 'root@127.0.0.1', 'ssh', 'core@192.168.0.210', 'sudo', 'systemctl', 'start', 'kubelet.service'], returncode=0, stdout=b'', stderr=b"Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts.\r\n") >00:20:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@127.0.0.1 ssh core@192.168.0.210 sudo systemctl is-active kubelet.service >00:20:43 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts. > >00:20:43 - MainThread - ocs_ci.utility.service - INFO - Action succeeded. 
>00:20:43 - MainThread - ocs_ci.ocs.ocp - INFO - Waiting for cluster connectivity >00:20:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Service -o yaml >00:20:43 - MainThread - ocs_ci.ocs.node - INFO - Waiting for nodes ['lon06-worker-0.rdr-tier-test.ibm.com'] to reach status Ready >00:20:43 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >00:20:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com >00:20:44 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:44 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration >00:20:47 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >00:20:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com >00:20:48 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:48 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration >00:20:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >00:20:51 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com >00:20:52 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:52 - MainThread - ocs_ci.utility.utils - INFO - Going to sleep for 3 seconds before next iteration >00:20:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >00:20:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com >00:20:55 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:56 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-worker-0.rdr-tier-test.ibm.com reached status Ready >00:20:56 - 
MainThread - ocs_ci.ocs.node - INFO - The following nodes reached status Ready: ['lon06-worker-0.rdr-tier-test.ibm.com'] >00:20:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >00:20:56 - MainThread - ocs_ci.ocs.node - INFO - Waiting for nodes ['lon06-master-0.rdr-tier-test.ibm.com', 'lon06-master-1.rdr-tier-test.ibm.com', 'lon06-master-2.rdr-tier-test.ibm.com', 'lon06-worker-0.rdr-tier-test.ibm.com', 'lon06-worker-1.rdr-tier-test.ibm.com', 'lon06-worker-2.rdr-tier-test.ibm.com'] to reach status Ready >00:20:56 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get node -o yaml >00:20:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-master-0.rdr-tier-test.ibm.com >00:20:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:57 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-master-0.rdr-tier-test.ibm.com reached status Ready >00:20:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-master-1.rdr-tier-test.ibm.com >00:20:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:57 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-master-1.rdr-tier-test.ibm.com reached status Ready >00:20:57 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-master-2.rdr-tier-test.ibm.com >00:20:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:58 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-master-2.rdr-tier-test.ibm.com reached status Ready >00:20:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-0.rdr-tier-test.ibm.com >00:20:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:58 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-worker-0.rdr-tier-test.ibm.com reached status Ready >00:20:58 - MainThread - 
ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-1.rdr-tier-test.ibm.com >00:20:58 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:59 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-worker-1.rdr-tier-test.ibm.com reached status Ready >00:20:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node lon06-worker-2.rdr-tier-test.ibm.com >00:20:59 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc get Node -o yaml >00:20:59 - MainThread - ocs_ci.ocs.node - INFO - Node lon06-worker-2.rdr-tier-test.ibm.com reached status Ready >00:20:59 - MainThread - ocs_ci.ocs.node - INFO - The following nodes reached status Ready: ['lon06-master-0.rdr-tier-test.ibm.com', 'lon06-master-1.rdr-tier-test.ibm.com', 'lon06-master-2.rdr-tier-test.ibm.com', 'lon06-worker-0.rdr-tier-test.ibm.com', 'lon06-worker-1.rdr-tier-test.ibm.com', 'lon06-worker-2.rdr-tier-test.ibm.com'] >00:21:00 - MainThread - tests.conftest - WARNING - During test were raised new alerts >00:21:00 - MainThread - tests.conftest - WARNING - [{'labels': {'alertname': 'PodDisruptionBudgetAtLimit', 'namespace': 'openshift-storage', 'poddisruptionbudget': 'rook-ceph-mon-pdb', 'severity': 'warning'}, 'annotations': {'description': 'The pod disruption budget is at the minimum disruptions allowed level. 
The number of current healthy pods is equal to the desired healthy pods.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetAtLimit.md', 'summary': 'The pod disruption budget is preventing further disruption to pods.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:51:58.457429033Z', 'value': '2e+00'}, {'labels': {'alertname': 'PodDisruptionBudgetAtLimit', 'namespace': 'openshift-storage', 'poddisruptionbudget': 'rook-ceph-osd', 'severity': 'warning'}, 'annotations': {'description': 'The pod disruption budget is at the minimum disruptions allowed level. The number of current healthy pods is equal to the desired healthy pods.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetAtLimit.md', 'summary': 'The pod disruption budget is preventing further disruption to pods.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:51:58.457429033Z', 'value': '2e+00'}, {'labels': {'alertname': 'PodDisruptionBudgetAtLimit', 'namespace': 'openshift-monitoring', 'poddisruptionbudget': 'alertmanager-main', 'severity': 'warning'}, 'annotations': {'description': 'The pod disruption budget is at the minimum disruptions allowed level. The number of current healthy pods is equal to the desired healthy pods.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetAtLimit.md', 'summary': 'The pod disruption budget is preventing further disruption to pods.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:51:58.457429033Z', 'value': '1e+00'}, {'labels': {'alertname': 'PodDisruptionBudgetAtLimit', 'namespace': 'openshift-monitoring', 'poddisruptionbudget': 'prometheus-k8s', 'severity': 'warning'}, 'annotations': {'description': 'The pod disruption budget is at the minimum disruptions allowed level. 
The number of current healthy pods is equal to the desired healthy pods.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-kube-controller-manager-operator/PodDisruptionBudgetAtLimit.md', 'summary': 'The pod disruption budget is preventing further disruption to pods.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:51:58.457429033Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeletHealthState', 'container': 'oauth-proxy', 'endpoint': 'metrics', 'instance': '192.168.0.210:9001', 'job': 'machine-config-daemon', 'namespace': 'openshift-machine-config-operator', 'node': 'lon06-worker-0.rdr-tier-test.ibm.com', 'pod': 'machine-config-daemon-g5ntt', 'service': 'machine-config-daemon', 'severity': 'warning'}, 'annotations': {'message': 'Kubelet health failure threshold reached'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:30.269758637Z', 'value': '5.9e+01'}, {'labels': {'alertname': 'KubeNodeNotReady', 'condition': 'Ready', 'container': 'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-monitoring', 'node': 'lon06-worker-0.rdr-tier-test.ibm.com', 'service': 'kube-state-metrics', 'severity': 'warning', 'status': 'true'}, 'annotations': {'description': 'lon06-worker-0.rdr-tier-test.ibm.com has been unready for more than 15 minutes.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-monitoring-operator/KubeNodeNotReady.md', 'summary': 'Node is not ready.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:51:57.703862256Z', 'value': '0e+00'}, {'labels': {'alertname': 'KubeNodeUnreachable', 'container': 'kube-rbac-proxy-main', 'effect': 'NoSchedule', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'key': 'node.kubernetes.io/unreachable', 'namespace': 'openshift-monitoring', 'node': 'lon06-worker-0.rdr-tier-test.ibm.com', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'lon06-worker-0.rdr-tier-test.ibm.com is 
unreachable and some workloads may be rescheduled.', 'summary': 'Node is unreachable.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:51:57.703862256Z', 'value': '1e+00'}, {'labels': {'alertname': 'TargetDown', 'job': 'kubelet', 'namespace': 'kube-system', 'service': 'kubelet', 'severity': 'warning'}, 'annotations': {'description': '11.11% of the kubelet/kubelet targets in kube-system namespace have been unreachable for more than 15 minutes. This may be a symptom of network connectivity issues, down nodes, or failures within these components. Assess the health of the infrastructure and nodes running these targets and then contact support.', 'summary': 'Some targets were not reachable from the monitoring server for an extended period of time.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:51:20.126983224Z', 'value': '1.111111111111111e+01'}, {'labels': {'alertname': 'TargetDown', 'job': 'noobaa-mgmt', 'namespace': 'openshift-storage', 'service': 'noobaa-mgmt', 'severity': 'warning'}, 'annotations': {'description': '100% of the noobaa-mgmt/noobaa-mgmt targets in openshift-storage namespace have been unreachable for more than 15 minutes. This may be a symptom of network connectivity issues, down nodes, or failures within these components. 
Assess the health of the infrastructure and nodes running these targets and then contact support.', 'summary': 'Some targets were not reachable from the monitoring server for an extended period of time.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:50.126983224Z', 'value': '1e+02'}, {'labels': {'alertname': 'PodStartupStorageOperationsFailing', 'endpoint': 'https-metrics', 'instance': '192.168.0.191:10250', 'job': 'kubelet', 'metrics_path': '/metrics', 'migrated': 'false', 'namespace': 'kube-system', 'node': 'lon06-worker-2.rdr-tier-test.ibm.com', 'operation_name': 'volume_mount', 'service': 'kubelet', 'severity': 'info', 'status': 'fail-unknown', 'volume_plugin': 'kubernetes.io/csi:openshift-storage.rbd.csi.ceph.com'}, 'annotations': {'description': 'Failing storage operation "volume_mount" of volume plugin kubernetes.io/csi:openshift-storage.rbd.csi.ceph.com was preventing Pods on node lon06-worker-2.rdr-tier-test.ibm.com\nfrom starting for past 5 minutes.\nPlease investigate Pods that are "ContainerCreating" on the node: "oc get pod --field-selector=spec.nodeName=ip-10-0-130-168.ec2.internal --all-namespaces | grep ContainerCreating".\nEvents of the Pods should contain exact error message: "oc describe pod -n <pod namespace> <pod name>".\n', 'summary': "Pods can't start because volume_mount of volume plugin kubernetes.io/csi:openshift-storage.rbd.csi.ceph.com is permanently failing on node lon06-worker-2.rdr-tier-test.ibm.com."}, 'state': 'firing', 'activeAt': '2023-02-17T04:59:55.877066264Z', 'value': '2.2222222222222223e+00'}, {'labels': {'alertname': 'ClusterNotUpgradeable', 'condition': 'Upgradeable', 'endpoint': 'metrics', 'name': 'version', 'namespace': 'openshift-cluster-version', 'severity': 'info'}, 'annotations': {'description': "In most cases, you will still be able to apply patch releases. Reason PoolUpdating. 
For more information refer to 'oc adm upgrade' or https://console-openshift-console.apps.rdr-tier-test.ibm.com/settings/cluster/.", 'summary': 'One or more cluster operators have been blocking minor version cluster upgrades for at least an hour.'}, 'state': 'pending', 'activeAt': '2023-02-17T05:04:03.30199667Z', 'value': '0e+00'}, {'labels': {'alertname': 'ClusterOperatorDown', 'name': 'machine-config', 'namespace': 'openshift-cluster-version', 'severity': 'critical'}, 'annotations': {'description': "The machine-config operator may be down or disabled, and the components it manages may be unavailable or degraded. Cluster upgrades may not complete. For more information refer to 'oc get -o yaml clusteroperator machine-config' or https://console-openshift-console.apps.rdr-tier-test.ibm.com/settings/cluster/.", 'summary': 'Cluster operator has not been available for 10 minutes.'}, 'state': 'firing', 'activeAt': '2023-02-17T05:02:33.30199667Z', 'value': '0e+00'}, {'labels': {'alertname': 'ClusterOperatorDown', 'name': 'monitoring', 'namespace': 'openshift-cluster-version', 'severity': 'critical'}, 'annotations': {'description': "The monitoring operator may be down or disabled, and the components it manages may be unavailable or degraded. Cluster upgrades may not complete. For more information refer to 'oc get -o yaml clusteroperator monitoring' or https://console-openshift-console.apps.rdr-tier-test.ibm.com/settings/cluster/.", 'summary': 'Cluster operator has not been available for 10 minutes.'}, 'state': 'firing', 'activeAt': '2023-02-17T05:07:33.30199667Z', 'value': '0e+00'}, {'labels': {'alertname': 'ClusterOperatorDegraded', 'name': 'monitoring', 'namespace': 'openshift-cluster-version', 'reason': 'MultipleTasksFailed', 'severity': 'warning'}, 'annotations': {'description': "The monitoring operator is degraded because MultipleTasksFailed, and the components it manages may have reduced quality of service. Cluster upgrades may not complete. 
For more information refer to 'oc get -o yaml clusteroperator monitoring' or https://console-openshift-console.apps.rdr-tier-test.ibm.com/settings/cluster/.", 'summary': 'Cluster operator has been degraded for 30 minutes.'}, 'state': 'pending', 'activeAt': '2023-02-17T05:07:33.30199667Z', 'value': '1e+00'}, {'labels': {'alertname': 'ClusterOperatorDegraded', 'name': 'machine-config', 'namespace': 'openshift-cluster-version', 'reason': 'MachineConfigDaemonFailed', 'severity': 'warning'}, 'annotations': {'description': "The machine-config operator is degraded because MachineConfigDaemonFailed, and the components it manages may have reduced quality of service. Cluster upgrades may not complete. For more information refer to 'oc get -o yaml clusteroperator machine-config' or https://console-openshift-console.apps.rdr-tier-test.ibm.com/settings/cluster/.", 'summary': 'Cluster operator has been degraded for 30 minutes.'}, 'state': 'pending', 'activeAt': '2023-02-17T05:02:33.30199667Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubePodNotReady', 'namespace': 'openshift-storage', 'pod': 'noobaa-db-pg-0', 'severity': 'warning'}, 'annotations': {'description': 'Pod openshift-storage/noobaa-db-pg-0 has been in a non-ready state for longer than 15 minutes.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-monitoring-operator/KubePodNotReady.md', 'summary': 'Pod has been in a non-ready state for more than 15 minutes.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeStatefulSetReplicasMismatch', 'container': 'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-monitoring', 'service': 'kube-state-metrics', 'severity': 'warning', 'statefulset': 'prometheus-k8s'}, 'annotations': {'description': 'StatefulSet openshift-monitoring/prometheus-k8s has not matched the expected number of replicas for longer than 15 minutes.', 'summary': 
'Deployment has not matched the expected number of replicas.'}, 'state': 'pending', 'activeAt': '2023-02-17T05:06:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeStatefulSetReplicasMismatch', 'container': 'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning', 'statefulset': 'noobaa-db-pg'}, 'annotations': {'description': 'StatefulSet openshift-storage/noobaa-db-pg has not matched the expected number of replicas for longer than 15 minutes.', 'summary': 'Deployment has not matched the expected number of replicas.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '0e+00'}, {'labels': {'alertname': 'KubeStatefulSetReplicasMismatch', 'container': 'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-monitoring', 'service': 'kube-state-metrics', 'severity': 'warning', 'statefulset': 'alertmanager-main'}, 'annotations': {'description': 'StatefulSet openshift-monitoring/alertmanager-main has not matched the expected number of replicas for longer than 15 minutes.', 'summary': 'Deployment has not matched the expected number of replicas.'}, 'state': 'pending', 'activeAt': '2023-02-17T05:06:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'diskmaker-manager', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-local-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-local-storage/diskmaker-manager has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:56:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 
'ingress-canary', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-ingress-canary', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-ingress-canary/ingress-canary has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:56:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'network-metrics-daemon', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-multus', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-multus/network-metrics-daemon has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'ovnkube-node', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-ovn-kubernetes', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-ovn-kubernetes/ovnkube-node has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'machine-config-daemon', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-machine-config-operator', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-machine-config-operator/machine-config-daemon has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout 
is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'node-exporter', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-monitoring', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-monitoring/node-exporter has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'csi-rbdplugin', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-storage/csi-rbdplugin has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:56:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'diskmaker-discovery', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-local-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-local-storage/diskmaker-discovery has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:56:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'csi-cephfsplugin', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': 
{'description': 'DaemonSet openshift-storage/csi-cephfsplugin has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:56:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'multus', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-multus', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-multus/multus has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'network-check-target', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-network-diagnostics', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-network-diagnostics/network-check-target has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'node-resolver', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-dns', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-dns/node-resolver has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'tuned', 'endpoint': 'https-main', 'job': 
'kube-state-metrics', 'namespace': 'openshift-cluster-node-tuning-operator', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-cluster-node-tuning-operator/tuned has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'multus-additional-cni-plugins', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-multus', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-multus/multus-additional-cni-plugins has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '5e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'dns-default', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-dns', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-dns/dns-default has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:56:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetRolloutStuck', 'container': 'kube-rbac-proxy-main', 'daemonset': 'node-ca', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-image-registry', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'DaemonSet openshift-image-registry/node-ca has not finished or progressed for at least 30 minutes.', 'summary': 'DaemonSet rollout is stuck.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': 
'5e+00'}, {'labels': {'alertname': 'KubeContainerWaiting', 'container': 'db', 'namespace': 'openshift-storage', 'pod': 'noobaa-db-pg-0', 'severity': 'warning'}, 'annotations': {'description': 'pod/noobaa-db-pg-0 in namespace openshift-storage on container db has been in waiting state for longer than 1 hour.', 'summary': 'Pod container waiting longer than 1 hour'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetMisScheduled', 'container': 'kube-rbac-proxy-main', 'daemonset': 'csi-rbdplugin', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': '1 Pods of DaemonSet openshift-storage/csi-rbdplugin are running where they are not supposed to run.', 'summary': 'DaemonSet pods are misscheduled.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetMisScheduled', 'container': 'kube-rbac-proxy-main', 'daemonset': 'diskmaker-discovery', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-local-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': '1 Pods of DaemonSet openshift-local-storage/diskmaker-discovery are running where they are not supposed to run.', 'summary': 'DaemonSet pods are misscheduled.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetMisScheduled', 'container': 'kube-rbac-proxy-main', 'daemonset': 'diskmaker-manager', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-local-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': '1 Pods of DaemonSet openshift-local-storage/diskmaker-manager are running where they are not supposed to run.', 'summary': 'DaemonSet pods are 
misscheduled.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetMisScheduled', 'container': 'kube-rbac-proxy-main', 'daemonset': 'dns-default', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-dns', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': '1 Pods of DaemonSet openshift-dns/dns-default are running where they are not supposed to run.', 'summary': 'DaemonSet pods are misscheduled.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetMisScheduled', 'container': 'kube-rbac-proxy-main', 'daemonset': 'ingress-canary', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-ingress-canary', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': '1 Pods of DaemonSet openshift-ingress-canary/ingress-canary are running where they are not supposed to run.', 'summary': 'DaemonSet pods are misscheduled.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDaemonSetMisScheduled', 'container': 'kube-rbac-proxy-main', 'daemonset': 'csi-cephfsplugin', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': '1 Pods of DaemonSet openshift-storage/csi-cephfsplugin are running where they are not supposed to run.', 'summary': 'DaemonSet pods are misscheduled.'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:15.416054193Z', 'value': '1e+00'}, {'labels': {'alertname': 'NTOPodsNotReady', 'condition': 'true', 'container': 'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-cluster-node-tuning-operator', 'pod': 'tuned-drnfs', 'service': 'kube-state-metrics', 'severity': 'warning', 
'uid': 'f1e4930d-c276-470f-89ce-ff20c4f13293'}, 'annotations': {'description': 'Pod tuned-drnfs is not ready.\nReview the "Event" objects in "openshift-cluster-node-tuning-operator" namespace for further details.\n', 'summary': 'Pod tuned-drnfs is not ready.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:52:20.867127044Z', 'value': '0e+00'}, {'labels': {'alertname': 'CephNodeDown', 'namespace': 'openshift-storage', 'node': 'lon06-worker-0.rdr-tier-test.ibm.com', 'severity': 'critical'}, 'annotations': {'description': 'Storage node lon06-worker-0.rdr-tier-test.ibm.com went down. Please check the node immediately.', 'message': 'Storage node lon06-worker-0.rdr-tier-test.ibm.com went down', 'severity_level': 'error', 'storage_type': 'ceph'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:21.469339942Z', 'value': '0e+00'}, {'labels': {'alertname': 'ClusterMonitoringOperatorReconciliationErrors', 'container': 'kube-rbac-proxy', 'endpoint': 'https', 'instance': '10.128.0.16:8443', 'job': 'cluster-monitoring-operator', 'namespace': 'openshift-monitoring', 'pod': 'cluster-monitoring-operator-6dcc659c6d-g7nh9', 'service': 'cluster-monitoring-operator', 'severity': 'warning'}, 'annotations': {'description': 'Errors are occurring during reconciliation cycles. Inspect the cluster-monitoring-operator log for potential root causes.', 'summary': 'Cluster Monitoring Operator is experiencing unexpected reconciliation errors.'}, 'state': 'pending', 'activeAt': '2023-02-17T05:07:18.867184526Z', 'value': '0e+00'}, {'labels': {'alertname': 'KubeDeploymentReplicasMismatch', 'container': 'kube-rbac-proxy-main', 'deployment': 'rook-ceph-osd-2', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'Deployment openshift-storage/rook-ceph-osd-2 has not matched the expected number of replicas for longer than 15 minutes. 
This indicates that cluster infrastructure is unable to start or restart the necessary components. This most often occurs when one or more nodes are down or partioned from the cluster, or a fault occurs on the node that prevents the workload from starting. In rare cases this may indicate a new version of a cluster component cannot start due to a bug or configuration error. Assess the pods for this deployment to verify they are running on healthy nodes and then contact support.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-monitoring-operator/KubeDeploymentReplicasMismatch.md', 'summary': 'Deployment has not matched the expected number of replicas'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:18.867184526Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDeploymentReplicasMismatch', 'container': 'kube-rbac-proxy-main', 'deployment': 'rook-ceph-mon-c', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'Deployment openshift-storage/rook-ceph-mon-c has not matched the expected number of replicas for longer than 15 minutes. This indicates that cluster infrastructure is unable to start or restart the necessary components. This most often occurs when one or more nodes are down or partioned from the cluster, or a fault occurs on the node that prevents the workload from starting. In rare cases this may indicate a new version of a cluster component cannot start due to a bug or configuration error. 
Assess the pods for this deployment to verify they are running on healthy nodes and then contact support.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-monitoring-operator/KubeDeploymentReplicasMismatch.md', 'summary': 'Deployment has not matched the expected number of replicas'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:18.867184526Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubeDeploymentReplicasMismatch', 'container': 'kube-rbac-proxy-main', 'deployment': 'rook-ceph-crashcollector-lon06-worker-0.rdr-tier-test.ibm.com', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'service': 'kube-state-metrics', 'severity': 'warning'}, 'annotations': {'description': 'Deployment openshift-storage/rook-ceph-crashcollector-lon06-worker-0.rdr-tier-test.ibm.com has not matched the expected number of replicas for longer than 15 minutes. This indicates that cluster infrastructure is unable to start or restart the necessary components. This most often occurs when one or more nodes are down or partioned from the cluster, or a fault occurs on the node that prevents the workload from starting. In rare cases this may indicate a new version of a cluster component cannot start due to a bug or configuration error. 
Assess the pods for this deployment to verify they are running on healthy nodes and then contact support.', 'runbook_url': 'https://github.com/openshift/runbooks/blob/master/alerts/cluster-monitoring-operator/KubeDeploymentReplicasMismatch.md', 'summary': 'Deployment has not matched the expected number of replicas'}, 'state': 'firing', 'activeAt': '2023-02-17T04:52:18.867184526Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubePodNotScheduled', 'container': 'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'pod': 'rook-ceph-crashcollector-lon06-worker-0.rdr-tier-test.ibm.vgz5q', 'service': 'kube-state-metrics', 'severity': 'warning', 'uid': 'dade2553-c1a4-4f20-b7d0-e4157dc93668'}, 'annotations': {'description': 'Pod openshift-storage/rook-ceph-crashcollector-lon06-worker-0.rdr-tier-test.ibm.vgz5q cannot be scheduled for more than 30 minutes.\nCheck the details of the pod with the following command:\noc describe -n openshift-storage pod rook-ceph-crashcollector-lon06-worker-0.rdr-tier-test.ibm.vgz5q', 'summary': 'Pod cannot be scheduled.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:57:18.867184526Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubePodNotScheduled', 'container': 'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'pod': 'rook-ceph-mon-c-6df78cddcd-6zfcb', 'service': 'kube-state-metrics', 'severity': 'warning', 'uid': '98ad79d3-dc43-40ab-98af-5bcb15edc180'}, 'annotations': {'description': 'Pod openshift-storage/rook-ceph-mon-c-6df78cddcd-6zfcb cannot be scheduled for more than 30 minutes.\nCheck the details of the pod with the following command:\noc describe -n openshift-storage pod rook-ceph-mon-c-6df78cddcd-6zfcb', 'summary': 'Pod cannot be scheduled.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:57:18.867184526Z', 'value': '1e+00'}, {'labels': {'alertname': 'KubePodNotScheduled', 'container': 
'kube-rbac-proxy-main', 'endpoint': 'https-main', 'job': 'kube-state-metrics', 'namespace': 'openshift-storage', 'pod': 'rook-ceph-osd-2-79d7d8dd99-42xp4', 'service': 'kube-state-metrics', 'severity': 'warning', 'uid': '79ceb646-a30d-4a64-bdd4-14f5ceb6af7d'}, 'annotations': {'description': 'Pod openshift-storage/rook-ceph-osd-2-79d7d8dd99-42xp4 cannot be scheduled for more than 30 minutes.\nCheck the details of the pod with the following command:\noc describe -n openshift-storage pod rook-ceph-osd-2-79d7d8dd99-42xp4', 'summary': 'Pod cannot be scheduled.'}, 'state': 'pending', 'activeAt': '2023-02-17T04:57:18.867184526Z', 'value': '1e+00'}] >00:21:00 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-tools -n openshift-storage --timeout=300s >00:21:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pod -l 'app=rook-ceph-tools' -o jsonpath='{.items[0].metadata.name}' >00:21:01 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage exec rook-ceph-tools-65ffd8b86d-jfvlb -- ceph health >00:21:03 - MainThread - tests.conftest - INFO - Ceph health check failed at teardown >00:21:03 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-tools -n openshift-storage --timeout=300s >00:21:03 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pod -l 'app=rook-ceph-tools' -o jsonpath='{.items[0].metadata.name}' >00:21:03 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage exec rook-ceph-tools-65ffd8b86d-jfvlb -- ceph health >00:21:04 - MainThread - ocs_ci.utility.retry - WARNING - Ceph cluster health is not OK. Health: HEALTH_WARN 1/3 mons down, quorum a,b; 1 osds down; 1 host (1 osds) down; Degraded data redundancy: 1313/8418 objects degraded (15.598%), 61 pgs degraded >, Retrying in 60 seconds... 
>00:22:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc wait --for condition=ready pod -l app=rook-ceph-tools -n openshift-storage --timeout=300s >00:22:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get pod -l 'app=rook-ceph-tools' -o jsonpath='{.items[0].metadata.name}' >00:22:04 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage exec rook-ceph-tools-65ffd8b86d-jfvlb -- ceph health >00:22:05 - MainThread - ocs_ci.utility.utils - INFO - Ceph cluster health is HEALTH_OK. >00:22:05 - MainThread - tests.conftest - INFO - aws_client secret not found >00:22:05 - MainThread - tests.conftest - INFO - gcp_client secret not found >00:22:05 - MainThread - tests.conftest - INFO - azure_client secret not found >00:22:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage delete Secret secret-cldmgr-ibm-cos-aed5ec3272d04daab7 >00:22:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage delete Secret secret-cldmgr-rgw-d954a5deaf974c9db0aebc >00:22:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage delete Route rook-ceph-rgw-ocs-storagecluster-cephobjectstore >00:22:05 - MainThread - ocs_ci.ocs.utils - INFO - Must gather image: quay.io/rhceph-dev/ocs-must-gather:latest-4.12 will be used. 
>00:22:05 - MainThread - ocs_ci.ocs.utils - INFO - OCS logs will be placed in location /root/logs-ocs-ci/4.12/failed_testcase_ocs_logs_1676609418/test_noobaa_sts_host_node_failure[noobaa-db-pg-True]_ocs_logs/ocs_must_gather >00:22:05 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.12 --dest-dir=/root/logs-ocs-ci/4.12/failed_testcase_ocs_logs_1676609418/test_noobaa_sts_host_node_failure[noobaa-db-pg-True]_ocs_logs/ocs_must_gather >00:25:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml >00:25:40 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz" >00:25:40 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:25:40 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz". 
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:26:10 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml >00:26:11 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz" >00:26:11 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:26:11 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz". >Error is Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:26:41 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml >00:26:41 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz" >00:26:42 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:26:42 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz". 
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:27:12 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml >00:27:12 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz" >00:27:12 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:27:12 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz". >Error is Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:27:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get Pod -n openshift-storage --selector=noobaa-db=postgres -o yaml >00:27:42 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz" >00:27:43 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > >00:27:43 - MainThread - ocs_ci.ocs.utils - ERROR - Failed to dump noobaa DB! Error: Error during execution of command: oc -n openshift-storage rsh noobaa-db-pg-0 bash -c "pg_dump nbcore | gzip > nbcore.gz". 
>Error is Defaulted container "db" out of: db, init (init), initialize-database (init) >bash: nbcore.gz: Permission denied >command terminated with exit code 1 > > >tests/manage/mcg/test_host_node_failure.py::TestNoobaaSTSHostNodeFailure::test_noobaa_sts_host_node_failure[noobaa-db-pg-True] ERROR > >==================================== ERRORS ==================================== >_ ERROR at teardown of TestNoobaaSTSHostNodeFailure.test_noobaa_sts_host_node_failure[noobaa-db-pg-True] _ > > def finalizer(): > if not skipped: > try: > teardown = config.RUN["cli_params"]["teardown"] > skip_ocs_deployment = config.ENV_DATA["skip_ocs_deployment"] > ceph_cluster_installed = config.RUN.get("cephcluster") > if not ( > teardown > or skip_ocs_deployment > or mcg_only_deployment > or not ceph_cluster_installed > ): > if "test_add_capacity" in node.name: > ceph_health_check( > namespace=config.ENV_DATA["cluster_namespace"] > ) > log.info( > "Ceph health check passed at teardown. (After Add capacity " > "TC we allow more re-tries)" > ) > else: >> ceph_health_check_base() > >tests/conftest.py:1442: >_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > >namespace = 'openshift-storage' > > def ceph_health_check_base(namespace=None): > """ > Exec `ceph health` cmd on tools pod to determine health of cluster. 
> > Args: > namespace (str): Namespace of OCS > (default: config.ENV_DATA['cluster_namespace']) > > Raises: > CephHealthException: If the ceph health returned is not HEALTH_OK > CommandFailed: If the command to retrieve the tools pod name or the > command to get ceph health returns a non-zero exit code > Returns: > boolean: True if HEALTH_OK > > """ > # Import here to avoid circular loop > from ocs_ci.ocs.cluster import is_ms_consumer_cluster > from ocs_ci.ocs.managedservice import ( > patch_consumer_toolbox, > is_rados_connect_error_in_ex, > ) > > namespace = namespace or config.ENV_DATA["cluster_namespace"] > run_cmd( > f"oc wait --for condition=ready pod " > f"-l app=rook-ceph-tools " > f"-n {namespace} " > f"--timeout=300s" > ) > ceph_health_cmd = create_ceph_health_cmd(namespace) > try: > health = run_cmd(ceph_health_cmd) > except CommandFailed as ex: > if is_rados_connect_error_in_ex(ex) and is_ms_consumer_cluster(): > log.info("Patch the consumer rook-ceph-tools deployment") > patch_consumer_toolbox() > # get the new tool box pod since patching creates the new tool box pod > ceph_health_cmd = create_ceph_health_cmd(namespace) > health = run_cmd(ceph_health_cmd) > else: > raise ex > > if health.strip() == "HEALTH_OK": > log.info("Ceph cluster health is HEALTH_OK.") > return True > else: >> raise CephHealthException(f"Ceph cluster health is not OK. Health: {health}") >E ocs_ci.ocs.exceptions.CephHealthException: Ceph cluster health is not OK. 
Health: HEALTH_WARN 1/3 mons down, quorum a,b; 1 osds down; 1 host (1 osds) down; Degraded data redundancy: 1313/8418 objects degraded (15.598%), 61 pgs degraded > >ocs_ci/utility/utils.py:2112: CephHealthException >=================================== FAILURES =================================== >_ TestNoobaaSTSHostNodeFailure.test_noobaa_sts_host_node_failure[noobaa-db-pg-True] _ > >self = <tests.manage.mcg.test_host_node_failure.TestNoobaaSTSHostNodeFailure object at 0x7fff7e52b400> >noobaa_sts = 'noobaa-db-pg', respin_noobaa_operator = True >mcg_obj = <ocs_ci.ocs.resources.mcg.MCG object at 0x7fff7e18ab20> >bucket_factory = <function bucket_factory_fixture.<locals>._create_buckets at 0x7fff7ec448b0> >nodes = <ocs_ci.ocs.platform_nodes.IBMPowerNodes object at 0x7fff7e5ebf10> >node_restart_teardown = None > > @pytest.mark.parametrize( > argnames=["noobaa_sts", "respin_noobaa_operator"], > argvalues=[ > pytest.param( > *[constants.NOOBAA_CORE_STATEFULSET, False], > marks=pytest.mark.polarion_id("OCS-2672"), > ), > pytest.param( > *[constants.NOOBAA_DB_STATEFULSET, False], > marks=[ > pytest.mark.polarion_id("OCS-2668"), > skipif_ibm_power, > ] > ), > pytest.param( > *[constants.NOOBAA_CORE_STATEFULSET, True], > marks=pytest.mark.polarion_id("OCS-2669"), > ), > pytest.param( > *[constants.NOOBAA_DB_STATEFULSET, True], > marks=[ > pytest.mark.polarion_id("OCS-2670"), > # skipif_ibm_power, > ] > ), > ], > ) > def test_noobaa_sts_host_node_failure( > self, > noobaa_sts, > respin_noobaa_operator, > mcg_obj, > bucket_factory, > nodes, > node_restart_teardown, > ): > """ > Test case to fail node where NooBaa Statefulset pod (noobaa-core, noobaa-db) > is hosted and verify the pod is rescheduled on a healthy node > > """ > executor = ThreadPoolExecutor(max_workers=1) > pod_obj = OCP( > kind=constants.POD, namespace=constants.OPENSHIFT_STORAGE_NAMESPACE > ) > > # Get noobaa statefulset pod and node where it is hosted > noobaa_sts_pod = 
get_noobaa_pods(noobaa_label=self.labels_map[noobaa_sts])[0] > noobaa_sts_pod_node = get_pod_node(noobaa_sts_pod) > log.info(f"{noobaa_sts_pod.name} is running on {noobaa_sts_pod_node.name}") > > # Get the NooBaa operator pod and node where it is hosted > # Check if NooBaa operator and statefulset pod are hosted on same node > noobaa_operator_pod = get_noobaa_pods( > noobaa_label=self.labels_map[constants.NOOBAA_OPERATOR_DEPLOYMENT] > )[0] > noobaa_operator_pod_node = get_pod_node(noobaa_operator_pod) > log.info( > f"{noobaa_operator_pod.name} is running on {noobaa_operator_pod_node.name}" > ) > if noobaa_sts_pod_node.name == noobaa_operator_pod_node.name: > operator_on_same_node = True > log.info( > f"{noobaa_sts_pod.name} and {noobaa_operator_pod.name} are running on same node." > ) > else: > operator_on_same_node = False > log.info( > f"{noobaa_sts_pod.name} and {noobaa_operator_pod.name} are running on different node." > ) > > # Stop the node > log.info( > f"Stopping {noobaa_sts_pod_node.name} where {noobaa_sts_pod.name} is hosted" > ) > stop_thread = executor.submit(nodes.stop_nodes, nodes=[noobaa_sts_pod_node]) > node.wait_for_nodes_status( > node_names=[noobaa_sts_pod_node.name], status=constants.NODE_NOT_READY > ) > > # Disrupt NooBaa operator > if respin_noobaa_operator: > noobaa_operator_pod.delete(force=True) > > # Check result of 'stop_thread' > stop_thread.result() > > # Wait for NooBaa operator pod to reach terminating state if on same node > # and not respun > if operator_on_same_node and not respin_noobaa_operator: > wait_for_resource_state( > resource=noobaa_operator_pod, > state=constants.STATUS_TERMINATING, > timeout=360, > ) > > # Wait for NooBaa operator pod to reach running state > pod_obj.wait_for_resource( > condition=constants.STATUS_RUNNING, > selector=self.labels_map[constants.NOOBAA_OPERATOR_DEPLOYMENT], > resource_count=1, > ) > > # Verify NooBaa statefulset pod reschedules on another node > try: > for pod_list in TimeoutSampler( > 60, 
> 3, > get_noobaa_pods, > noobaa_label=self.labels_map[noobaa_sts], > ): > if len(pod_list) == 1: > pod_node = get_pod_node(pod_list[0]) > if pod_node.name != noobaa_sts_pod_node.name: > log.info( > f"{pod_list[0].name} has been rescheduled on {pod_node.name}" > ) > break > log.info(f"Waiting for {noobaa_sts_pod.name} pod to be rescheduled") > except TimeoutExpiredError: > raise TimeoutExpiredError( > f"{noobaa_sts_pod.name} pod not rescheduled within 60 seconds" > ) > > # Wait for rescheduled pod to reach Running state. > # For noobaa-db pod which is attached to a PV it may take more time (~8 minutes) > # until the new pod can attach to the PV >> pod_obj.wait_for_resource( > condition=constants.STATUS_RUNNING, > selector=self.labels_map[noobaa_sts], > resource_count=1, > timeout=800 if noobaa_sts == constants.NOOBAA_DB_STATEFULSET else 60, > sleep=30 if noobaa_sts == constants.NOOBAA_DB_STATEFULSET else 3, > ) > >tests/manage/mcg/test_host_node_failure.py:187: >_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ >ocs_ci/ocs/ocp.py:729: in wait_for_resource > raise (ex) >ocs_ci/ocs/ocp.py:619: in wait_for_resource > for sample in TimeoutSampler( >_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > >self = <ocs_ci.utility.utils.TimeoutSampler object at 0x7fff7f0dfa60> > > def __iter__(self): > if self.start_time is None: > self.start_time = time.time() > while True: > self.last_sample_time = time.time() > if self.timeout <= (self.last_sample_time - self.start_time): >> raise self.timeout_exc_cls(*self.timeout_exc_args) >E ocs_ci.ocs.exceptions.TimeoutExpiredError: Timed out after 800s running get("", True, "noobaa-db=postgres") > >ocs_ci/utility/utils.py:1174: TimeoutExpiredError >=============================== warnings summary =============================== >tests/manage/mcg/test_host_node_failure.py::TestNoobaaSTSHostNodeFailure::test_noobaa_sts_host_node_failure[noobaa-db-pg-True] 
>tests/manage/mcg/test_host_node_failure.py::TestNoobaaSTSHostNodeFailure::test_noobaa_sts_host_node_failure[noobaa-db-pg-True] > /root/venv/lib64/python3.8/site-packages/urllib3/connectionpool.py:981: InsecureRequestWarning: Unverified HTTPS request is being made to host 'prometheus-k8s-openshift-monitoring.apps.rdr-tier-test.ibm.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings > warnings.warn( > >-- Docs: https://docs.pytest.org/en/stable/warnings.html >=========================== short test summary info ============================ >FAILED tests/manage/mcg/test_host_node_failure.py::TestNoobaaSTSHostNodeFailure::test_noobaa_sts_host_node_failure[noobaa-db-pg-True] >ERROR tests/manage/mcg/test_host_node_failure.py::TestNoobaaSTSHostNodeFailure::test_noobaa_sts_host_node_failure[noobaa-db-pg-True] >============= 1 failed, 2 warnings, 1 error in 2263.79s (0:37:43) ==============