Description of problem (please be as detailed as possible and provide log snippets):
We are running a z/VM cluster with 3 master and 3 worker nodes. In my case, 4 additional disks are attached for OCS. I ran tier4c tests with ocs-ci after a fresh setup. Afterwards, most pods in the openshift-storage namespace are in Pending or Terminating status. This should not happen in customer environments.

# oc get all -n openshift-storage
NAME READY STATUS RESTARTS AGE
pod/csi-cephfsplugin-dksjr 3/3 Running 0 16h
pod/csi-cephfsplugin-m4l96 3/3 Running 0 2d3h
pod/csi-cephfsplugin-provisioner-67bf8c765-96fv8 6/6 Terminating 0 16h
pod/csi-cephfsplugin-provisioner-67bf8c765-b766j 0/6 Pending 0 13h
pod/csi-cephfsplugin-provisioner-67bf8c765-hgtqh 6/6 Terminating 0 20h
pod/csi-cephfsplugin-provisioner-67bf8c765-rpj4p 0/6 Pending 0 13h
pod/csi-cephfsplugin-xvj8v 3/3 Running 0 21h
pod/csi-rbdplugin-bjvk9 3/3 Running 0 2d3h
pod/csi-rbdplugin-fmwjq 3/3 Running 0 2d3h
pod/csi-rbdplugin-hxz8g 3/3 Running 0 16h
pod/csi-rbdplugin-provisioner-6fb4f646bd-5wlqb 0/6 Pending 0 13h
pod/csi-rbdplugin-provisioner-6fb4f646bd-ckxr5 6/6 Terminating 0 15h
pod/csi-rbdplugin-provisioner-6fb4f646bd-dtstl 0/6 Pending 0 13h
pod/csi-rbdplugin-provisioner-6fb4f646bd-v6mpc 6/6 Terminating 0 20h
pod/noobaa-core-0 1/1 Terminating 0 2d3h
pod/noobaa-db-pg-0 1/1 Terminating 0 2d3h
pod/noobaa-endpoint-d9f855d8f-jmtp4 0/1 Terminating 0 13h
pod/noobaa-endpoint-d9f855d8f-qbcnj 1/1 Terminating 0 2d3h
pod/noobaa-endpoint-d9f855d8f-ztvj6 0/1 Pending 0 13h
pod/noobaa-operator-6769cf66-7g7k5 0/1 Pending 0 13h
pod/noobaa-operator-6769cf66-j99fg 1/1 Terminating 0 2d3h
pod/ocs-metrics-exporter-6fc5566fcf-c46mw 0/1 Pending 0 13h
pod/ocs-metrics-exporter-6fc5566fcf-gnzs9 0/1 Terminating 0 13h
pod/ocs-metrics-exporter-6fc5566fcf-h8dn9 1/1 Terminating 0 2d3h
pod/ocs-operator-597df98bbd-ldcmn 0/1 Terminating 1 2d3h
pod/ocs-operator-597df98bbd-t5lzf 0/1 Pending 0 13h
pod/rook-ceph-crashcollector-worker-001.m1307001ocs.lnxne.boe-nkfgh 0/1 Pending 0 13h
pod/rook-ceph-crashcollector-worker-001.m1307001ocs.lnxne.boe-r96vq 1/1 Terminating 0 2d3h
pod/rook-ceph-crashcollector-worker-002.m1307001ocs.lnxne.boe-fx27p 1/1 Terminating 0 2d3h
pod/rook-ceph-crashcollector-worker-002.m1307001ocs.lnxne.boe-qj8zn 0/1 Pending 0 13h
pod/rook-ceph-crashcollector-worker-003.m1307001ocs.lnxne.boe-p68q5 1/1 Terminating 0 2d3h
pod/rook-ceph-crashcollector-worker-003.m1307001ocs.lnxne.boe-q7t2j 0/1 Pending 0 13h
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-6dd5854crwrdj 0/2 Pending 0 13h
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-6dd5854cwbzjc 2/2 Terminating 0 17h
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-84c58747hjc9r 0/2 Pending 0 13h
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-84c58747n8ppz 2/2 Terminating 0 2d3h
pod/rook-ceph-mgr-a-7cb5dccddd-dhfs6 0/2 Pending 0 13h
pod/rook-ceph-mgr-a-7cb5dccddd-qnl66 0/2 Terminating 0 13h
pod/rook-ceph-mgr-a-7cb5dccddd-s7d68 2/2 Terminating 1 14h
pod/rook-ceph-mon-a-6bc97557b9-k79cx 2/2 Terminating 0 14h
pod/rook-ceph-mon-a-6bc97557b9-vplhp 0/2 Pending 0 13h
pod/rook-ceph-mon-b-7d7bb8cd85-7xmlq 2/2 Terminating 0 2d3h
pod/rook-ceph-mon-b-7d7bb8cd85-c7nwp 0/2 Pending 0 13h
pod/rook-ceph-mon-c-85f6544d5-4npbb 0/2 Pending 0 13h
pod/rook-ceph-mon-c-85f6544d5-zpp9z 2/2 Terminating 0 2d3h
pod/rook-ceph-operator-6dbd8cc4f5-dtqjn 0/1 Pending 0 13h
pod/rook-ceph-operator-6dbd8cc4f5-sbdmw 1/1 Terminating 0 15h
pod/rook-ceph-osd-0-6f779b8d65-7lxn2 0/2 Pending 0 13h
pod/rook-ceph-osd-0-6f779b8d65-pt4qx 2/2 Terminating 0 14h
pod/rook-ceph-osd-1-7c46b87cb5-88wtj 0/2 Pending 0 13h
pod/rook-ceph-osd-1-7c46b87cb5-fvjp7 2/2 Terminating 0 2d3h
pod/rook-ceph-osd-2-687dcd65f7-b6vqk 2/2 Terminating 0 2d3h
pod/rook-ceph-osd-2-687dcd65f7-s829l 0/2 Pending 0 13h
pod/rook-ceph-osd-3-fddd688d8-dp8q5 2/2 Terminating 0 40h
pod/rook-ceph-osd-3-fddd688d8-ttnwv 0/2 Pending 0 13h
pod/rook-ceph-osd-4-84d6c9797b-g9qw9 2/2 Terminating 0 40h
pod/rook-ceph-osd-4-84d6c9797b-qgk8s 0/2 Pending 0 13h
pod/rook-ceph-osd-5-65bdfc8b6c-mt78j 0/2 Pending 0 13h
pod/rook-ceph-osd-5-65bdfc8b6c-zw6j6 2/2 Terminating 1 40h
pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-5f44fb95xdk6 2/2 Terminating 0 2d3h
pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-5f44fb9d8hdw 0/2 Terminating 0 13h
pod/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-5f44fb9vgz2h 0/2 Pending 0 13h
pod/rook-ceph-tools-67c56d87cb-9f4kg 0/1 Terminating 0 13h
pod/rook-ceph-tools-67c56d87cb-ckl94 0/1 Pending 0 13h
pod/rook-ceph-tools-67c56d87cb-wfpsw 1/1 Terminating 1 2d1h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/csi-cephfsplugin-metrics ClusterIP 172.30.34.39 <none> 8080/TCP,8081/TCP 2d3h
service/csi-rbdplugin-metrics ClusterIP 172.30.253.141 <none> 8080/TCP,8081/TCP 2d3h
service/noobaa-db-pg ClusterIP 172.30.189.98 <none> 5432/TCP 2d3h
service/noobaa-mgmt LoadBalancer 172.30.14.143 <pending> 80:31325/TCP,443:30468/TCP,8445:30626/TCP,8446:31932/TCP 2d3h
service/ocs-metrics-exporter ClusterIP 172.30.66.162 <none> 8080/TCP,8081/TCP 2d3h
service/rook-ceph-mgr ClusterIP 172.30.146.183 <none> 9283/TCP 2d3h
service/rook-ceph-mon-a ClusterIP 172.30.205.222 <none> 6789/TCP,3300/TCP 2d3h
service/rook-ceph-mon-b ClusterIP 172.30.146.254 <none> 6789/TCP,3300/TCP 2d3h
service/rook-ceph-mon-c ClusterIP 172.30.165.245 <none> 6789/TCP,3300/TCP 2d3h
service/rook-ceph-rgw-ocs-storagecluster-cephobjectstore ClusterIP 172.30.67.169 <none> 80/TCP 2d3h
service/s3 LoadBalancer 172.30.138.213 <pending> 80:30202/TCP,443:30495/TCP,8444:31494/TCP,7004:30809/TCP 2d3h

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/csi-cephfsplugin 0 0 0 0 0 <none> 2d3h
daemonset.apps/csi-rbdplugin 0 0 0 0 0 <none> 2d3h

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/csi-cephfsplugin-provisioner 0/2 2 0 2d3h
deployment.apps/csi-rbdplugin-provisioner 0/2 2 0 2d3h
deployment.apps/noobaa-endpoint 0/1 1 0 2d3h
deployment.apps/noobaa-operator 0/1 1 0 2d3h
deployment.apps/ocs-metrics-exporter 0/1 1 0 2d3h
deployment.apps/ocs-operator 0/1 1 0 2d3h
deployment.apps/rook-ceph-crashcollector-worker-001.m1307001ocs.lnxne.boe 0/1 1 0 2d3h
deployment.apps/rook-ceph-crashcollector-worker-002.m1307001ocs.lnxne.boe 0/1 1 0 2d3h
deployment.apps/rook-ceph-crashcollector-worker-003.m1307001ocs.lnxne.boe 0/1 1 0 2d3h
deployment.apps/rook-ceph-mds-ocs-storagecluster-cephfilesystem-a 0/1 1 0 2d3h
deployment.apps/rook-ceph-mds-ocs-storagecluster-cephfilesystem-b 0/1 1 0 2d3h
deployment.apps/rook-ceph-mgr-a 0/1 1 0 2d3h
deployment.apps/rook-ceph-mon-a 0/1 1 0 2d3h
deployment.apps/rook-ceph-mon-b 0/1 1 0 2d3h
deployment.apps/rook-ceph-mon-c 0/1 1 0 2d3h
deployment.apps/rook-ceph-operator 0/1 1 0 2d3h
deployment.apps/rook-ceph-osd-0 0/1 1 0 2d3h
deployment.apps/rook-ceph-osd-1 0/1 1 0 2d3h
deployment.apps/rook-ceph-osd-2 0/1 1 0 2d3h
deployment.apps/rook-ceph-osd-3 0/1 1 0 40h
deployment.apps/rook-ceph-osd-4 0/1 1 0 40h
deployment.apps/rook-ceph-osd-5 0/1 1 0 40h
deployment.apps/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a 0/1 1 0 2d3h
deployment.apps/rook-ceph-tools 0/1 1 0 2d1h
NAME DESIRED CURRENT READY AGE
replicaset.apps/csi-cephfsplugin-provisioner-67bf8c765 2 2 0 2d3h
replicaset.apps/csi-rbdplugin-provisioner-6fb4f646bd 2 2 0 2d3h
replicaset.apps/noobaa-endpoint-d9f855d8f 1 1 0 2d3h
replicaset.apps/noobaa-operator-6769cf66 1 1 0 2d3h
replicaset.apps/ocs-metrics-exporter-6fc5566fcf 1 1 0 2d3h
replicaset.apps/ocs-operator-597df98bbd 1 1 0 2d3h
replicaset.apps/rook-ceph-crashcollector-worker-001.m1307001ocs.lnxne.boe-687469b797 1 1 0 2d3h
replicaset.apps/rook-ceph-crashcollector-worker-002.m1307001ocs.lnxne.boe-c4dc9d794 1 1 0 2d3h
replicaset.apps/rook-ceph-crashcollector-worker-003.m1307001ocs.lnxne.boe-86f4945dc 1 1 0 2d3h
replicaset.apps/rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-6dd5854c75 1 1 0 2d3h
replicaset.apps/rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-84c5874766 1 1 0 2d3h
replicaset.apps/rook-ceph-mgr-a-7cb5dccddd 1 1 0 2d3h
replicaset.apps/rook-ceph-mon-a-6bc97557b9 1 1 0 2d3h
replicaset.apps/rook-ceph-mon-b-7d7bb8cd85 1 1 0 2d3h
replicaset.apps/rook-ceph-mon-c-85f6544d5 1 1 0 2d3h
replicaset.apps/rook-ceph-operator-6dbd8cc4f5 1 1 0 2d3h
replicaset.apps/rook-ceph-osd-0-6f779b8d65 1 1 0 2d3h
replicaset.apps/rook-ceph-osd-1-7c46b87cb5 1 1 0 2d3h
replicaset.apps/rook-ceph-osd-2-687dcd65f7 1 1 0 2d3h
replicaset.apps/rook-ceph-osd-3-fddd688d8 1 1 0 40h
replicaset.apps/rook-ceph-osd-4-84d6c9797b 1 1 0 40h
replicaset.apps/rook-ceph-osd-5-65bdfc8b6c 1 1 0 40h
replicaset.apps/rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-5f44fb9cc7 1 1 0 2d3h
replicaset.apps/rook-ceph-tools-67c56d87cb 1 1 0 2d1h
replicaset.apps/rook-ceph-tools-d5dc67475 0 0 0 2d1h

NAME READY AGE
statefulset.apps/noobaa-core 0/1 2d3h
statefulset.apps/noobaa-db-pg 0/1 2d3h

NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/noobaa-endpoint Deployment/noobaa-endpoint 0%/80% 1 2 1 2d3h

NAME COMPLETIONS DURATION AGE
job.batch/rook-ceph-osd-prepare-ocs-deviceset-0-data-0f82zb 1/1 16s 2d3h
job.batch/rook-ceph-osd-prepare-ocs-deviceset-0-data-1bbf6r 1/1 19s 40h
job.batch/rook-ceph-osd-prepare-ocs-deviceset-1-data-0rz2cn 1/1 17s 2d3h
job.batch/rook-ceph-osd-prepare-ocs-deviceset-1-data-1m7cq9 1/1 20s 40h
job.batch/rook-ceph-osd-prepare-ocs-deviceset-2-data-0bspsp 1/1 18s 2d3h
job.batch/rook-ceph-osd-prepare-ocs-deviceset-2-data-16sxpp 1/1 19s 40h

NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/noobaa-mgmt noobaa-mgmt-openshift-storage.apps.m1307001ocs.lnxne.boe noobaa-mgmt mgmt-https reencrypt/Redirect None
route.route.openshift.io/ocs-storagecluster-cephobjectstore ocs-storagecluster-cephobjectstore-openshift-storage.apps.m1307001ocs.lnxne.boe rook-ceph-rgw-ocs-storagecluster-cephobjectstore <all> None
route.route.openshift.io/rook-ceph-rgw-ocs-storagecluster-cephobjectstore rgw.apps.m1307001ocs.lnxne.boe rook-ceph-rgw-ocs-storagecluster-cephobjectstore http None
route.route.openshift.io/s3 s3-openshift-storage.apps.m1307001ocs.lnxne.boe s3 s3-https reencrypt None

# oc get pods -n local-storage
NAME READY STATUS RESTARTS AGE
local-disks-local-diskmaker-2wm2p 1/1 Running 0 2d3h
local-disks-local-diskmaker-7ttg5 1/1 Running 0 2d3h
local-disks-local-diskmaker-bdq8v 1/1 Running 0 2d3h
local-disks-local-provisioner-2lfft 1/1 Running 0 2d3h
local-disks-local-provisioner-4tsdg 1/1 Running 0 2d3h
local-disks-local-provisioner-b8qkw 1/1 Running 0 2d3h
local-storage-operator-57f855cb5c-6kwcc 1/1 Terminating 0 2d3h
local-storage-operator-57f855cb5c-qsk89 0/1 Pending 0 13h

Version of all relevant components (if applicable):
Client Version: 4.7.1
Server Version: 4.7.1
Kubernetes Version: v1.20.0+5fbfd19
OCS Operator: ocs-operator.v4.7.0-299.ci

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?
Fresh installation

Rate from 1-5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
No, with ocs-ci

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Install OCP on z/VM
2. Install LSO on z/VM
3. Install OCS on z/VM
4. Run ocs-ci with tier4c

Actual results:
Most pods in the openshift-storage namespace are in Pending or Terminating status.

Expected results:
The cluster should still work without problems after running the ocs-ci tests.

Additional info:
All worker nodes are in NotReady status after the test, too:

# oc get nodes
NAME STATUS ROLES AGE VERSION
master-01.m1307001ocs.lnxne.boe Ready master 3d2h v1.20.0+5fbfd19
master-02.m1307001ocs.lnxne.boe Ready master 3d2h v1.20.0+5fbfd19
master-03.m1307001ocs.lnxne.boe Ready master 3d2h v1.20.0+5fbfd19
worker-001.m1307001ocs.lnxne.boe NotReady worker 3d1h v1.20.0+5fbfd19
worker-002.m1307001ocs.lnxne.boe NotReady worker 3d1h v1.20.0+5fbfd19
worker-003.m1307001ocs.lnxne.boe NotReady worker 3d1h v1.20.0+5fbfd19
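Since the Pending/Terminating pods are a consequence of the NotReady workers, the node/kubelet side is probably the first place to look. A minimal sketch of what I would check (node name taken from the output above; whether a debug pod can still be scheduled on a NotReady node depends on how broken the node is):

# oc describe node worker-001.m1307001ocs.lnxne.boe
# oc debug node/worker-001.m1307001ocs.lnxne.boe -- chroot /host journalctl -u kubelet --since "1 hour ago"

The first command shows the node conditions and recent events; the second pulls the kubelet log from the node itself.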
I am waiting for must-gather results with this command:

# oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8:latest --dest-dir=/root/ocp4-workdir/
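Because the workers themselves are NotReady, it may also be worth collecting a default OCP must-gather in addition to the OCS one, since that captures node and kubelet data as well (a sketch; the destination directory is just an example):

# oc adm must-gather --dest-dir=/root/ocp4-workdir/must-gather-ocp/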
General logs of the tier4c test: https://ibm.box.com/s/58ti5357dp2l9k5sohxge1k3jszl9xka
Log output file of tier4c: https://ibm.box.com/s/v6d8p5idg8efh7a3gl0g1p2wv1fw95rk
Created attachment 1764679 [details]
must-gather-ocs tier4c
What are tier4c tests?
Pods are pending/terminating because all worker nodes are NotReady. Any idea what caused that?
(In reply to Santosh Pillai from comment #7)
> What are tier4c tests?
> Pods are pending/terminating because all worker nodes are NotReady. Any idea what caused that?

Adding needinfo.
ocs-ci provides multiple tier test suites for OCS (see https://ocs-ci.readthedocs.io/en/latest/ ). You can test the functionality of OCP and OCS with them; one of these suites is tier4c. I used the following command to run it:

# run-ci -m "tier4c" --ocsci-conf ~/ocs-ci/config.yaml --cluster-path /root/ocp4-workdir/ tests --no-print-logs --capture=no --html ~/testtier4c_18thMarch2021.html --self-contained-html | tee ~/tier4c_18thMarch2021.log
The crazy thing is that the HTML log makes it look as if most tests passed. But at the end the worker nodes are NotReady and the pods are not working. That should not happen with the default tests in customer environments...
(In reply to Sarah Julia Kriesch from comment #6)
> Created attachment 1764679 [details]
> must-gather-ocs tier4c

No logs are available for the OCS/Rook operator in the attached must-gather file. This could be because the pods are no longer available. The OCS CI logs should be investigated to find out what went wrong. Adding needinfo for Prateek.
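If the operator pods come back up later, their logs can also be pulled directly instead of relying on must-gather (a sketch; --previous only helps while the previous container is still known to the kubelet on that node):

# oc -n openshift-storage logs deploy/rook-ceph-operator
# oc -n openshift-storage logs deploy/ocs-operator --previous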
From comment #9 it looks like "--collect-logs" was not passed to the run-ci command, which is why must-gather logs were not collected when the test case failed.

I would suggest rerunning the run-ci test with "--collect-logs", e.g.:

run-ci -m "tier4c" --ocsci-conf ~/ocs-ci/config.yaml --cluster-path /root/ocp4-workdir/ tests --no-print-logs --capture=no --html ~/testtier4c_18thMarch2021.html --self-contained-html --collect-logs
(In reply to Pratik Surve from comment #12)
> From comment #9 it looks like "--collect-logs" was not passed to the run-ci command, which is why must-gather logs were not collected when the test case failed.
>
> I would suggest rerunning the run-ci test with "--collect-logs", e.g.:
>
> run-ci -m "tier4c" --ocsci-conf ~/ocs-ci/config.yaml --cluster-path /root/ocp4-workdir/ tests --no-print-logs --capture=no --html ~/testtier4c_18thMarch2021.html --self-contained-html --collect-logs

Please see this comment. Initial analysis suggests the OCS CI tests might have caused an issue with the nodes. QE needs must-gather logs to debug further.
Please reopen with the required logs.
Hi @muagarwa, I have reproduced the issue again with OCS 4.7. I collected the logs as mentioned in the previous comment; please find them on Google Drive: https://drive.google.com/file/d/1x8MeBg24_AJeUP8ZxGOy9aqoQ4OnkqPF/view?usp=sharing

Not sure whether this is related to https://bugzilla.redhat.com/show_bug.cgi?id=1945016

This is the current status of the nodes and OCS pods:

[root@m1301015 ~]# oc get nodes
NAME STATUS ROLES AGE VERSION
master-0.m1301015ocs.lnxne.boe Ready master 17h v1.20.0+551f7b2
master-1.m1301015ocs.lnxne.boe Ready master 17h v1.20.0+551f7b2
master-2.m1301015ocs.lnxne.boe Ready master 17h v1.20.0+551f7b2
worker-0.m1301015ocs.lnxne.boe Ready worker 17h v1.20.0+551f7b2
worker-1.m1301015ocs.lnxne.boe NotReady worker 17h v1.20.0+551f7b2
worker-2.m1301015ocs.lnxne.boe Ready worker 17h v1.20.0+551f7b2

[root@m1301015 ~]# oc -n openshift-storage get po
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-4mnz7 3/3 Running 0 4h53m
csi-cephfsplugin-g8g42 3/3 Running 0 17h
csi-cephfsplugin-provisioner-f975d886c-98b6c 6/6 Running 0 8h
csi-cephfsplugin-provisioner-f975d886c-b4q7k 6/6 Running 0 4h2m
csi-cephfsplugin-q4w7t 3/3 Running 0 9h
csi-rbdplugin-ppcj7 3/3 Running 0 9h
csi-rbdplugin-provisioner-6bbf798bfb-w85fg 6/6 Running 0 4h2m
csi-rbdplugin-provisioner-6bbf798bfb-xsk6f 6/6 Running 0 9h
csi-rbdplugin-scgbt 3/3 Running 0 4h40m
csi-rbdplugin-tg658 3/3 Running 0 17h
noobaa-core-0 1/1 Running 0 17h
noobaa-db-pg-0 1/1 Running 0 17h
noobaa-endpoint-7dcccc557b-kn8ph 1/1 Running 0 17h
noobaa-operator-99b9845d5-67h84 1/1 Running 0 17h
ocs-metrics-exporter-555554fd7b-b9kff 1/1 Running 0 4h2m
ocs-metrics-exporter-555554fd7b-hktzz 1/1 Terminating 0 17h
ocs-operator-6798f49bc6-5m68d 0/1 Terminating 0 17h
ocs-operator-6798f49bc6-bqvqn 1/1 Running 0 4h2m
rook-ceph-crashcollector-worker-0.m1301015ocs.lnxne.boe-6d6lxj9 1/1 Running 0 17h
rook-ceph-crashcollector-worker-2.m1301015ocs.lnxne.boe-55ndz9t 1/1 Running 0 17h
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-65ffbc9c6q4qb 2/2 Running 0 4h2m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-c787d8c9wm8vk 2/2 Running 0 17h
rook-ceph-mgr-a-6b7955f858-kn2gh 2/2 Running 0 6h6m
rook-ceph-mon-b-687f4bcf98-nwr6h 2/2 Running 0 17h
rook-ceph-mon-c-5f55f54bd9-fkjwn 2/2 Running 0 17h
rook-ceph-mon-d-canary-7997c4b4bf-rfkxt 0/2 Pending 0 55s
rook-ceph-operator-56698787c-462tz 1/1 Running 0 7h22m
rook-ceph-osd-0-b86d6d78c-btnzd 2/2 Running 0 5h29m
rook-ceph-osd-1-9499578cf-bqnnn 2/2 Running 0 17h
rook-ceph-osd-2-7cb485c5f-t4q7g 0/2 Pending 0 4h2m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0h7bd2-lnwks 0/1 Completed 0 17h
rook-ceph-osd-prepare-ocs-deviceset-2-data-0l6cmn-mj6hp 0/1 Completed 0 17h
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-586cc8555dtb 2/2 Running 0 17h
rook-ceph-tools-69c5449589-t6gxp
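For the two Pending pods (rook-ceph-osd-2 and the mon-d canary), the scheduler events and Ceph health should tell whether this is only a consequence of worker-1 being NotReady or something else. A sketch of what I would check next (pod name copied from the output above; the rsh target assumes the default rook-ceph-tools deployment is still present):

# oc -n openshift-storage describe pod rook-ceph-osd-2-7cb485c5f-t4q7g
# oc -n openshift-storage rsh deploy/rook-ceph-tools ceph status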
Any idea what's causing the nodes to go down when running the ocs-ci tests?
Actually, it looks like the test "tests/manage/pv_services/test_daemon_kill_during_pvc_pod_creation_and_io.py::TestDaemonKillDuringCreationOperations::test_daemon_kill_during_pvc_pod_creation_and_io[CephFileSystem-mgr]" is bringing the cluster to an unhealthy state, which is similar to the case in https://bugzilla.redhat.com/show_bug.cgi?id=1945016
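If that is the case, the failure should be reproducible without the rest of the suite by pointing run-ci at only that test (a sketch based on the command from comment #9; this assumes run-ci accepts a pytest node id the same way plain pytest does):

# run-ci --ocsci-conf ~/ocs-ci/config.yaml --cluster-path /root/ocp4-workdir/ --collect-logs "tests/manage/pv_services/test_daemon_kill_during_pvc_pod_creation_and_io.py::TestDaemonKillDuringCreationOperations::test_daemon_kill_during_pvc_pod_creation_and_io[CephFileSystem-mgr]"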
*** This bug has been marked as a duplicate of bug 1945016 ***