Bug 1893833
| Summary: | [cephadm] 5.0 - ISCSI services are not up after deploying via orch cli commands | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Preethi <pnataraj> |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED ERRATA | QA Contact: | Vasishta <vashastr> |
| Severity: | urgent | Docs Contact: | Karen Norteman <knortema> |
| Priority: | high | | |
| Version: | 5.0 | CC: | gsitlani, jolmomar, kdreyer, sewagner, vereddy |
| Target Milestone: | --- | Keywords: | Regression |
| Target Release: | 5.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Fixed In Version: | ceph-16.1.0-486.el8cp | Doc Type: | No Doc Update |
| Last Closed: | 2021-08-30 08:26:54 UTC | Type: | Bug |
Can you try deploying with this set of commands instead:

ceph osd pool create <poolname>
ceph osd pool application enable <poolname> rbd
ceph orch apply iscsi <poolname> admin admin --placement=<host(s) where you want iscsi>

I've been able to deploy in my local clusters (on VMs) with these commands. I think the set of commands you have here might be failing because it doesn't enable the rbd application on the pool.

@Juan, I was able to deploy iSCSI again. However, the behaviour seems inconsistent: the iSCSI services are still not up even though the deployment reports success. Please refer to the output below. The first `ceph orch ls` showed the iSCSI service as not up, a later run showed it up and running, and a still later run showed it as not up again.

[ceph: root@magna094 /]# ceph osd pool application enable iscsipool rbd
enabled application 'rbd' on pool 'iscsipool'
[ceph: root@magna094 /]# ceph orch apply iscsi <poolname> admin admin --placement="1 magna094"
bash: poolname: No such file or directory
[ceph: root@magna094 /]# ceph orch apply iscsi iscsipool admin admin --placement="1 magna094"
Scheduled iscsi.iscsi update...
[ceph: root@magna094 /]# ceph orch ls
NAME                               RUNNING  REFRESHED  AGE  PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID
alertmanager                       1/1      2s ago     10w  count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f
crash                              9/9      4s ago     10w  *                                   mix                                                                                                             dd0a3c51082c
grafana                            1/1      2s ago     10w  count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a
iscsi.iscsi                        0/1      -          -    magna094;count:1                    <unknown>                                                                                                       <unknown>
mds.test                           3/3      4s ago     58m  count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
mgr                                2/2      3s ago     10w  count:2                             mix                                                                                                             dd0a3c51082c
mon                                3/3      4s ago     9w   magna094;magna067;magna073;count:3  mix                                                                                                             dd0a3c51082c
nfs.foo                            1/1      3s ago     6m   count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
node-exporter                      9/9      4s ago     10w  *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf
osd.None                           7/0      4s ago     -    <unmanaged>                         mix                                                                                                             dd0a3c51082c
osd.all-available-devices          16/20    4s ago     3w   <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
osd.dashboard-admin-1605876982239  4/4      4s ago     3w   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
prometheus                         1/1      2s ago     10w  count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225
rgw.myorg.us-east-1                2/2      4s ago     7w   magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
rgw.test_realm.test_zone           0/2      -          -    count:2                             <unknown>                                                                                                       <unknown>

[ceph: root@magna094 /]# ceph orch ls
NAME                               RUNNING  REFRESHED  AGE  PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID
alertmanager                       1/1      12s ago    10w  count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f
crash                              9/9      14s ago    10w  *                                   mix                                                                                                             dd0a3c51082c
grafana                            1/1      12s ago    10w  count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a
iscsi.iscsi                        1/1      -          12s  magna094;count:1                    <unknown>                                                                                                       <unknown>
mds.test                           3/3      14s ago    59m  count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
mgr                                2/2      13s ago    10w  count:2                             mix                                                                                                             dd0a3c51082c
mon                                3/3      14s ago    9w   magna094;magna067;magna073;count:3  mix                                                                                                             dd0a3c51082c
nfs.foo                            1/1      13s ago    6m   count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
node-exporter                      9/9      14s ago    10w  *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf
osd.None                           7/0      14s ago    -    <unmanaged>                         mix                                                                                                             dd0a3c51082c
osd.all-available-devices          16/20    14s ago    3w   <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
osd.dashboard-admin-1605876982239  4/4      14s ago    3w   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
prometheus                         1/1      12s ago    10w  count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225
rgw.myorg.us-east-1                2/2      14s ago    7w   magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
rgw.test_realm.test_zone           0/2      -          -    count:2                             <unknown>                                                                                                       <unknown>

[ceph: root@magna094 /]# ceph orch ls
NAME                               RUNNING  REFRESHED  AGE  PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID
alertmanager                       1/1      23s ago    10w  count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f
crash                              9/9      25s ago    10w  *                                   mix                                                                                                             dd0a3c51082c
grafana                            1/1      23s ago    10w  count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a
iscsi.iscsi                        1/1      -          22s  magna094;count:1                    <unknown>                                                                                                       <unknown>
mds.test                           3/3      25s ago    59m  count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
mgr                                2/2      24s ago    10w  count:2                             mix                                                                                                             dd0a3c51082c
mon                                3/3      24s ago    9w   magna094;magna067;magna073;count:3  mix                                                                                                             dd0a3c51082c
nfs.foo                            1/1      24s ago    7m   count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
node-exporter                      9/9      25s ago    10w  *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf
osd.None                           7/0      25s ago    -    <unmanaged>                         mix                                                                                                             dd0a3c51082c
osd.all-available-devices          16/20    25s ago    3w   <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
osd.dashboard-admin-1605876982239  4/4      24s ago    3w   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
prometheus                         1/1      23s ago    10w  count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225
rgw.myorg.us-east-1                2/2      25s ago    7w   magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
rgw.test_realm.test_zone           0/2      -          -    count:2                             <unknown>                                                                                                       <unknown>

[ceph: root@magna094 /]# ceph orch ls
NAME                               RUNNING  REFRESHED  AGE  PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID
alertmanager                       1/1      25s ago    10w  count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f
crash                              9/9      54s ago    10w  *                                   mix                                                                                                             dd0a3c51082c
grafana                            1/1      25s ago    10w  count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a
iscsi.iscsi                        0/1      25s ago    51s  magna094;count:1                    registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  <unknown>
mds.test                           3/3      54s ago    59m  count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
mgr                                2/2      53s ago    10w  count:2                             mix                                                                                                             dd0a3c51082c
mon                                3/3      54s ago    9w   magna094;magna067;magna073;count:3  mix                                                                                                             dd0a3c51082c
nfs.foo                            1/1      53s ago    7m   count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
node-exporter                      9/9      54s ago    10w  *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf
osd.None                           7/0      54s ago    -    <unmanaged>                         mix                                                                                                             dd0a3c51082c
osd.all-available-devices          16/20    54s ago    3w   <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
osd.dashboard-admin-1605876982239  4/4      54s ago    3w   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
prometheus                         1/1      25s ago    10w  count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225
rgw.myorg.us-east-1                2/2      54s ago    7w   magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
rgw.test_realm.test_zone           0/2      -          -    count:2                             <unknown>                                                                                                       <unknown>

[ceph: root@magna094 /]# ceph orch ls
NAME                               RUNNING  REFRESHED  AGE  PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID
alertmanager                       1/1      42s ago    10w  count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f
crash                              9/9      71s ago    10w  *                                   mix                                                                                                             dd0a3c51082c
grafana                            1/1      42s ago    10w  count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a
iscsi.iscsi                        0/1      42s ago    69s  magna094;count:1                    registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  <unknown>
mds.test                           3/3      71s ago    59m  count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
mgr                                2/2      70s ago    10w  count:2                             mix                                                                                                             dd0a3c51082c
mon                                3/3      71s ago    9w   magna094;magna067;magna073;count:3  mix                                                                                                             dd0a3c51082c
nfs.foo                            1/1      70s ago    7m   count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
node-exporter                      9/9      71s ago    10w  *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf
osd.None                           7/0      71s ago    -    <unmanaged>                         mix                                                                                                             dd0a3c51082c
osd.all-available-devices          16/20    71s ago    3w   <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
osd.dashboard-admin-1605876982239  4/4      71s ago    3w   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
prometheus                         1/1      42s ago    10w  count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225
rgw.myorg.us-east-1                2/2      71s ago    7w   magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-93648-20201117204824  dd0a3c51082c
rgw.test_realm.test_zone           0/2      -          -    count:2                             <unknown>                                                                                                       <unknown>
[ceph: root@magna094 /]#

Cannot reproduce. The next time you see the iscsi service (or any other) with problems, use:
[ceph: root@magna094 /]# ceph orch ls iscsi --format yaml
service_type: iscsi
service_id: iscsi
service_name: iscsi.iscsi
placement:
count: 1
hosts:
- magna094
spec:
api_password: admin
api_user: admin
pool: iscsipool
status:
container_image_id: dd0a3c51082c3c1aba999a6711cc79d31dc6b109a6f4a93a5735ea706f03334d
container_image_name: registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:4b985089d14513ccab29c42e1531bfcb2e98a614c497726153800d72a2ac11f0
created: '2020-12-17T18:45:11.269491'
last_refresh: '2021-02-11T14:50:00.917594'
running: 1
size: 1
This will provide you with a list of events that can be very useful for determining the cause of the problem.
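The running/size counters in that YAML can also be compared mechanically when scripting a health check. A minimal sketch, where the `yaml_status` helper is a hypothetical stub standing in for the live `ceph orch ls iscsi --format yaml` output:

```shell
# yaml_status is a stub for: ceph orch ls iscsi --format yaml
# On a real cluster, pipe the actual command output instead.
yaml_status() {
cat <<'EOF'
status:
  running: 1
  size: 1
EOF
}

# Pull the two counters out of the status block.
running=$(yaml_status | awk '/^ *running:/ {print $2}')
size=$(yaml_status | awk '/^ *size:/ {print $2}')

# The service is healthy when every scheduled daemon is running.
if [ "$running" = "$size" ]; then
  echo "iscsi service healthy ($running/$size)"
else
  echo "iscsi service degraded ($running/$size)"
fi
```

With the stubbed 1/1 status this prints "iscsi service healthy (1/1)"; a 0/1 status would report degraded instead.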
Apart from that, you can query the status of the associated service on the host where the service is running:
[root@magna094 ~]# systemctl status ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6.magna094.thdjui.service
● ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6.magna094.thdjui.service - Ceph iscsi.iscsi.magna094.thdjui for c97c2c8c-0942-11eb-ae18-002590fbecb6
Loaded: loaded (/etc/systemd/system/ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2020-12-22 09:56:24 UTC; 1 months 20 days ago
Main PID: 815154 (conmon)
Tasks: 20 (limit: 204376)
Memory: 50.0M
CGroup: /system.slice/system-ceph\x2dc97c2c8c\x2d0942\x2d11eb\x2dae18\x2d002590fbecb6.slice/ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6.magna094.thdjui.service
└─814846 /bin/podman run --rm --ipc=host --net=host --entrypoint /usr/bin/tcmu-runner --privileged --group-add=disk --name ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-iscsi.iscsi.magna094.thdjui-tcmu -e >
‣ 815154 /usr/libexec/podman/conmon -s -c 04fa58f00802da34dc41a401d54f230199a9cad92569090b403dfe49b9efdc78 -u 04fa58f00802da34dc41a401d54f230199a9cad92569090b403dfe49b9efdc78 -n ceph-c97c2c8c-0942-11eb>
Dec 22 09:56:21 magna094 podman[814945]: 2020-12-22 09:56:21.500153363 +0000 UTC m=+1.270133117 container create 04fa58f00802da34dc41a401d54f230199a9cad92569090b403dfe49b9efdc78 (image=registry-proxy.engineering.>
Dec 22 09:56:21 magna094 podman[814846]: 2020-12-22 09:56:21.583464823 +0000 UTC m=+2.653584034 container init 4afa9b7d6785eca0d19bf314e174467827403e11043fe3bb6a47ac4ba35b617a (image=registry-proxy.engineering.re>
Dec 22 09:56:21 magna094 podman[814846]: 2020-12-22 09:56:21.691772923 +0000 UTC m=+2.761892147 container start 4afa9b7d6785eca0d19bf314e174467827403e11043fe3bb6a47ac4ba35b617a (image=registry-proxy.engineering.r>
Dec 22 09:56:21 magna094 podman[814846]: 2020-12-22 09:56:21.691890444 +0000 UTC m=+2.762009645 container attach 4afa9b7d6785eca0d19bf314e174467827403e11043fe3bb6a47ac4ba35b617a (image=registry-proxy.engineering.>
Dec 22 09:56:21 magna094 bash[814672]: log file path now is '/var/log/tcmu-runner.log'
Dec 22 09:56:21 magna094 bash[814672]: Starting...
Dec 22 09:56:23 magna094 podman[814945]: 2020-12-22 09:56:23.808574477 +0000 UTC m=+3.578554210 container init 04fa58f00802da34dc41a401d54f230199a9cad92569090b403dfe49b9efdc78 (image=registry-proxy.engineering.re>
Dec 22 09:56:23 magna094 podman[814945]: 2020-12-22 09:56:23.908518826 +0000 UTC m=+3.678498549 container start 04fa58f00802da34dc41a401d54f230199a9cad92569090b403dfe49b9efdc78 (image=registry-proxy.engineering.r>
Dec 22 09:56:23 magna094 bash[814672]: 04fa58f00802da34dc41a401d54f230199a9cad92569090b403dfe49b9efdc78
Dec 22 09:56:24 magna094 systemd[1]: Started Ceph iscsi.iscsi.magna094.thdjui for c97c2c8c-0942-11eb-ae18-002590fbecb6.
And besides that, whether the container is running or stopped, you can get the container logs.
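As a sketch of where those logs live: cephadm-managed daemons get systemd units named after the cluster fsid and daemon name, so the log commands can be built from the two values shown in the systemctl output above (both are specific to this cluster and will differ elsewhere):

```shell
# fsid and daemon name taken from the systemctl status output above.
fsid=c97c2c8c-0942-11eb-ae18-002590fbecb6
daemon=iscsi.iscsi.magna094.thdjui

# cephadm units follow the ceph-<fsid>@<daemon>.service naming pattern.
unit="ceph-${fsid}@${daemon}.service"

# Print the two usual log-retrieval commands rather than running them,
# since they only work on the host where the daemon lives.
echo "journalctl -u ${unit}"
echo "cephadm logs --name ${daemon}"
```

`cephadm logs --name <daemon>` reads the journal for that daemon's unit, so either form should show the same container output.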
@Juan, we will deploy the iSCSI service when we have the latest build and verify whether the issue can be reproduced.

I'm setting Fixed In Version to the current downstream build.

The issue is not seen with the latest compose; we see the services running after deployment. Hence, moving the issue to the verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294
Created attachment 1725941 [details] iscsi

Description of problem: [cephadm] 5.0 - ISCSI services are not up after deploying via orch cli commands

Version-Release number of selected component (if applicable):
[root@magna094 ubuntu]# ./cephadm version
Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445
ceph version 16.0.0-6275.el8cp (d1e0606106224ac333f1c245150d7484cb626841) pacific (dev)
[root@magna094 ubuntu]#

How reproducible:

Steps to Reproduce:
1. Install a bootstrap cluster with cephadm and the dashboard service enabled.
2. # cephadm shell
3. Install iSCSI using the below commands:
ceph osd pool create pool-name
rbd pool init pool-name
rbd create image-name --size 4096 --image-feature layering -m 192.168.122.65 -k /etc/ceph/ceph.keyring -p pool-name
rbd map pool-name/image-name --id admin -k /etc/ceph/ceph.keyring
ceph orch apply iscsi pool-name admin admin --placement=<host where you want iscsi>
4. The iSCSI deployment command succeeds, but the iSCSI services are not up, and we do not see them in the ceph orch ls output either.

Actual results: iSCSI is not working and the services are not up.

Expected results: iSCSI should be up and running.

Additional info:
magna094 - bootstrap root/q

podman ps -a output:
[root@magna094 ubuntu]# podman ps -a
CONTAINER ID  IMAGE                                                                                                           COMMAND               CREATED      STATUS          PORTS  NAMES
6c6b98eae024  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  /usr/bin/ganesha....  3 days ago   Up 3 days ago          ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-nfs.ganesha-testnfs.magna094
2a276f50cd3b  docker.io/prom/prometheus:v2.18.1                                                                               /bin/prometheus -...  12 days ago  Up 12 days ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-prometheus.magna094
ceaf29de3e30  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  /usr/bin/ceph-osd...  2 weeks ago  Up 2 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-osd.1
7c1e8291a8e4  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  /usr/bin/ceph-osd...  2 weeks ago  Up 2 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-osd.2
0d418bd31d48  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  /usr/bin/ceph-osd...  2 weeks ago  Up 2 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-osd.0
d61adc9a1d04  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  /usr/bin/ceph-cra...  2 weeks ago  Up 2 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-crash.magna094
fab464f600bc  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  /usr/bin/ceph-mon...  2 weeks ago  Up 2 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-mon.magna094
86864cd80f5a  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  /usr/bin/ceph-mgr...  2 weeks ago  Up 2 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-mgr.magna094.hussmr
e7a9a4c6414a  docker.io/ceph/ceph-grafana:6.6.2                                                                               /bin/sh -c grafan...  3 weeks ago  Up 3 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-grafana.magna094
ed4af8315a7e  docker.io/prom/alertmanager:v0.20.0                                                                             /bin/alertmanager...  3 weeks ago  Up 3 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-alertmanager.magna094
b93cc97bdd69  docker.io/prom/node-exporter:v0.18.1                                                                            /bin/node_exporte...  3 weeks ago  Up 3 weeks ago         ceph-c97c2c8c-0942-11eb-ae18-002590fbecb6-node-exporter.magna094

ceph orch ls output:
[ceph: root@magna094 /]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE  PLACEMENT                           IMAGE NAME                                                                                                      IMAGE ID
alertmanager               1/1      7m ago     3w   count:1                             docker.io/prom/alertmanager:v0.20.0                                                                             0881eb8f169f
crash                      9/9      7m ago     3w   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861
grafana                    1/1      7m ago     3w   count:1                             docker.io/ceph/ceph-grafana:6.6.2                                                                               a0dce381714a
iscsi.iscsi                0/2      -          -    magna092;magna093;count:2           <unknown>                                                                                                       <unknown>
mds.test                   3/3      7m ago     3d   count:3                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861
mgr                        2/2      7m ago     3w   count:2                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861
mon                        3/3      7m ago     2w   magna094;magna067;magna073;count:3  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861
nfs.ganesha-testnfs        1/1      7m ago     3d   count:1                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861
node-exporter              9/9      7m ago     3w   *                                   docker.io/prom/node-exporter:v0.18.1                                                                            e5a616e4b9cf
osd.None                   10/0     7m ago     -    <unmanaged>                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861
osd.all-available-devices  17/17    7m ago     6d   *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861
prometheus                 1/1      7m ago     3w   count:1                             docker.io/prom/prometheus:v2.18.1                                                                               de242295e225
rgw.myorg.us-east-1        2/2      7m ago     4d   magna092;magna093;count:2           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-96803-20201013192445  0158d7274861

Attached screenshot for reference.

[ceph: root@magna094 /]# ceph -s
  cluster:
    id:     c97c2c8c-0942-11eb-ae18-002590fbecb6
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum magna094,magna067,magna073 (age 2w)
    mgr: magna094.hussmr(active, since 2w), standbys: magna067.cudixx
    mds: test:1 {0=test.magna076.xymdrn=up:active} 2 up:standby
    osd: 27 osds: 27 up (since 5d), 27 in (since 5d)
    rgw: 2 daemons active (myorg.us-east-1.magna092.bxiihn, myorg.us-east-1.magna093.nhekwk)
  data:
    pools:   10 pools, 265 pgs
    objects: 433 objects, 6.8 MiB
    usage:   2.2 GiB used, 25 TiB / 25 TiB avail
    pgs:     265 active+clean
  io:
    client:   85 B/s rd, 0 op/s rd, 0 op/s wr
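Given the flapping seen in the repeated `ceph orch ls` runs earlier in this bug (iscsi.iscsi reported 0/1, then 1/1, then 0/1 again within about a minute), one practical habit when scripting checks is to require several consecutive healthy readings rather than trusting a single snapshot, since the orchestrator refreshes its view asynchronously. A minimal sketch, where `check_iscsi` is a hypothetical stub standing in for a real query against the cluster:

```shell
# check_iscsi is a stub; on a live cluster it would parse the RUNNING
# column of "ceph orch ls iscsi" instead of echoing a constant.
check_iscsi() { echo "1/1"; }

# Count consecutive healthy readings; a snapshot taken right after
# "ceph orch apply" may lag behind the actual daemon state.
stable=0
for attempt in 1 2 3 4 5; do
  if [ "$(check_iscsi)" = "1/1" ]; then
    stable=$((stable + 1))
  else
    stable=0          # any bad reading resets the streak
  fi
  # on a real cluster: sleep 10
done
echo "consecutive healthy checks: $stable"
```

With the stub always reporting 1/1, the streak reaches 5; a single flap mid-run would reset it to 0, which is exactly the behaviour that makes the intermittent output in this report easier to catch.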