Description of problem:
[cephadm] 5.0 - Node exporter service not coming up after bootstrapping a cluster with registry.redhat.io

Version-Release number of selected component (if applicable):
[root@magna106 ubuntu]# ./cephadm version
Using recent ceph image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest
ceph version 16.0.0-7209.el8cp (dc005a4e27b091d75a4fd83f9972f7fcdf9f2e18) pacific (dev)
[root@magna106 ubuntu]# rpm -qa | grep cephadm
cephadm-16.0.0-7209.el8cp.x86_64
[root@magna106 ubuntu]#

How reproducible:

Steps to Reproduce:
1. Bootstrap a cluster by following the alpha doc
2. Observe the behaviour

Actual results:
The node exporter does not report status, version, image ID, or container ID.

Expected results:
The service status should be displayed along with the above details.

Workaround:
Manually pull the container image to get the node exporter service up and running.

Additional info:
magna106 root/q

output:
[ceph: root@magna106 /]# ceph orch ps
NAME                    HOST      STATUS          REFRESHED  AGE   VERSION            IMAGE NAME                                                       IMAGE ID      CONTAINER ID
alertmanager.magna106   magna106  running (100s)  77s ago    5m    0.20.0             registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5   15b662152463  0a7332f76f84
crash.magna106          magna106  running (5m)    77s ago    5m    16.0.0-7209.el8cp  registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  a62fae9a237c
grafana.magna106        magna106  running (95s)   77s ago    5m    6.7.4              registry.redhat.io/rhceph-alpha/rhceph-5-dashboard-rhel8:latest  b8f7610c6ea6  10a5e7f11fa4
mgr.magna106.uilzxe     magna106  running (7m)    77s ago    7m    16.0.0-7209.el8cp  registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  8dd1208d0d81
mon.magna106            magna106  running (7m)    77s ago    7m    16.0.0-7209.el8cp  registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  9c22b4d68710
node-exporter.magna106  magna106  unknown         77s ago    86s   <unknown>          registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5  <unknown>     <unknown>
prometheus.magna106     magna106  running (88s)   77s ago    110s  2.21.0             registry.redhat.io/openshift4/ose-prometheus:v4.6                23c70a072832  9c57c39cb1b1

[ceph: root@magna106 /]# ceph orch ls
NAME           RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME                                                       IMAGE ID
alertmanager   1/1      108s ago   7m   count:1    registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5   15b662152463
crash          1/1      108s ago   7m   *          registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e
grafana        1/1      108s ago   7m   count:1    registry.redhat.io/rhceph-alpha/rhceph-5-dashboard-rhel8:latest  b8f7610c6ea6
mgr            1/2      108s ago   7m   count:2    registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e
mon            1/5      108s ago   7m   count:5    registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e
node-exporter  0/1      108s ago   7m   *          registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5  <unknown>
prometheus     1/1      108s ago   7m   count:1    registry.redhat.io/openshift4/ose-prometheus:v4.6                23c70a072832
[ceph: root@magna106 /]#
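The workaround (manually pulling the image) can be sketched as follows. This is an illustrative sequence, not a verified procedure: the image name is taken from the `ceph orch ps` output above, the host name is this cluster's, and it assumes podman with valid registry.redhat.io credentials on the affected host.

```shell
# On the affected host, pull the monitoring image that cephadm failed to fetch.
# (registry.redhat.io requires authentication; log in first if needed.)
podman login registry.redhat.io
podman pull registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5

# Then, from a cephadm shell, redeploy the stuck daemon so the orchestrator
# restarts it with the now-locally-available image.
ceph orch daemon redeploy node-exporter.magna106
```

After the redeploy, `ceph orch ps` should show the node-exporter daemon as running with its version, image ID, and container ID populated.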
We'll take this downstream in the next pacific rebase for 5.0.
Issue is not seen in the latest alpha drop:

node-exporter  4/4  8m ago  4d  *  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5  f0a5cfd22f16
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294