Bug 1901010 - [cephadm] 5.0 - Node exporter service not coming up after bootstrapping a cluster with registry.redhat.io
Summary: [cephadm]5.0 - Node exporter service not coming up after bootstrap a cluster ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-24 09:23 UTC by Preethi
Modified: 2021-08-30 08:27 UTC
CC List: 2 users

Fixed In Version: ceph-16.0.0-9150.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:27:12 UTC
Embargoed:


Links:
Github ceph/ceph pull 38141 (closed): cephadm: fix podman failure to pull authenticated registry image from systemd unit - last updated 2021-02-15 09:19:17 UTC
Red Hat Issue Tracker RHCEPH-1180 - last updated 2021-08-30 00:15:42 UTC
Red Hat Product Errata RHBA-2021:3294 - last updated 2021-08-30 08:27:26 UTC

Description Preethi 2020-11-24 09:23:45 UTC
Description of problem: [cephadm] 5.0 - The node-exporter service does not come up after bootstrapping a cluster with registry.redhat.io.


Version-Release number of selected component (if applicable):
[root@magna106 ubuntu]# ./cephadm version
Using recent ceph image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest
ceph version 16.0.0-7209.el8cp (dc005a4e27b091d75a4fd83f9972f7fcdf9f2e18) pacific (dev)
[root@magna106 ubuntu]# rpm -qa | grep cephadm
cephadm-16.0.0-7209.el8cp.x86_64
[root@magna106 ubuntu]# 


How reproducible:


Steps to Reproduce:
1. Bootstrap a cluster by following the alpha doc (an example bootstrap command is sketched below).
2. Observe the behaviour of the node-exporter service.
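
For reference, a bootstrap against an authenticated registry typically looks like the following; the monitor IP and registry credentials are placeholders, not values from this report:

[root@magna106 ubuntu]# ./cephadm --image registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest bootstrap \
    --mon-ip <MON_IP> \
    --registry-url registry.redhat.io \
    --registry-username <USERNAME> \
    --registry-password <PASSWORD>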

Actual results: The node-exporter service does not come up; ceph orch ps shows its status, version, image ID, and container ID as <unknown>.




Expected results: The node-exporter service should be running, and its status, version, image ID, and container ID should be displayed.




Workaround: manually pull the node-exporter container image on the host to get the service up and running (a sketch follows).
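
A minimal sketch of the workaround, assuming the host has valid registry.redhat.io credentials and using the image tag listed in the ceph orch output below; the cluster FSID in the unit name is a placeholder, not a value from this report:

[root@magna106 ubuntu]# podman login registry.redhat.io
[root@magna106 ubuntu]# podman pull registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5
[root@magna106 ubuntu]# systemctl restart ceph-<FSID>@node-exporter.magna106.service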

Additional info:
Host: magna106 (login: root/q)


output:
[ceph: root@magna106 /]# ceph orch ps
NAME                    HOST      STATUS          REFRESHED  AGE   VERSION            IMAGE NAME                                                       IMAGE ID      CONTAINER ID  
alertmanager.magna106   magna106  running (100s)  77s ago    5m    0.20.0             registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5   15b662152463  0a7332f76f84  
crash.magna106          magna106  running (5m)    77s ago    5m    16.0.0-7209.el8cp  registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  a62fae9a237c  
grafana.magna106        magna106  running (95s)   77s ago    5m    6.7.4              registry.redhat.io/rhceph-alpha/rhceph-5-dashboard-rhel8:latest  b8f7610c6ea6  10a5e7f11fa4  
mgr.magna106.uilzxe     magna106  running (7m)    77s ago    7m    16.0.0-7209.el8cp  registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  8dd1208d0d81  
mon.magna106            magna106  running (7m)    77s ago    7m    16.0.0-7209.el8cp  registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  9c22b4d68710  
node-exporter.magna106  magna106  unknown         77s ago    86s   <unknown>          registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5  <unknown>     <unknown>     
prometheus.magna106     magna106  running (88s)   77s ago    110s  2.21.0             registry.redhat.io/openshift4/ose-prometheus:v4.6                23c70a072832  9c57c39cb1b1  
[ceph: root@magna106 /]# ceph orch ls
NAME           RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME                                                       IMAGE ID      
alertmanager       1/1  108s ago   7m   count:1    registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5   15b662152463  
crash              1/1  108s ago   7m   *          registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  
grafana            1/1  108s ago   7m   count:1    registry.redhat.io/rhceph-alpha/rhceph-5-dashboard-rhel8:latest  b8f7610c6ea6  
mgr                1/2  108s ago   7m   count:2    registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  
mon                1/5  108s ago   7m   count:5    registry.redhat.io/rhceph-alpha/rhceph-5-rhel8:latest            5e61b782a54e  
node-exporter      0/1  108s ago   7m   *          registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5  <unknown>     
prometheus         1/1  108s ago   7m   count:1    registry.redhat.io/openshift4/ose-prometheus:v4.6                23c70a072832  
[ceph: root@magna106 /]#
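
The linked upstream fix addresses podman failing to pull the authenticated registry image from inside the daemon's systemd unit. One way to inspect that failure on the host, assuming the standard cephadm unit naming (the FSID is a placeholder, not a value from this report):

[root@magna106 ubuntu]# systemctl status ceph-<FSID>@node-exporter.magna106.service
[root@magna106 ubuntu]# journalctl -u ceph-<FSID>@node-exporter.magna106.service --since "1 hour ago"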

Comment 2 Ken Dreyer (Red Hat) 2021-01-15 17:22:49 UTC
We'll take this downstream in the next pacific rebase for 5.0.

Comment 7 Preethi 2021-03-08 17:24:52 UTC
The issue is not seen in the latest alpha drop; node-exporter now reports its running daemons and image ID:

node-exporter                    4/4  8m ago     4d    *                                   registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5                                                               f0a5cfd22f16

Comment 9 errata-xmlrpc 2021-08-30 08:27:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

