Bug 2184376 - [Dashboard] Warning message about vonage-status-panel in podman logs
Summary: [Dashboard] Warning message about vonage-status-panel in podman logs
Keywords:
Status: MODIFIED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Dashboard
Version: 5.3
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: 7.0
Assignee: Nizamudeen
QA Contact: Sayalee
Docs Contact: Anjana Suparna Sriram
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-04 13:02 UTC by Sayalee
Modified: 2023-07-17 05:07 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-6373 0 None None None 2023-04-04 13:04:08 UTC
Red Hat Issue Tracker RHCSDASH-949 0 None None None 2023-04-04 13:04:10 UTC

Description Sayalee 2023-04-04 13:02:04 UTC
Description of problem:
=======================
The Grafana panel issue itself is fixed (see https://bugzilla.redhat.com/show_bug.cgi?id=2133762), but warning messages about the "vonage-status-panel" plugin are still observed in the podman logs.


Version-Release number of selected component (if applicable):
==============================================================
16.2.10-160.el8cp


How reproducible:
=================
Always


Steps to Reproduce:
===================
1) Deploy a Ceph cluster with the dashboard enabled (build 16.2.10-160.el8cp).
2) Run # ceph dashboard get-grafana-api-url to get the Grafana API URL.
3) On the node where the Grafana service is running, run: # podman logs $(podman ps | awk ' /grafana/ { print $1 } ') | grep -i skipping


Actual results:
===============
[root@ceph-saraut-5-3-z2-y4i1cu-node1-installer ~]# podman logs $(podman ps | awk ' /grafana/ { print $1 } ') | grep -i skipping
t=2023-04-04T08:37:26+0000 lvl=warn msg="Skipping finding plugins as directory does not exist" logger=plugin.finder dir=/usr/share/grafana/plugins-bundled
t=2023-04-04T08:37:26+0000 lvl=warn msg="Skipping loading plugin due to problem with signature" logger=plugin.loader pluginID=vonage-status-panel status=unsigned
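For anyone scripting around this check, the offending plugin can be extracted from Grafana's logfmt-style warning line. A minimal sketch using a copy of the line above (the log_line variable is illustrative, not part of the report):

```shell
# Copy of the signature warning from the podman logs above.
log_line='t=2023-04-04T08:37:26+0000 lvl=warn msg="Skipping loading plugin due to problem with signature" logger=plugin.loader pluginID=vonage-status-panel status=unsigned'

# Pull out the pluginID=... field and strip the key, leaving just the plugin name.
plugin_id=$(printf '%s\n' "$log_line" | grep -o 'pluginID=[^ ]*' | cut -d= -f2)
echo "$plugin_id"
```

In a real run, the same pipeline could be fed from podman logs instead of the captured variable.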


Expected results:
=================
As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=2133762#c30, the plugin is installed by the product itself, so these warning messages should NOT appear.
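For context, Grafana provides a configuration knob for exactly this situation: an unsigned plugin that the deployment installs deliberately can be allow-listed so it loads without the signature warning. A hypothetical grafana.ini fragment (this illustrates the upstream Grafana mechanism only; the fix actually shipped in the product may take a different approach, e.g. signing the plugin):

```ini
# Hypothetical grafana.ini fragment -- not necessarily the shipped fix.
[plugins]
# Comma-separated list of unsigned plugin IDs Grafana may load without
# emitting the "problem with signature" warning and skipping the plugin.
allow_loading_unsigned_plugins = vonage-status-panel
```

The same setting can be supplied to a containerized Grafana via the GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS environment variable.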


Additional info:
================
[ceph: root@ceph-saraut-5-3-z2-y4i1cu-node1-installer /]# ceph -s
  cluster:
    id:     6b3e0ea4-d2c3-11ed-b1fd-fa163ed82054
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-saraut-5-3-z2-y4i1cu-node1-installer,ceph-saraut-5-3-z2-y4i1cu-node2,ceph-saraut-5-3-z2-y4i1cu-node3 (age 38m)
    mgr: ceph-saraut-5-3-z2-y4i1cu-node1-installer.tveepw(active, since 39m), standbys: ceph-saraut-5-3-z2-y4i1cu-node2.wmgogu
    mds: 1/1 daemons up, 1 standby
    osd: 18 osds: 18 up (since 35m), 18 in (since 36m)
    rgw: 2 daemons active (2 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   10 pools, 289 pgs
    objects: 44.30k objects, 1.3 GiB
    usage:   5.6 GiB used, 264 GiB / 270 GiB avail
    pgs:     289 active+clean


[root@ceph-saraut-5-3-z2-y4i1cu-node1-installer ~]# cephadm shell
Inferring fsid 6b3e0ea4-d2c3-11ed-b1fd-fa163ed82054
Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:2433c9a2159075f977252dc27a5ed51999269434412ec6b00b94ff16e1172e9d


[ceph: root@ceph-saraut-5-3-z2-y4i1cu-node1-installer /]# ceph mgr services
{
    "dashboard": "https://10.0.209.239:8443/",
    "prometheus": "http://10.0.209.239:9283/"
}


[ceph: root@ceph-saraut-5-3-z2-y4i1cu-node1-installer /]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)": 18
    },
    "mds": {
        "ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)": 2
    },
    "rgw": {
        "ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)": 2
    },
    "overall": {
        "ceph version 16.2.10-160.el8cp (6977980612de1db28e41e0a90ff779627cde7a8c) pacific (stable)": 27
    }
}

