Bug 2187265

Summary: [Dashboard] Landing page has a hyperlink for Manager page even though it does not exist
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Sayalee <saraut>
Component: Ceph-Dashboard Assignee: Nizamudeen <nia>
Status: CLOSED ERRATA QA Contact: Sayalee <saraut>
Severity: high Docs Contact: Akash Raj <akraj>
Priority: unspecified    
Version: 6.1 CC: akraj, ceph-eng-bugs, cephqe-warriors, msaini, nia, tserlin
Target Milestone: ---   
Target Release: 6.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-17.2.6-23.el9cp Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-06-15 09:17:21 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Sayalee 2023-04-17 09:47:58 UTC
Created attachment 1957814 [details]
Check_Manager_hyperlink_on_Dashboard_landing_page

Description of problem:
=======================
The Ceph dashboard landing page shows a hyperlink for the Manager page, even though NO Manager (MGR) page exists.
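For illustration only (this is not the actual ceph-dashboard source, and all names below are hypothetical): the behaviour suggests the landing-page inventory card renders a router link for every daemon type, so the Manager entry links to a route that was never registered, which is what produces the "Page not found" error.

```typescript
// Hypothetical sketch of the problematic pattern, not the ceph-dashboard code.
import { Component } from '@angular/core';

@Component({
  selector: 'cd-inventory-card',
  template: `
    <ul>
      <li *ngFor="let item of items">
        <!-- Every entry is linked, even when no matching page exists. -->
        <a [routerLink]="'/' + item.label.toLowerCase()">
          {{ item.count }} {{ item.label }}
        </a>
      </li>
    </ul>
  `
})
export class InventoryCardComponent {
  items = [
    { label: 'Monitors', count: 3 },
    { label: 'Managers', count: 1 }, // no /managers route is registered
    { label: 'OSDs', count: 18 }
  ];
}
```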


Version-Release number of selected component (if applicable):
=============================================================
ceph version 17.2.6-21.el9cp (9ca345dbaedb31f7b7ef0435c8e6b3f811bbcb19) quincy (stable)


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Deploy a RHCS 6.1 cluster with dashboard enabled
2. Log in to the Dashboard >> go to Inventory >> Manager >> click on the hyperlink


Actual results:
================
Clicking the hyperlink results in a "Page not found" error, which is expected as there is no Manager page
(please check the attached screenshot)


Expected results:
=================
There should not be a hyperlink for Manager, since no Manager page exists
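A minimal sketch of the expected behaviour, assuming an Angular component and item model along these lines (all names are illustrative, not the actual ceph-dashboard code): entries that have no page render as plain text instead of a router link.

```typescript
// Hypothetical sketch (not the ceph-dashboard source): only render a link
// when the inventory item actually has a registered route.
import { Component, Input } from '@angular/core';

interface InventoryItem {
  label: string;   // e.g. "Managers"
  count: number;   // daemon count reported by the backend
  route?: string;  // left undefined for items, like Manager, with no page
}

@Component({
  selector: 'cd-inventory-card-item',
  template: `
    <a *ngIf="item.route; else plainText" [routerLink]="item.route">
      {{ item.count }} {{ item.label }}
    </a>
    <ng-template #plainText>{{ item.count }} {{ item.label }}</ng-template>
  `
})
export class InventoryCardItemComponent {
  @Input() item!: InventoryItem;
}
```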


Additional info:
=================
[ceph: root@ceph-saraut-6-1-ickncj-node1-installer /]# ceph version
ceph version 17.2.6-21.el9cp (9ca345dbaedb31f7b7ef0435c8e6b3f811bbcb19) quincy (stable)


[ceph: root@ceph-saraut-6-1-ickncj-node1-installer /]# ceph -s
  cluster:
    id:     65ab9ea6-dcf6-11ed-9a44-fa163eba73f2
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-saraut-6-1-ickncj-node1-installer,ceph-saraut-6-1-ickncj-node3,ceph-saraut-6-1-ickncj-node2 (age 52m)
    mgr: ceph-saraut-6-1-ickncj-node3.duegqa(active, since 21m)
    mds: 1/1 daemons up, 1 standby
    osd: 18 osds: 18 up (since 50m), 18 in (since 51m)
    rgw: 2 daemons active (2 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   10 pools, 273 pgs
    objects: 44.35k objects, 1.3 GiB
    usage:   6.5 GiB used, 263 GiB / 270 GiB avail
    pgs:     273 active+clean
 
  io:
    client:   71 KiB/s rd, 0 B/s wr, 71 op/s rd, 47 op/s wr

Comment 9 errata-xmlrpc 2023-06-15 09:17:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623