Bug 2160398

Summary: [Ceph-Dashboard] Allow CORS if the origin ip is known
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Sayalee <saraut>
Component: Ceph-Dashboard    Assignee: Nizamudeen <nia>
Status: CLOSED ERRATA QA Contact: Sayalee <saraut>
Severity: urgent Docs Contact: asriram <asriram>
Priority: unspecified    
Version: 5.3    CC: cephqe-warriors, tserlin, vdas, vumrao
Target Milestone: ---   
Target Release: 5.3z1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-16.2.10-98.el8cp Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-02-28 10:06:24 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Sayalee 2023-01-12 09:33:44 UTC
Description of problem:
=======================
The fix in https://github.com/ceph/ceph/pull/49329 for the blue-washing requirement from IBM is not included in the RHCS 5.3 build.
As a result, when hitting the API with an OPTIONS (preflight) call, the Access-Control-Allow-Origin response header is not returned for the request's Origin header.


Version-Release number of selected component (if applicable):
=============================================================
RHCS 5.3
ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)


How reproducible:
================
Always


Steps to Reproduce:
1.
2.
3.

Actual results:
===============
The Access-Control-Allow-Origin response header is not returned for the request's Origin header.


Expected results:
=================
The URL supplied in the Origin request header should be allowed for CORS
(i.e., reflected in the Access-Control-Allow-Origin response header); hence,
backport PR https://github.com/ceph/ceph/pull/49329 to RHCS 5.3.
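The expected behavior can be illustrated with a minimal sketch (this is not the actual dashboard code from the PR; the function name and the allow-list parameter are hypothetical, loosely mirroring a configured list of known origins such as the dashboard's cross-origin setting): the Origin header of the preflight request is compared against the known origins, and Access-Control-Allow-Origin is emitted only on a match.

```python
def cors_headers(origin, allowed_origins):
    """Return CORS response headers for a preflight request.

    origin          -- value of the incoming Origin request header
    allowed_origins -- iterable of origins known to the dashboard
                       (hypothetical stand-in for the configured allow-list)
    """
    if origin in allowed_origins:
        # Known origin: echo it back so the browser permits the request.
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Methods": "GET, POST, PUT, DELETE, OPTIONS",
        }
    # Unknown origin: no CORS headers, so the browser blocks the response.
    return {}
```

With the fix in place, an origin such as the dashboard URL itself would be echoed back, while an unlisted origin would get no CORS headers at all, which matches the "Actual results" failure above.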


Additional info:
================
[ceph: root@vm-596 /]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)": 4
    },
    "mgr": {
        "ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)": 5
    },
    "mds": {},
    "overall": {
        "ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)": 11
    }
}


[ceph: root@vm-596 /]# ceph -s
  cluster:
    id:     6d8162d8-619e-11ed-a7e3-00505684ce60
    health: HEALTH_OK
 
  services:
    mon: 4 daemons, quorum vm-596,vm-446,vm-326,vm-438 (age 2h)
    mgr: vm-596.jiigyc(active, since 2h), standbys: vm-446.qrweoh
    osd: 5 osds: 5 up (since 2h), 5 in (since 8w)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   39 MiB used, 420 GiB / 420 GiB avail
    pgs:     1 active+clean
 

[ceph: root@vm-596 /]# ceph mgr services
{
    "dashboard": "https://9.114.193.98:8443/",
    "prometheus": "http://9.114.193.98:9283/"
}

Comment 5 errata-xmlrpc 2023-02-28 10:06:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980