Description of problem:

After enabling the mgmt-gateway service, deploying the oauth2-proxy service with the spec file below takes down the monitoring stack services, including mgmt-gateway, prometheus, grafana and alertmanager:

# ceph orch apply mgmt-gateway --enable_auth=true --placement=ceph-vpap-81sso-oy43lr-node1-installer

# cat oauth2-proxy.yaml
service_type: oauth2-proxy
service_name: oauth2-proxy
placement:
  hosts:
    - ceph-vpap-81sso-oy43lr-node1-installer
spec:
  provider_display_name: "My OIDC Provider"
  client_id: "b9ad733c-6fe4-4539-9a99-e58fce9664aa"
  client_secret: "5V60wV6Nwk"
  oidc_issuer_url: "https://vpapnoi-mgmt-test.verify.ibm.com/oidc/endpoint/default"
  allowlist_domains:
    - 10.0.67.246:8080
    - 10.0.67.246
    - ceph-vpap-81sso-oy43lr-node1-installer
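The exact command used to apply this spec was not captured above; presumably it was the standard spec-file form of ceph orch apply, e.g.:

# ceph orch apply -i oauth2-proxy.yaml

Afterwards, all monitoring stack daemons are stuck in the starting state: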
# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAINER ID
alertmanager.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  *:9093,9094  starting  -  -  -  -  <unknown>  <unknown>  <unknown>
ceph-exporter.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  running (4m)  2m ago  27h  7755k  -  19.2.1-206.el9cp  67fea47ba308  959abad6d369
ceph-exporter.ceph-vpap-81sso-oy43lr-node2  ceph-vpap-81sso-oy43lr-node2  running (4m)  4m ago  27h  7412k  -  19.2.1-206.el9cp  67fea47ba308  614bae661f9c
ceph-exporter.ceph-vpap-81sso-oy43lr-node3  ceph-vpap-81sso-oy43lr-node3  running (4m)  4m ago  27h  7660k  -  19.2.1-206.el9cp  67fea47ba308  431c5d979ea5
crash.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  running (27h)  2m ago  27h  6576k  -  19.2.1-206.el9cp  67fea47ba308  7ee7d043c51c
crash.ceph-vpap-81sso-oy43lr-node2  ceph-vpap-81sso-oy43lr-node2  running (27h)  4m ago  27h  6580k  -  19.2.1-206.el9cp  67fea47ba308  e59ed59d46f5
crash.ceph-vpap-81sso-oy43lr-node3  ceph-vpap-81sso-oy43lr-node3  running (27h)  4m ago  27h  6580k  -  19.2.1-206.el9cp  67fea47ba308  5c031a7aad89
grafana.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  *:3000  starting  -  -  -  -  <unknown>  <unknown>  <unknown>
mds.cephfs.ceph-vpap-81sso-oy43lr-node2.fgaadb  ceph-vpap-81sso-oy43lr-node2  running (27h)  4m ago  27h  24.9M  -  19.2.1-206.el9cp  67fea47ba308  b2392f6d0284
mds.cephfs.ceph-vpap-81sso-oy43lr-node3.zuqwhu  ceph-vpap-81sso-oy43lr-node3  running (27h)  4m ago  27h  284M  -  19.2.1-206.el9cp  67fea47ba308  9a7dcc45a50d
mgmt-gateway.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  starting  -  -  -  -  <unknown>  <unknown>  <unknown>
mgr.ceph-vpap-81sso-oy43lr-node1-installer.cktncn  ceph-vpap-81sso-oy43lr-node1-installer  *:9283,8765,8443  running (27h)  2m ago  27h  551M  -  19.2.1-206.el9cp  67fea47ba308  838ec0dfce7c
mgr.ceph-vpap-81sso-oy43lr-node3.fuvrws  ceph-vpap-81sso-oy43lr-node3  *:8443,9283,8765  running (27h)  4m ago  27h  444M  -  19.2.1-206.el9cp  67fea47ba308  67e02830244d
mon.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  running (27h)  2m ago  27h  386M  2048M  19.2.1-206.el9cp  67fea47ba308  e59822db07f5
mon.ceph-vpap-81sso-oy43lr-node2  ceph-vpap-81sso-oy43lr-node2  running (27h)  4m ago  27h  376M  2048M  19.2.1-206.el9cp  67fea47ba308  c755a3f35e09
mon.ceph-vpap-81sso-oy43lr-node3  ceph-vpap-81sso-oy43lr-node3  running (27h)  4m ago  27h  372M  2048M  19.2.1-206.el9cp  67fea47ba308  69b4414edb88
node-exporter.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  *:9100  running (4m)  2m ago  27h  15.2M  -  1.7.0  f22df3572881  157b21b8dbf5
node-exporter.ceph-vpap-81sso-oy43lr-node2  ceph-vpap-81sso-oy43lr-node2  *:9100  running (4m)  4m ago  27h  5860k  -  1.7.0  f22df3572881  121ef70dc867
node-exporter.ceph-vpap-81sso-oy43lr-node3  ceph-vpap-81sso-oy43lr-node3  *:9100  running (4m)  4m ago  27h  5127k  -  1.7.0  f22df3572881  62d22b3f30da
oauth2-proxy.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  *:4180  running (2m)  2m ago  2m  7391k  -  v7.6.0  1f99cc771ffe  5eeee6313a6d
osd.0  ceph-vpap-81sso-oy43lr-node2  running (27h)  4m ago  27h  433M  4096M  19.2.1-206.el9cp  67fea47ba308  40bc8aaebbb8
osd.1  ceph-vpap-81sso-oy43lr-node1-installer  running (27h)  2m ago  27h  467M  4096M  19.2.1-206.el9cp  67fea47ba308  6741cc14f8a6
osd.2  ceph-vpap-81sso-oy43lr-node3  running (27h)  4m ago  27h  480M  4096M  19.2.1-206.el9cp  67fea47ba308  74487d61b55b
osd.3  ceph-vpap-81sso-oy43lr-node2  running (27h)  4m ago  27h  520M  4096M  19.2.1-206.el9cp  67fea47ba308  796ea3782a50
osd.4  ceph-vpap-81sso-oy43lr-node1-installer  running (27h)  2m ago  27h  459M  4096M  19.2.1-206.el9cp  67fea47ba308  aab577a6c38c
osd.5  ceph-vpap-81sso-oy43lr-node3  running (27h)  4m ago  27h  531M  4096M  19.2.1-206.el9cp  67fea47ba308  961a7509ac35
osd.6  ceph-vpap-81sso-oy43lr-node2  running (27h)  4m ago  27h  461M  4096M  19.2.1-206.el9cp  67fea47ba308  19f0c16abdba
osd.7  ceph-vpap-81sso-oy43lr-node1-installer  running (27h)  2m ago  27h  496M  4096M  19.2.1-206.el9cp  67fea47ba308  337a62f8dffb
osd.8  ceph-vpap-81sso-oy43lr-node3  running (27h)  4m ago  27h  408M  4096M  19.2.1-206.el9cp  67fea47ba308  35362bc22c73
prometheus.ceph-vpap-81sso-oy43lr-node1-installer  ceph-vpap-81sso-oy43lr-node1-installer  *:9095  starting  -  -  -  -  <unknown>  <unknown>  <unknown>
rgw.rgw.1.ceph-vpap-81sso-oy43lr-node2.xwgoww  ceph-vpap-81sso-oy43lr-node2  *:80  running (27h)  4m ago  27h  96.4M  -  19.2.1-206.el9cp  67fea47ba308  d54d90404d78
rgw.rgw.1.ceph-vpap-81sso-oy43lr-node3.wtmujf  ceph-vpap-81sso-oy43lr-node3  *:80  running (27h)  4m ago  27h  92.4M  -  19.2.1-206.el9cp  67fea47ba308  4df66cf99baa

# ceph orch ls
NAME                       PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094  0/1      -          28h  count:1
ceph-exporter                           3/3      8m ago     28h  *
crash                                   3/3      8m ago     28h  *
grafana                    ?:3000       0/1      -          28h  count:1
mds.cephfs                              2/2      8m ago     28h  label:mds
mgmt-gateway                            0/1      -          9m   ceph-vpap-81sso-oy43lr-node1-installer
mgr                                     2/2      8m ago     28h  count:2
mon                                     3/5      8m ago     28h  count:5
node-exporter              ?:9100       3/3      8m ago     28h  *
oauth2-proxy               ?:4180       1/1      52s ago    57s  ceph-vpap-81sso-oy43lr-node1-installer
osd.all-available-devices               9        8m ago     28h  *
prometheus                 ?:9095       0/1      -          28h  count:1
rgw.rgw.1                  ?:80         2/2      8m ago     28h  label:rgw

The following traceback is observed:

Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/module.py", line 803, in serve
    serve.serve()
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 123, in serve
    self._check_daemons()
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 1138, in _check_daemons
    deps = self.mgr._calc_daemon_deps(spec, dd.daemon_type, dd.daemon_id, dd.hostname)
  File "/usr/share/ceph/mgr/cephadm/module.py", line 3145, in _calc_daemon_deps
    deps = svc_cls.get_dependencies(self, spec, daemon_type, hostname) if svc_cls else []
TypeError: get_dependencies() takes from 2 to 4 positional arguments but 5 were given

Version-Release number of selected component (if applicable):
ceph 8.1, 19.2.1-208.el9cp

How reproducible:
Always

Steps to Reproduce:
1. Deploy a Ceph 8.1 cluster.
2. Deploy the mgmt-gateway and oauth2-proxy services as described above.

Actual results:
The monitoring stack services (alertmanager, grafana, prometheus and mgmt-gateway) go down and remain stuck in the starting state.

Expected results:
The monitoring stack services should be reconfigured with the oauth2-proxy settings and return to the running state.

Additional info:
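The TypeError points to a signature mismatch inside the cephadm mgr module: _calc_daemon_deps() passes the daemon's hostname as an additional positional argument, but the matched service class still defines get_dependencies() with an older, shorter signature. Because the exception escapes _check_daemons(), the serve loop aborts before the monitoring daemons can be reconfigured, which is consistent with them being stuck in "starting" above. Below is a minimal, self-contained Python sketch of the mismatch; the class and argument names are hypothetical stand-ins, not the actual cephadm code:

# reproduce_signature_mismatch.py
# Hypothetical stand-in for a cephadm service class whose
# get_dependencies() was not updated when the caller grew an
# extra positional argument.
class DemoService:
    @classmethod
    def get_dependencies(cls, mgr, spec=None, daemon_type=None):
        # Old signature: accepts 2 to 4 positional arguments,
        # counting the implicit 'cls' bound by @classmethod.
        return []

# The updated caller also passes a hostname, i.e. 5 positional
# arguments counting 'cls', which the old signature rejects:
DemoService.get_dependencies("mgr", {"service_type": "oauth2-proxy"},
                             "prometheus", "node1-installer")
# -> TypeError: get_dependencies() takes from 2 to 4 positional
#    arguments but 5 were given

The fix is therefore expected on the mgr side: once get_dependencies() and its callers agree on the signature again, _check_daemons() can complete and cephadm can reconfigure prometheus, grafana, alertmanager and mgmt-gateway with the oauth2-proxy settings.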
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 8.1 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2025:9775