
Bug 2408795

Summary: [9.0][CephAdm] Unable to set cephadm-signed cert for RGW service using certmgr enhancements
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Sayalee <saraut>
Component: Cephadm
Assignee: Redouane Kachach Elhichou <rkachach>
Status: CLOSED ERRATA
QA Contact: Sayalee <saraut>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 9.0
CC: adking, akane, cephqe-warriors, rkachach, sabose, tserlin
Target Milestone: ---
Target Release: 9.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-20.1.0-93
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2026-01-29 07:02:50 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Sayalee 2025-10-31 09:11:08 UTC
Description of problem:
-----------------------
When trying to redeploy the RGW service to enable a cephadm-signed certificate, the redeploy does not succeed and no certificate is added.
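
For reference, the spec content that is expected to request a cephadm-signed certificate carries certificate_source: cephadm-signed, roughly as below (field names taken from the exported spec shown in the reproduction steps; this is only a sketch of the intent, not necessarily the exact spec that was applied):

service_type: rgw
service_id: rgw.1
placement:
  label: rgw
spec:
  ssl: true
  certificate_source: cephadm-signed   # cephadm is expected to generate and sign the RGW cert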


Version-Release number of selected component (if applicable):
------------------------------------------------------------
Ceph container image is based on UBI 9:
[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# cat /etc/redhat-release
Red Hat Enterprise Linux release 9.6 (Plow)

Ceph version:
[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# ceph version
ceph version 20.1.0-71.el9cp (c0efca2cdfd50cabbdcd84d19e032c75376bc1b4) tentacle (rc - RelWithDebInfo)

RHEL host is on:
[root@ceph-saraut-r10-0ykf39-node1-installer ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 10.0 (Coughlan)


How reproducible:
--------------------
Always


Steps to Reproduce:
--------------------
1. On an RHCS 9.0 cluster, perform the steps below:

[root@ceph-saraut-r10-0ykf39-node1-installer ~]# cat rgw_new.yaml
service_type: rgw
service_id: rgw.1
service_name: rgw.rgw.1
placement:
  label: rgw
spec:
  rgw_exit_timeout_secs: 120
  ssl: true

[root@ceph-saraut-r10-0ykf39-node1-installer ~]# cephadm shell --mount rgw_new.yaml:/var/lib/ceph/rgw_new.yaml
Inferring fsid f6e30492-b573-11f0-940e-fa163e3dde4a
Inferring config /var/lib/ceph/f6e30492-b573-11f0-940e-fa163e3dde4a/mon.ceph-saraut-r10-0ykf39-node1-installer/config


[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# cat /var/lib/ceph/rgw_new.yaml
service_type: rgw
service_id: rgw.1
service_name: rgw.rgw.1
placement:
  label: rgw
spec:
  rgw_exit_timeout_secs: 120
  ssl: true


[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# ceph orch apply -i /var/lib/ceph/rgw_new.yaml
Scheduled rgw.rgw.1 update...



[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# ceph orch ps --daemon-type rgw
NAME                                           HOST                          PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION          IMAGE ID      CONTAINER ID
rgw.rgw.1.ceph-saraut-r10-0ykf39-node2.dzwrof  ceph-saraut-r10-0ykf39-node2  *:443  running (96m)     2m ago  96m    95.1M        -  20.1.0-71.el9cp  c37cfdc64549  2097ceb5b791


[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# ceph orch ls --service-type rgw --export
service_type: rgw
service_id: rgw.1
service_name: rgw.rgw.1
placement:
  label: rgw
spec:
  certificate_source: cephadm-signed
  rgw_exit_timeout_secs: 120
  ssl: true


[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# ceph orch certmgr cert ls --include-cephadm-signed
grafana_ssl_cert
  scope: host
  certificates
    ceph-saraut-r10-0ykf39-node1-installer
      subject
        countryName: IN
        stateOrProvinceName: MH
        localityName: BLR
        organizationName: IBM
        organizationalUnitName: xyz
        commonName: ibm
        1.2.840.113549.1.9.1: abc
      validity
        remaining_days: 4
cephadm_root_ca_cert
  scope: global
  certificates
    subject
      commonName: cephadm-root-f6e30492-b573-11f0-940e-fa163e3dde4a
    validity
      remaining_days: 3652
cephadm-signed_agent_cert
  scope: host
  certificates
    ceph-saraut-r10-0ykf39-node1-installer
      subject
        commonName: 10.0.65.42
      validity
        remaining_days: 1824
cephadm-signed_grafana_cert
  scope: host
  certificates
    ceph-saraut-r10-0ykf39-node1-installer
      subject
        commonName: 10.0.65.42
      validity
        remaining_days: 1094
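
By analogy with the cephadm-signed_agent_cert and cephadm-signed_grafana_cert entries above, a cephadm-signed RGW entry scoped to the RGW host would be expected in this listing once the certificate is in place, along the lines of the sketch below (the certificate name and field values here are assumptions, drawn only from the analogy); no such entry is present:

cephadm-signed_rgw_cert          <-- name is an assumption, by analogy with the entries above
  scope: host
  certificates
    ceph-saraut-r10-0ykf39-node2
      subject
        commonName: 10.0.65.248
      validity
        remaining_days: <n>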



Actual results:
--------------------
The cephadm-signed certificate for the RGW service is not set; the certmgr listing above contains no certificate entry for the RGW service.


Expected results:
--------------------
The cephadm-signed certificate for the RGW service should be generated, stored, and served by the RGW daemon.
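
One rough way to confirm the expected behavior, once it works, is to re-run the certmgr listing and inspect the certificate actually presented by the RGW endpoint (host address and port taken from the ceph orch ps and ceph orch host ls outputs in this report; treat this as a sketch, not a required procedure):

# the cephadm-signed listing should then include an RGW certificate entry
ceph orch certmgr cert ls --include-cephadm-signed

# the cert served on the RGW SSL port should be issued by the cephadm root CA
# (CN=cephadm-root-<fsid>, as listed under cephadm_root_ca_cert above)
openssl s_client -connect 10.0.65.248:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject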


Additional info:


[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# ceph -s
  cluster:
    id:     f6e30492-b573-11f0-940e-fa163e3dde4a
    health: HEALTH_WARN
            Detected 1 cephadm certificate(s) issues: 1 expiring
            insufficient standby MDS daemons available
            1 pool(s) have non-power-of-two pg_num
            too many PGs per OSD (256 > max 250)

  services:
    mon: 3 daemons, quorum ceph-saraut-r10-0ykf39-node1-installer,ceph-saraut-r10-0ykf39-node2,ceph-saraut-r10-0ykf39-node3 (age 23h) [leader: ceph-saraut-r10-0ykf39-node1-installer]
    mgr: ceph-saraut-r10-0ykf39-node1-installer.kxybbx(active, since 23h), standbys: ceph-saraut-r10-0ykf39-node2.shqsgl
    mds: 1/1 daemons up
    osd: 6 osds: 6 up (since 23h), 6 in (since 23h)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   13 pools, 512 pgs
    objects: 44.32k objects, 1.3 GiB
    usage:   6.1 GiB used, 54 GiB / 60 GiB avail
    pgs:     512 active+clean


[ceph: root@ceph-saraut-r10-0ykf39-node1-installer /]# ceph orch host ls
HOST                                    ADDR         LABELS                              STATUS
ceph-saraut-r10-0ykf39-node1-installer  10.0.65.42   _admin,installer,crash,mon,osd,mgr
ceph-saraut-r10-0ykf39-node2            10.0.65.248  nfs,crash,mon,osd,mgr,rgw
ceph-saraut-r10-0ykf39-node3            10.0.67.145  nfs,crash,mds,mon,osd,mgr
3 hosts in cluster
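
In case it helps triage, the cephadm log should show whether certificate generation was attempted during the redeploy (standard cephadm troubleshooting commands; whether any relevant messages appear is an assumption):

# raise cephadm logging to debug and watch the cluster log
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug
# or, after the fact:
ceph log last cephadm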

Comment 14 errata-xmlrpc 2026-01-29 07:02:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536