Description of problem:
=======================
[IBMSC][8.0][Live][Bootstrap] Bootstrap prints the log message "This is a development version of cephadm."

While bootstrapping with the default image, we observe the following log message:

Message: This is a development version of cephadm. For information regarding the latest stable release: https://docs.ceph.com/docs/squid/cephadm/install

Note: QE did not observe this in test runs, since bootstrap was done with a test image explicitly (first major release).

RCA:
====
DEFAULT_IMAGE_IS_MAIN = True
https://github.com/ceph/ceph/blob/squid/src/cephadm/cephadmlib/constants.py

Log:
====
# cephadm bootstrap --mon-ip 10.0.209.126 --registry-url cp.icr.io/cp --registry-username cp --registry-password <> --yes-i-know
This is a development version of cephadm.
For information regarding the latest stable release:
  https://docs.ceph.com/docs/squid/cephadm/install
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 5.2.2 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: c40784d4-a823-11ef-8abf-fa163e0c47de
Verifying IP 10.0.209.126 port 3300 ...
Verifying IP 10.0.209.126 port 6789 ...
Mon IP `10.0.209.126` is in CIDR network `10.0.208.0/22`
Mon IP `10.0.209.126` is in CIDR network `10.0.208.0/22`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Logging into custom registry.
Pulling container image cp.icr.io/cp/ibm-ceph/ceph-8-rhel9:latest...
Ceph version: ceph version 19.2.0-53.el9cp (677d8728b1c91c14d54eedf276ac61de636606f8) squid (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting public_network to 10.0.208.0/22 in global config section
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 0.0.0.0:9283 ...
Verifying port 0.0.0.0:8765 ...
Verifying port 0.0.0.0:8443 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Waiting for orchestrator module...
orchestrator module is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host mobisht-rhel9-live...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying ceph-exporter service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Waiting for orchestrator module...
orchestrator module is available
Using certmgr to generate dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
        URL: https://mobisht-rhel9-live:8443/
        User: admin
        Password: exw5u6i9qj
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/c40784d4-a823-11ef-8abf-fa163e0c47de/config directory
Skipping call home integration. --enable-ibm-call-home not provided
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

        sudo /usr/sbin/cephadm shell --fsid c40784d4-a823-11ef-8abf-fa163e0c47de -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

        sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/en/latest/mgr/telemetry/

Bootstrap complete.

Version-Release number of selected component (if applicable):
=============================================================
19.2.0-53

How reproducible:
Always

Steps to Reproduce:
1. Bootstrap with the default image

Actual results:
Bootstrap completes with the log message "This is a development version of cephadm."

Expected results:
The log output should not contain "This is a development version of cephadm."

Additional info:
Repo: https://public.dhe.ibm.com/ibmdl/export/pub/storage/ceph/ibm-storage-ceph-8-rhel-9.repo
reef and earlier have DEFAULT_IMAGE_IS_MAIN = False; squid has DEFAULT_IMAGE_IS_MAIN = True:
https://github.com/ceph/ceph/blob/squid/src/cephadm/cephadmlib/constants.py
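A minimal sketch of how the flag gates the banner. The constant DEFAULT_IMAGE_IS_MAIN and its location (src/cephadm/cephadmlib/constants.py) are real, but the surrounding function here is illustrative, not cephadm's actual code:

```python
# DEFAULT_IMAGE_IS_MAIN comes from cephadmlib/constants.py.
# reef and earlier shipped with False; the squid branch left it True,
# so release builds of cephadm still print the development banner.
DEFAULT_IMAGE_IS_MAIN = True


def bootstrap_banner() -> str:
    """Return the banner bootstrap prints before the host checks.

    Hypothetical helper for illustration: an empty string means no
    development-version warning is emitted.
    """
    if DEFAULT_IMAGE_IS_MAIN:
        return (
            "This is a development version of cephadm.\n"
            "For information regarding the latest stable release:\n"
            "  https://docs.ceph.com/docs/squid/cephadm/install"
        )
    return ""


print(bootstrap_banner())
```

With the flag set back to False for a stable release, the function returns an empty banner and the message disappears from the bootstrap log.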
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 8.0 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:10956