Description of problem:
========================
[RHCS][8.0z4][Deployment] Bootstrap failing with No module named 'cephadm.services.service_registry'

# cephadm --image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 bootstrap --mon-ip 10.0.65.104
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 5.2.2 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 7ec1e0f2-35f1-11f0-b824-fa163e866dc1
Verifying IP 10.0.65.104 port 3300 ...
Verifying IP 10.0.65.104 port 6789 ...
Mon IP `10.0.65.104` is in CIDR network `10.0.64.0/22`
Mon IP `10.0.65.104` is in CIDR network `10.0.64.0/22`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456...
Ceph version: ceph version 19.2.0-135.el9cp (d94e9e9f67330aef0554e1588bc56394197d18bc) squid (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
firewalld ready
Enabling firewalld service ceph-mon in current zone...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting public_network to 10.0.64.0/22 in global config section
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 0.0.0.0:9283 ...
Verifying port 0.0.0.0:8765 ...
Verifying port 0.0.0.0:8443 ...
firewalld ready
Enabling firewalld service ceph in current zone...
firewalld ready
Enabling firewalld port 9283/tcp in current zone...
Enabling firewalld port 8765/tcp in current zone...
Enabling firewalld port 8443/tcp in current zone...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr is available
Enabling cephadm module...
Non-zero exit code 2 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 -e NODE_NAME=ceph-mobisht-firewall-7rkrsd-node4 -v /var/log/ceph/7ec1e0f2-35f1-11f0-b824-fa163e866dc1:/var/log/ceph:z -v /tmp/ceph-tmpm8vhpgu1:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp014wwa6d:/etc/ceph/ceph.conf:z registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 mgr module enable cephadm
/usr/bin/ceph: stderr Error ENOENT: module 'cephadm' reports that it cannot run on the active manager daemon: No module named 'cephadm.services.service_registry' (pass --force to force enablement)
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 -e NODE_NAME=ceph-mobisht-firewall-7rkrsd-node4 -v /var/log/ceph/7ec1e0f2-35f1-11f0-b824-fa163e866dc1:/var/log/ceph:z -v /tmp/ceph-tmpm8vhpgu1:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp014wwa6d:/etc/ceph/ceph.conf:z registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 mgr module enable cephadm: Error ENOENT: module 'cephadm' reports that it cannot run on the active manager daemon: No module named 'cephadm.services.service_registry' (pass --force to force enablement)

***************
Cephadm hit an issue during cluster installation. Current cluster files will be deleted automatically.
To disable this behaviour you can pass the --no-cleanup-on-failure flag.
In case of any previous broken installation, users must use the following command to completely delete the broken cluster:

> cephadm rm-cluster --force --zap-osds --fsid <fsid>

for more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster
***************

Deleting cluster with fsid: 7ec1e0f2-35f1-11f0-b824-fa163e866dc1
Traceback (most recent call last):
  File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/usr/sbin/cephadm/__main__.py", line 5989, in <module>
  File "/usr/sbin/cephadm/__main__.py", line 5977, in main
  File "/usr/sbin/cephadm/__main__.py", line 2702, in _rollback
  File "/usr/sbin/cephadm/__main__.py", line 453, in _default_image
  File "/usr/sbin/cephadm/__main__.py", line 3060, in command_bootstrap
  File "/usr/sbin/cephadm/__main__.py", line 2441, in enable_cephadm_mgr_module
  File "/usr/sbin/cephadm/__main__.py", line 2978, in cli
  File "/usr/sbin/cephadm/cephadmlib/container_types.py", line 429, in run
  File "/usr/sbin/cephadm/cephadmlib/call_wrappers.py", line 307, in call_throws
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 -e NODE_NAME=ceph-mobisht-firewall-7rkrsd-node4 -v /var/log/ceph/7ec1e0f2-35f1-11f0-b824-fa163e866dc1:/var/log/ceph:z -v /tmp/ceph-tmpm8vhpgu1:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp014wwa6d:/etc/ceph/ceph.conf:z registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 mgr module enable cephadm: Error ENOENT: module 'cephadm' reports that it cannot run on the active manager daemon: No module named 'cephadm.services.service_registry' (pass --force to force enablement)
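For reference: if bootstrap is rerun with --no-cleanup-on-failure (so the cluster files survive the failure), the forced enablement hinted at in the error message could be attempted from a cephadm shell. This is a sketch for completeness only; since the service_registry code is missing from the build itself, forcing the module on is not expected to yield a working orchestrator:

# cephadm shell -- ceph mgr module enable cephadm --force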
Version-Release number of selected component (if applicable):
==============================================================
19.2.0-135

How reproducible:
==================
Always

Steps to Reproduce:
====================
1. Bootstrap a Ceph cluster

Actual results:
================
Bootstrap fails with the following error:
mgr module enable cephadm: Error ENOENT: module 'cephadm' reports that it cannot run on the active manager daemon: No module named 'cephadm.services.service_registry' (pass --force to force enablement)

Expected results:
=================
Bootstrap should succeed.

Additional info:
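The missing file can be confirmed by listing the cephadm mgr module's contents directly in the image. This is a diagnostic sketch, assuming the mgr modules ship under /usr/share/ceph/mgr (the usual install path); on the affected build, service_registry.py is expected to be absent from the listing:

# podman run --rm --entrypoint ls registry-proxy.engineering.redhat.com/rh-osbs/rhceph:8-456 /usr/share/ceph/mgr/cephadm/services/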
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 bug fix updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:8259
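Once an image containing the fix is available, the failed attempt can be purged and bootstrap retried. The commands below are a sketch: <fixed-image> is a placeholder for the image delivered by the advisory, and the fsid is the one printed by the failed bootstrap above:

# cephadm rm-cluster --force --zap-osds --fsid 7ec1e0f2-35f1-11f0-b824-fa163e866dc1
# cephadm --image <fixed-image> bootstrap --mon-ip 10.0.65.104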