See https://tracker.ceph.com/issues/63343
Verified ceph-nvmeof.conf looks like this with the latest downstream builds:

# cat ceph-nvmeof.conf
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.nvmeof_pool.ceph-ibm-ha-v2-81ocsz-node4.tulxkw
group =
addr = 10.0.211.180
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5
enable_spdk_discovery_controller = False
enable_prometheus_exporter = True
prometheus_exporter_ssl = False
prometheus_port = 10008
verify_nqns = True
omap_file_lock_duration = 60
omap_file_lock_retries = 15
omap_file_lock_retry_sleep_interval = 5
omap_file_update_reloads = 10

[gateway-logs]
log_level = INFO
log_files_enabled = True
log_files_rotation_enabled = True
verbose_log_messages = True
max_log_file_size_in_mb = 10
max_log_files_count = 20
max_log_directory_backups = 10
log_directory = /var/log/ceph/

[discovery]
addr = 10.0.211.180
port = 8009

[ceph]
pool = nvmeof_pool
config_file = /etc/ceph/ceph.conf
id = nvmeof.nvmeof_pool.ceph-ibm-ha-v2-81ocsz-node4.tulxkw

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket_dir = /var/tmp/
rpc_socket_name = spdk.sock
timeout = 60.0
bdevs_per_cluster = 32
log_level = WARNING
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
tgt_cmd_extra_args = --cpumask=0xF

[monitor]
timeout = 1.0
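Since the file is generated by cephadm from the nvmeof service spec, a quick way to cross-check it is to dump the spec and the running daemon from the orchestrator. This is a minimal sketch assuming a standard cephadm deployment; the service/pool name nvmeof_pool and the node name are taken from the config above:

  # ceph orch ls nvmeof --export        # dump the nvmeof service spec cephadm deployed from
  # ceph orch ps | grep nvmeof          # confirm the gateway daemon is running on ceph-ibm-ha-v2-81ocsz-node4

The [gateway] name and [ceph] id in the generated file should match the daemon name reported by ceph orch ps.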
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.