Bug 2249518

Summary: Add fields to ceph-nvmeof conf
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Aviv Caro <acaro>
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: Rahul Lepakshi <rlepaksh>
Severity: high
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 7.1
CC: adking, akraj, cephqe-warriors, saraut, tserlin
Target Release: 7.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-18.2.1-100.el9cp
Doc Type: No Doc Update
Last Closed: 2024-06-13 14:23:14 UTC
Type: Bug
Bug Blocks: 2267614, 2298578, 2298579

Description Aviv Caro 2023-11-13 16:03:52 UTC
See https://tracker.ceph.com/issues/63343

Comment 4 Rahul Lepakshi 2024-04-29 06:24:28 UTC
Verified

ceph-nvmeof.conf looks like this with the latest downstream builds:

# cat ceph-nvmeof.conf
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.nvmeof_pool.ceph-ibm-ha-v2-81ocsz-node4.tulxkw
group =
addr = 10.0.211.180
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5
enable_spdk_discovery_controller = False
enable_prometheus_exporter = True
prometheus_exporter_ssl = False
prometheus_port = 10008
verify_nqns = True
omap_file_lock_duration = 60
omap_file_lock_retries = 15
omap_file_lock_retry_sleep_interval = 5
omap_file_update_reloads = 10

[gateway-logs]
log_level = INFO
log_files_enabled = True
log_files_rotation_enabled = True
verbose_log_messages = True
max_log_file_size_in_mb = 10
max_log_files_count = 20
max_log_directory_backups = 10
log_directory = /var/log/ceph/

[discovery]
addr = 10.0.211.180
port = 8009

[ceph]
pool = nvmeof_pool
config_file = /etc/ceph/ceph.conf
id = nvmeof.nvmeof_pool.ceph-ibm-ha-v2-81ocsz-node4.tulxkw

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket_dir = /var/tmp/
rpc_socket_name = spdk.sock
timeout = 60.0
bdevs_per_cluster = 32
log_level = WARNING
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
tgt_cmd_extra_args = --cpumask=0xF

[monitor]
timeout = 1.0
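
For reference, cephadm renders ceph-nvmeof.conf from the nvmeof service spec, so the fields above are driven by the spec applied with "ceph orch apply -i <spec>.yaml". Below is a minimal sketch of such a spec; the pool, service_id, and host values are taken from the config above, and the tunable field names (enable_auth, tgt_cmd_extra_args) are assumptions based on the upstream NvmeofServiceSpec rather than a verified downstream example:

service_type: nvmeof
service_id: nvmeof_pool
placement:
  hosts:
    - ceph-ibm-ha-v2-81ocsz-node4
spec:
  pool: nvmeof_pool
  # assumed field names; check the shipped spec schema before relying on them
  enable_auth: false
  tgt_cmd_extra_args: "--cpumask=0xF"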

Comment 7 errata-xmlrpc 2024-06-13 14:23:14 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

Comment 8 Red Hat Bugzilla 2024-11-16 04:25:13 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.