Bug 2249518 - Add fields to ceph-nvmeof conf
Summary: Add fields to ceph-nvmeof conf
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.1
Assignee: Adam King
QA Contact: Rahul Lepakshi
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2267614 2298578 2298579
 
Reported: 2023-11-13 16:03 UTC by Aviv Caro
Modified: 2024-11-16 04:25 UTC
CC List: 5 users

Fixed In Version: ceph-18.2.1-100.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-06-13 14:23:14 UTC
Embargoed:




Links
System                   ID               Last Updated
Red Hat Issue Tracker    RHCEPH-7899      2023-11-13 16:05:30 UTC
Red Hat Product Errata   RHSA-2024:3925   2024-06-13 14:23:20 UTC

Description Aviv Caro 2023-11-13 16:03:52 UTC
See https://tracker.ceph.com/issues/63343

Comment 4 Rahul Lepakshi 2024-04-29 06:24:28 UTC
Verified

With the latest downstream builds, the generated ceph-nvmeof.conf looks like this:

# cat ceph-nvmeof.conf
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.nvmeof_pool.ceph-ibm-ha-v2-81ocsz-node4.tulxkw
group =
addr = 10.0.211.180
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5
enable_spdk_discovery_controller = False
enable_prometheus_exporter = True
prometheus_exporter_ssl = False
prometheus_port = 10008
verify_nqns = True
omap_file_lock_duration = 60
omap_file_lock_retries = 15
omap_file_lock_retry_sleep_interval = 5
omap_file_update_reloads = 10

[gateway-logs]
log_level = INFO
log_files_enabled = True
log_files_rotation_enabled = True
verbose_log_messages = True
max_log_file_size_in_mb = 10
max_log_files_count = 20
max_log_directory_backups = 10
log_directory = /var/log/ceph/

[discovery]
addr = 10.0.211.180
port = 8009

[ceph]
pool = nvmeof_pool
config_file = /etc/ceph/ceph.conf
id = nvmeof.nvmeof_pool.ceph-ibm-ha-v2-81ocsz-node4.tulxkw

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket_dir = /var/tmp/
rpc_socket_name = spdk.sock
timeout = 60.0
bdevs_per_cluster = 32
log_level = WARNING
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
tgt_cmd_extra_args = --cpumask=0xF

[monitor]
timeout = 1.0
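
For reference, here is a minimal verification sketch: it parses the generated ceph-nvmeof.conf with Python's configparser and checks that a sample of the sections and fields shown above are present. The file path and the field list are assumptions drawn from this comment, not from the cephadm code, and the list is not necessarily the exact set of fields added by this bug.

#!/usr/bin/env python3
# Minimal verification sketch (not part of the cephadm change): parse the
# generated ceph-nvmeof.conf and confirm that the sections/fields sampled
# from the output above are present. CONF_PATH and EXPECTED are assumptions
# based on this comment only.
import configparser

CONF_PATH = "ceph-nvmeof.conf"  # hypothetical local copy of the generated file

EXPECTED = {
    "gateway": ["verify_nqns", "omap_file_lock_duration",
                "omap_file_lock_retries", "omap_file_update_reloads"],
    "gateway-logs": ["log_level", "max_log_file_size_in_mb"],
    "mtls": ["server_key", "client_key", "server_cert", "client_cert"],
    "spdk": ["bdevs_per_cluster", "transport_tcp_options"],
}

cfg = configparser.ConfigParser()
with open(CONF_PATH) as f:
    cfg.read_file(f)

# Collect any (section, option) pairs that are missing from the file.
missing = [(section, option)
           for section, options in EXPECTED.items()
           for option in options
           if not cfg.has_option(section, option)]
print("missing fields:", missing if missing else "none")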

Comment 7 errata-xmlrpc 2024-06-13 14:23:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

Comment 8 Red Hat Bugzilla 2024-11-16 04:25:13 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

