Bug 2247237 - mgr/cephadm: default nvmeof cpumask + fixing tgt_cmd_extra_args quoting
Summary: mgr/cephadm: default nvmeof cpumask + fixing tgt_cmd_extra_args quoting
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 7.0
Hardware: All
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 7.0
Assignee: Adam King
QA Contact: Mohit Bisht
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-10-31 13:32 UTC by Aviv Caro
Modified: 2023-12-13 15:24 UTC
CC: 6 users

Fixed In Version: ceph-18.2.0-110.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-12-13 15:24:44 UTC
Embargoed:




Links
System ID                               Last Updated
Red Hat Issue Tracker RHCEPH-7836       2023-10-31 13:33:22 UTC
Red Hat Product Errata RHBA-2023:7780   2023-12-13 15:24:47 UTC

Description Aviv Caro 2023-10-31 13:32:16 UTC
Description of problem: We need to set the default SPDK cpumask to 0xF (it is currently 0x1, i.e. a single core). There is no risk in this change, and it allows much better performance when using the nvmeof GW in this release (Tech Preview).
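
For context, a minimal sketch of the argument handling that the "fixing tgt_cmd_extra_args quoting" part of the title implies. This is my assumption, not the actual mgr/cephadm patch: it assumes the gateway shell-splits tgt_cmd_extra_args before appending it to the nvmf_tgt invocation, and build_tgt_cmd is a hypothetical name.

import shlex

# Illustrative sketch only (not the actual mgr/cephadm code): split
# tgt_cmd_extra_args shell-style before appending it to the command.
def build_tgt_cmd(tgt_path: str, rpc_socket: str, extra_args: str) -> list:
    cmd = [tgt_path, "-u", "-r", rpc_socket]
    if extra_args:
        # shlex.split honors shell quoting, so quoted values arrive
        # as single argv entries instead of being mangled.
        cmd += shlex.split(extra_args)
    return cmd

# Matches the command line later visible in the GW log:
# /usr/local/bin/nvmf_tgt -u -r /var/tmp/spdk.sock --cpumask=0xF
print(build_tgt_cmd("/usr/local/bin/nvmf_tgt", "/var/tmp/spdk.sock", "--cpumask=0xF"))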

Comment 5 Rahul Lepakshi 2023-11-07 04:38:40 UTC
I will be verifying this BZ today.

Comment 6 Rahul Lepakshi 2023-11-07 16:01:01 UTC
There are 4 reactors by default on deploying the GW; moving this BZ to VERIFIED. (A small parsing sketch follows the reactor dump below.)

# /usr/libexec/spdk/scripts/rpc.py framework_get_reactors
{
  "tick_rate": 2290000000,
  "reactors": [
    {
      "lcore": 0,
      "busy": 54049518970,
      "idle": 47576334009680,
      "in_interrupt": false,
      "lw_threads": [
        {
          "name": "app_thread",
          "id": 1,
          "cpumask": "1",
          "elapsed": 47630385897572
        },
        {
          "name": "nvmf_tgt_poll_group_3",
          "id": 5,
          "cpumask": "f",
          "elapsed": 47630219243592
        }
      ]
    },
    {
      "lcore": 1,
      "busy": 16436574,
      "idle": 47630367784788,
      "in_interrupt": false,
      "lw_threads": [
        {
          "name": "nvmf_tgt_poll_group_0",
          "id": 2,
          "cpumask": "f",
          "elapsed": 47630220968764
        }
      ]
    },
    {
      "lcore": 2,
      "busy": 1232812,
      "idle": 47630382527836,
      "in_interrupt": false,
      "lw_threads": [
        {
          "name": "nvmf_tgt_poll_group_1",
          "id": 3,
          "cpumask": "f",
          "elapsed": 47630220897268
        }
      ]
    },
    {
      "lcore": 3,
      "busy": 1129014,
      "idle": 47630382570572,
      "in_interrupt": false,
      "lw_threads": [
        {
          "name": "nvmf_tgt_poll_group_2",
          "id": 4,
          "cpumask": "f",
          "elapsed": 47630220076266
        }
      ]
    }
  ]
}
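
For anyone repeating this verification, a small sketch that counts reactors via the same RPC; it assumes the rpc.py path used above and the JSON shape shown.

import json
import subprocess

# Verification sketch: ask SPDK for its reactors and count them.
out = subprocess.check_output(
    ["/usr/libexec/spdk/scripts/rpc.py", "framework_get_reactors"]
)
reactors = json.loads(out)["reactors"]
print(len(reactors), "reactors on lcores", [r["lcore"] for r in reactors])
# Expected with --cpumask=0xF: 4 reactors on lcores [0, 1, 2, 3]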


Conf file created by cephadm:
[root@ceph-1sunilkumar-4q4o0k-node6 app]# cat ceph-nvmeof.conf
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd
group = None
addr = 10.0.211.237
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5
enable_spdk_discovery_controller = true

[ceph]
pool = rbd
config_file = /etc/ceph/ceph.conf
id = nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket = /var/tmp/spdk.sock
timeout = 60
log_level = WARN
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
tgt_cmd_extra_args = --cpumask=0xF
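
As a side note on the mask arithmetic: 0xF is binary 1111, i.e. logical cores 0 through 3, which lines up with the four reactors above and the "Total cores available: 4" line in the GW log below. A one-liner to decode any mask:

mask = 0xF  # value from tgt_cmd_extra_args above
cores = [i for i in range(mask.bit_length()) if (mask >> i) & 1]
print(cores)  # [0, 1, 2, 3]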

GW log:
Nov 07 05:13:43 ceph-1sunilkumar-4q4o0k-node6 systemd[1]: Starting Ceph nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd for 69725e72-7d40-11ee-844a-fa163e7c0127...
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 podman[35452]:
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 podman[35452]: 2023-11-07 05:13:44.099364292 -0500 EST m=+0.035597795 container create 39702f699a6697f5568a96bcfdc0d7db51227edfdff31d0654f9efb30ad0a0ae (image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:0.0.5-1, name=ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd, version=0.0.5, name=ceph-nvmeof, io.k8s.description=Ceph NVMe over Fabrics Gateway, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ceph-nvmeof/images/0.0.5-1, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=53cedde026dbd5f359ab38d3ddb1fca43c2284a7, com.redhat.component=ceph-nvmeof-container, summary=Service to provide block storage on top of Ceph for platforms (e.g.: VMWare) without native Ceph support (RBD), replacing existing approaches (iSCSI) with a newer and more versatile standard (NVMe-oF)., io.openshift.expose-services=, build-date=2023-10-31T14:36:28, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Alexander Indenbaum <aindenba>, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.29.0, description=Ceph NVMe over Fabrics Gateway, release=1)
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 podman[35452]: 2023-11-07 05:13:44.133545241 -0500 EST m=+0.069778744 container init 39702f699a6697f5568a96bcfdc0d7db51227edfdff31d0654f9efb30ad0a0ae (image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:0.0.5-1, name=ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd, release=1, vendor=Red Hat, Inc., summary=Service to provide block storage on top of Ceph for platforms (e.g.: VMWare) without native Ceph support (RBD), replacing existing approaches (iSCSI) with a newer and more versatile standard (NVMe-oF)., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ceph-nvmeof/images/0.0.5-1, build-date=2023-10-31T14:36:28, vcs-ref=53cedde026dbd5f359ab38d3ddb1fca43c2284a7, architecture=x86_64, name=ceph-nvmeof, com.redhat.component=ceph-nvmeof-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, version=0.0.5, io.k8s.description=Ceph NVMe over Fabrics Gateway, vcs-type=git, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=Ceph NVMe over Fabrics Gateway, maintainer=Alexander Indenbaum <aindenba>, io.openshift.tags=minimal rhel9)
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 podman[35452]: 2023-11-07 05:13:44.141313476 -0500 EST m=+0.077546998 container start 39702f699a6697f5568a96bcfdc0d7db51227edfdff31d0654f9efb30ad0a0ae (image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:0.0.5-1, name=ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd, com.redhat.component=ceph-nvmeof-container, summary=Service to provide block storage on top of Ceph for platforms (e.g.: VMWare) without native Ceph support (RBD), replacing existing approaches (iSCSI) with a newer and more versatile standard (NVMe-oF)., architecture=x86_64, io.openshift.tags=minimal rhel9, io.buildah.version=1.29.0, build-date=2023-10-31T14:36:28, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, maintainer=Alexander Indenbaum <aindenba>, vendor=Red Hat, Inc., description=Ceph NVMe over Fabrics Gateway, io.k8s.description=Ceph NVMe over Fabrics Gateway, version=0.0.5, name=ceph-nvmeof, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ceph-nvmeof/images/0.0.5-1, vcs-ref=53cedde026dbd5f359ab38d3ddb1fca43c2284a7, release=1, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 bash[35452]: 39702f699a6697f5568a96bcfdc0d7db51227edfdff31d0654f9efb30ad0a0ae
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 podman[35452]: 2023-11-07 05:13:44.08534474 -0500 EST m=+0.021578242 image pull  registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:0.0.5-1
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 systemd[1]: Started Ceph nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd for 69725e72-7d40-11ee-844a-fa163e7c0127.
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.server:Starting gateway client.nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.server:Starting serve
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.server:Configuring server client.nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.server:SPDK Target Path: /usr/local/bin/nvmf_tgt
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.server:SPDK Socket: /var/tmp/spdk.sock
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.server:Starting /usr/local/bin/nvmf_tgt -u -r /var/tmp/spdk.sock --cpumask=0xF
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.server:Attempting to initialize SPDK: rpc_socket: /var/tmp/spdk.sock, conn_retries: 300, timeout: 60.0
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO: Setting log level to WARN
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:JSONRPCClient(/var/tmp/spdk.sock):Setting log level to WARN
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.461431] Starting SPDK v23.01.1 / DPDK 22.11.0 initialization...
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.461717] [ DPDK EAL parameters: nvmf --no-shconf -c 0xF --no-pci --huge-unlink --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3 ]
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: TELEMETRY: No legacy callbacks, legacy socket not created
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.589479] app.c: 712:spdk_app_start: *NOTICE*: Total cores available: 4
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.636928] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 1
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.637111] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 2
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.637145] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 3
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.637147] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 0
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.684216] accel_sw.c: 681:sw_accel_module_init: *NOTICE*: Accel framework software module initialized.
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.server:create_transport: tcp options: {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 10:13:44.815499] tcp.c: 629:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.server:Using SPDK discovery service
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.state:First gateway: created object nvmeof.None.state
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Using configuration file ceph-nvmeof.conf
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Configuration file content:
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:============================================================================
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:# This file is generated by cephadm.
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:[gateway]
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:name = client.nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:group = None
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:addr = 10.0.211.237
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:port = 5500
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:enable_auth = False
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:state_update_notify = True
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:state_update_interval_sec = 5
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:enable_spdk_discovery_controller = true
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:[ceph]
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:pool = rbd
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:config_file = /etc/ceph/ceph.conf
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:id = nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:[mtls]
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:server_key = ./server.key
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:client_key = ./client.key
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:server_cert = ./server.crt
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:client_cert = ./client.crt
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:[spdk]
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:tgt_path = /usr/local/bin/nvmf_tgt
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:rpc_socket = /var/tmp/spdk.sock
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:timeout = 60
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:log_level = WARN
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:conn_retries = 10
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:transports = tcp
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:tgt_cmd_extra_args = --cpumask=0xF
Nov 07 05:13:44 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:============================================================================
Nov 07 08:29:45 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Received request to create subsystem nqn.2016-06.io.spdk:cnode2
Nov 07 08:29:45 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:create_subsystem nqn.2016-06.io.spdk:cnode2: True
Nov 07 08:29:45 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.state:omap_key generated: subsystem_nqn.2016-06.io.spdk:cnode2
Nov 07 08:30:24 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Received request to create client.nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd TCP listener for nqn.2016-06.io.spdk:cnode2 at 10.0.211.237:5002.
Nov 07 08:30:24 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 13:30:24.806332] tcp.c: 850:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.211.237 port 5002 ***
Nov 07 08:30:24 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:create_listener: True
Nov 07 08:30:24 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.state:omap_key generated: listener_nqn.2016-06.io.spdk:cnode2_client.nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd_TCP_10.0.211.237_5002
Nov 07 08:30:42 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Received request to allow any host to nqn.2016-06.io.spdk:cnode2
Nov 07 08:30:42 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:add_host *: True
Nov 07 08:30:42 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.state:omap_key generated: host_nqn.2016-06.io.spdk:cnode2_*
Nov 07 08:31:26 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Received request to create bdev 3CZF-bdev0 from rbd/3CZF-image0 with block size 512
Nov 07 08:31:26 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Allocating cluster name='cluster_context_0'
Nov 07 08:31:26 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 13:31:26.703399] bdev_rbd.c:1199:bdev_rbd_create: *NOTICE*: Add 3CZF-bdev0 rbd disk to lun
Nov 07 08:31:26 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:create_bdev: 3CZF-bdev0
Nov 07 08:31:26 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.state:omap_key generated: bdev_3CZF-bdev0
Nov 07 08:31:50 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Received request to add 3CZF-bdev0 to nqn.2016-06.io.spdk:cnode2
Nov 07 08:31:50 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:add_namespace: 1
Nov 07 08:31:50 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: DEBUG:control.state:omap_key generated: namespace_nqn.2016-06.io.spdk:cnode2_1
Nov 07 08:33:35 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: [2023-11-07 13:33:35.210897] subsystem.c:1201:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.211.237/5002, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
Nov 07 09:50:49 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:Received request to get subsystems
Nov 07 09:50:49 ceph-1sunilkumar-4q4o0k-node6 ceph-69725e72-7d40-11ee-844a-fa163e7c0127-nvmeof-rbd-ceph-1sunilkumar-4q4o0k-node6-uwdvqd[35463]: INFO:control.grpc:get_subsystems: [{'nqn': 'nqn.2014-08.org.nvmexpress.discovery', 'subtype': 'Discovery', 'listen_addresses': [], 'allow_any_host': True, 'hosts': []}, {'nqn': 'nqn.2016-06.io.spdk:cnode2', 'subtype': 'NVMe', 'listen_addresses': [{'transport': 'TCP', 'trtype': 'TCP', 'adrfam': 'IPv4', 'traddr': '10.0.211.237', 'trsvcid': '5002'}], 'allow_any_host': True, 'hosts': [], 'serial_number': '2', 'model_number': 'SPDK bdev Controller', 'max_namespaces': 32, 'min_cntlid': 1, 'max_cntlid': 65519, 'namespaces': [{'nsid': 1, 'bdev_name': '3CZF-bdev0', 'name': '3CZF-bdev0', 'nguid': '694D5AAB42F54BDA9837993512D99229', 'uuid': '694d5aab-42f5-4bda-9837-993512d99229'}]}]

Comment 7 Rahul Lepakshi 2023-11-07 16:02:24 UTC
Verified at the below version:

[ceph: root@ceph-1sunilkumar-4q4o0k-node1-installer /]# ceph version
ceph version 18.2.0-117.el9cp (7e71aaeb77dd63a7bf8cc3f39dd69b7d151298b0) reef (stable)

Comment 8 Rahul Lepakshi 2023-11-08 05:20:34 UTC
@aviv, what is the expectation for the "+ fixing tgt_cmd_extra_args quoting" part of the title?

In the ceph-nvmeof.conf file below I can see the following; is this the expected quoting? (See the sketch after the config below.)
tgt_cmd_extra_args = --cpumask=0xF

# cat ceph-nvmeof.conf
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd
group = None
addr = 10.0.211.237
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5
enable_spdk_discovery_controller = true

[ceph]
pool = rbd
config_file = /etc/ceph/ceph.conf
id = nvmeof.rbd.ceph-1sunilkumar-4q4o0k-node6.uwdvqd

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket = /var/tmp/spdk.sock
timeout = 60
log_level = WARN
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
tgt_cmd_extra_args = --cpumask=0xF
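
On the quoting question itself, a hedged illustration (my reading of the fix, not confirmed by its author): with shell-style splitting, quotes around simple values are optional, so the unquoted --cpumask=0xF emitted above is a valid form.

import shlex

# All three spellings collapse to the same single argv entry once
# split shell-style, so the unquoted conf value above is fine.
for raw in ('--cpumask=0xF', '--cpumask="0xF"', "'--cpumask=0xF'"):
    print(f"{raw!r:24} -> {shlex.split(raw)}")
# '--cpumask=0xF'     -> ['--cpumask=0xF']
# '--cpumask="0xF"'   -> ['--cpumask=0xF']
# "'--cpumask=0xF'"   -> ['--cpumask=0xF']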

Comment 10 Aviv Caro 2023-11-21 10:24:01 UTC
I don't think there is anything to add to the 7.0 release notes (RN), as this is fixed in 7.0.

Comment 13 errata-xmlrpc 2023-12-13 15:24:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

