Bug 2240580 - [IPv6 compatibility] Unable to deploy nvmeof service on ceph cluster deployed using IPv6 address
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NVMeOF
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 9.1
Assignee: Aviv Caro
QA Contact: JAYA PRAKASH P
Docs Contact: ceph-doc-bot
URL:
Whiteboard:
Depends On:
Blocks: 2317218
 
Reported: 2023-09-25 11:03 UTC by Rahul Lepakshi
Modified: 2025-08-04 13:52 UTC
CC: 8 users

Fixed In Version: ceph-nvmeof-1.3.3
Doc Type: Release Note
Doc Text:
IPv6 is not supported for nvmeof GW deployment on 8.0
Clone Of:
Environment:
Last Closed:
Embargoed:
rlepaksh: needinfo-
rlepaksh: needinfo-
rlepaksh: needinfo-




Links:
Red Hat Issue Tracker RHCEPH-7536 (last updated 2023-09-25 11:06:47 UTC)

Description Rahul Lepakshi 2023-09-25 11:03:26 UTC
Description of problem:
Unable to deploy the nvmeof service on a ceph cluster bootstrapped using an IPv6 address

ceph version 18.0.0-6366-g2015892b (2015892b5832dcdc27c53a96056485e6013c006e) reef (dev)

[root@cali005 ~]# podman pull quay.io/ceph/nvmeof:0.0.3
Trying to pull quay.io/ceph/nvmeof:0.0.3...
Getting image source signatures
Copying blob a4e7c653afb4 skipped: already exists
Copying blob 6989a1c886e5 skipped: already exists
Copying blob 06bba564ae5b skipped: already exists
Copying blob a841c30977c7 skipped: already exists
Copying blob 3715913301d6 skipped: already exists
Copying blob a6d315934e78 skipped: already exists
Copying blob b0bde5ce55ef skipped: already exists
Copying config 470dd4ee78 done
Writing manifest to image destination
Storing signatures
470dd4ee78f06cb4bea675003f5e9932a1c35f0a33b7fc19cadd2909cc02548a

[ceph: root@cali001 /]# ceph orch apply nvmeof rbd --placement="cali005"
Scheduled nvmeof.rbd update...

[ceph: root@cali001 /]# ceph orch ls
NAME                       PORTS             RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094           1/1  -          41m  count:1
ceph-exporter                                    5/5  -          41m  *
crash                                            5/5  -          41m  *
grafana                    ?:3000                1/1  -          41m  count:1
mgr                                              2/2  -          41m  count:2
mon                                              5/5  -          41m  count:5
node-exporter              ?:9100                5/5  -          41m  *
nvmeof.rbd                 ?:4420,5500,8009      0/1  -          16m  cali005
osd.all-available-devices                         25  -          25m  *
prometheus                 ?:9095                1/1  -          41m  count:1

[ceph: root@cali001 /]# ceph orch ps
mgr.cali001.ippkmp         cali001  *:9283,8765,8443  running (35m)          -  35m     513M        -  18.0.0-6366-g2015892b  1fc148641c22  bd746cba6dc9
mgr.cali004.qwkhtr         cali004  *:8443,9283,8765  running (29m)          -  29m     444M        -  18.0.0-6366-g2015892b  1fc148641c22  9c6e3c545046
mon.cali001                cali001                    running (35m)          -  35m    61.4M    2048M  18.0.0-6366-g2015892b  1fc148641c22  dec679b511a0
mon.cali004                cali004                    running (29m)          -  29m    58.7M    2048M  18.0.0-6366-g2015892b  1fc148641c22  f8ae67a07ac8
mon.cali005                cali005                    running (20m)          -  20m    47.8M    2048M  18.0.0-6366-g2015892b  1fc148641c22  795ec92a485d
mon.cali008                cali008                    running (19m)          -  19m    54.4M    2048M  18.0.0-6366-g2015892b  1fc148641c22  6612c20c4ea1
mon.cali010                cali010                    running (18m)          -  18m    41.2M    2048M  18.0.0-6366-g2015892b  1fc148641c22  1f4ffda87f09
nvmeof.rbd.cali005.oqcngh  cali005  *:5500,4420,8009  unknown                -  10m        -        -  <unknown>              <unknown>     <unknown>
osd.0                      cali005                    running (17m)          -  17m    47.8M    13.8G  18.0.0-6366-g2015892b  1fc148641c22  6c2e0c68ccc7
osd.1                      cali010                    running (17m)          -  17m    45.2M    16.8G  18.0.0-6366-g2015892b  1fc148641c22  98961119a4d5
osd.2                      cali008                    running (17m)          -  17m    52.8M    21.0G  18.0.0-6366-g2015892b  1fc148641c22  347059f36ba2
osd.3                      cali004                    running (18m)          -  18m    47.8M    26.7G  18.0.0-6366-g2015892b  1fc148641c22  7ccadf317be9
osd.4                      cali001                    running (17m)          -  17m    43.0M    11.0G  18.0.0-6366-g2015892b  1fc148641c22  674cd80f8a2c

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Bootstrap a ceph cluster with a mon IP belonging to the IPv6 address family
2. Deploy mon, mgr and osd services - successful
3. Try pulling the nvmeof image - successful
4. Try deploying the nvmeof service with: ceph orch apply nvmeof rbd --placement="cali005"
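
For reference, step 1 amounts to bootstrapping with an IPv6 mon IP along these lines (the address shown is the cali001 mon address taken from the ceph.conf in comment 2):

# cephadm bootstrap --mon-ip 2620:52:0:880:b696:91ff:fece:e384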

Actual results: the nvmeof service does not get deployed


Expected results: the nvmeof service should be IPv6 compatible and deploy successfully


Additional info:

Comment 1 Aviv Caro 2023-09-26 15:23:02 UTC
Rahul, can you add some logs? What is failing?

Comment 2 Rahul Lepakshi 2023-09-27 06:21:02 UTC
Hi Aviv, 

Providing logs. The service is failing to bind to the IPv6 address. Based on that, IMO we could move this BZ to the cephadm component; please suggest.

Journalctl --->
# journalctl -u ceph-46c53906-5b8c-11ee-8719-b49691cee384.cali005.oqcngh.service
Sep 25 10:40:08 cali005 systemd[1]: Starting Ceph nvmeof.rbd.cali005.oqcngh for 46c53906-5b8c-11ee-8719-b49691cee384...
Sep 25 10:40:08 cali005 bash[44624]: Trying to pull quay.io/ceph/nvmeof:0.0.3...
Sep 25 10:40:09 cali005 bash[44624]: Getting image source signatures
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:a4e7c653afb4c1c2c9ba57a7abf54ab4f496f6a4e2282546679d244a29350ae9
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:3715913301d65803af420e8191a52dcf9acdcd5b7a076a1c4eb784d76397424e
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:b0bde5ce55efe317b70dd6807c808fbc69a2f26b3e4e9eec5745d76d32a2f6fe
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:06bba564ae5b7f33b8ce87b1920e373db36a1af10a74abd7860688c896c584f5
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:a841c30977c7a8f1922fd3c582ebf003bb73ce84c0fd9616f85b653839d1bafe
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:6989a1c886e5ac18b6267e87f425875ae7362274da6133a1bd6d7f8b82b1f5cb
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:a6d315934e783fe9aa44b70a2e10fc1d1b5b4e9b66e5e79765d6a38fd2159b64
Sep 25 10:40:15 cali005 bash[44624]: Copying config sha256:470dd4ee78f06cb4bea675003f5e9932a1c35f0a33b7fc19cadd2909cc02548a
Sep 25 10:40:15 cali005 bash[44624]: Writing manifest to image destination
Sep 25 10:40:15 cali005 bash[44624]: Storing signatures
Sep 25 10:40:15 cali005 podman[44624]:
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:15.460476409 +0000 UTC m=+7.099066097 container create cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd (image=quay.io/ceph/nvmeof:0.0.3>
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:08.382438777 +0000 UTC m=+0.021028466 image pull  quay.io/ceph/nvmeof:0.0.3
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:15.50866632 +0000 UTC m=+7.147256007 container init cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd (image=quay.io/ceph/nvmeof:0.0.3, n>
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:15.511719786 +0000 UTC m=+7.150309481 container start cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd (image=quay.io/ceph/nvmeof:0.0.3,>
Sep 25 10:40:15 cali005 bash[44624]: cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd
Sep 25 10:40:15 cali005 systemd[1]: Started Ceph nvmeof.rbd.cali005.oqcngh for 46c53906-5b8c-11ee-8719-b49691cee384.
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:Starting gateway client.nvmeof.rbd.cali005.oqcngh
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: DEBUG:control.server:Starting serve
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: DEBUG:control.server:Configuring server client.nvmeof.rbd.cali005.oqcngh
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:SPDK Target Path: /usr/local/bin/nvmf_tgt
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:SPDK Socket: /var/tmp/spdk.sock
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:Starting /usr/local/bin/nvmf_tgt -u -r /var/tmp/spdk.sock
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:Attempting to initialize SPDK: rpc_socket: /var/tmp/spdk.sock, conn_retries: 300, timeou>
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO: Setting log level to WARN
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:JSONRPCClient(/var/tmp/spdk.sock):Setting log level to WARN
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.720280] Starting SPDK v23.01.1 / DPDK 22.11.1 initialization...
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.720343] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --no-pci --huge-unlink --log-lev>
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: TELEMETRY: No legacy callbacks, legacy socket not created
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.830386] app.c: 712:spdk_app_start: *NOTICE*: Total cores available: 1
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.871749] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 0
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.903814] accel_sw.c: 681:sw_accel_module_init: *NOTICE*: Accel framework software module>
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: DEBUG:control.server:create_transport: tcp options: {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr">
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:16.114947] tcp.c: 629:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.state:First gateway: created object nvmeof.state
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: E0925 10:40:16.156623077       2 chttp2_server.cc:1045]                UNKNOWN:Name or service not known {gr>
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: ERROR:control.server:GatewayServer exception occurred:
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: Traceback (most recent call last):
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/control/__main__.py", line 35, in <module>
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     gateway.serve()
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/control/server.py", line 108, in serve
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     self._add_server_listener()
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/control/server.py", line 146, in _add_server_listener
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     self.server.add_insecure_port("{}:{}".format(
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/__pypackages__/3.9/lib/grpc/_server.py", line 1101, in add_insecure_port
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     return _common.validate_port_binding_result(
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/__pypackages__/3.9/lib/grpc/_common.py", line 175, in validate_port_binding_result
...skipping...
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.502870] app.c: 712:spdk_app_start: *NOTICE*: Total cores available: 1
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.539937] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 0
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.569850] accel_sw.c: 681:sw_accel_module_init: *NOTICE*: Accel framework software module>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: DEBUG:control.server:create_transport: tcp options: {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr">
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.588555] tcp.c: 629:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.state:nvmeof.state omap object already exists.
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: E0925 10:41:01.630135302       2 chttp2_server.cc:1045]                UNKNOWN:Name or service not known {cr>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: ERROR:control.server:GatewayServer exception occurred:
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: Traceback (most recent call last):
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/__main__.py", line 35, in <module>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     gateway.serve()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 108, in serve
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self._add_server_listener()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 146, in _add_server_listener
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self.server.add_insecure_port("{}:{}".format(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_server.py", line 1101, in add_insecure_port
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     return _common.validate_port_binding_result(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_common.py", line 175, in validate_port_binding_result
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     raise RuntimeError(_ERROR_MESSAGE_PORT_BINDING_FAILED % address)
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: RuntimeError: Failed to bind to address 2620:52:0:880:b696:91ff:fece:e844:5500; set GRPC_VERBOSITY=debug env>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.server:Terminating SPDK(client.nvmeof.rbd.cali005.oqcngh) pid 3...
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.server:Stopping the server...
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.server:Exiting the gateway process.
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: Traceback (most recent call last):
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     return _run_code(code, main_globals, None,
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     exec(code, run_globals)
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/__main__.py", line 35, in <module>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     gateway.serve()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 108, in serve
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self._add_server_listener()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 146, in _add_server_listener
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self.server.add_insecure_port("{}:{}".format(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_server.py", line 1101, in add_insecure_port
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     return _common.validate_port_binding_result(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_common.py", line 175, in validate_port_binding_result
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     raise RuntimeError(_ERROR_MESSAGE_PORT_BINDING_FAILED % address)
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: RuntimeError: Failed to bind to address 2620:52:0:880:b696:91ff:fece:e844:5500; set GRPC_VERBOSITY=debug env>
Sep 25 10:41:01 cali005 podman[46774]: 2023-09-25 10:41:01.796895848 +0000 UTC m=+0.024335577 container died 110326e7efe571cbc6c26038aee94d9c7db6ba95bbccf8c09f1b6524f1bbc019 (image=quay.io/ceph/nvmeof:0.0.3, >
Sep 25 10:41:01 cali005 podman[46774]: 2023-09-25 10:41:01.808635559 +0000 UTC m=+0.036075281 container remove 110326e7efe571cbc6c26038aee94d9c7db6ba95bbccf8c09f1b6524f1bbc019 (image=quay.io/ceph/nvmeof:0.0.3>
Sep 25 10:41:01 cali005 systemd[1]: ceph-46c53906-5b8c-11ee-8719-b49691cee384.cali005.oqcngh.service: Main process exited, code=exited, status=1/FAILURE
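
For context on the traceback above: grpc's add_insecure_port() takes a single "host:port" string, so an unbracketed IPv6 literal such as 2620:52:0:880:b696:91ff:fece:e844:5500 cannot be split unambiguously into host and port (hence "Name or service not known" and the bind failure); gRPC expects IPv6 literals in [addr]:port form. A minimal sketch of address formatting that avoids this (function name hypothetical, not necessarily the actual ceph-nvmeof fix):

```python
import ipaddress

def format_grpc_address(addr: str, port: int) -> str:
    """Bracket IPv6 literals so the "host:port" string parses unambiguously."""
    try:
        is_v6 = ipaddress.ip_address(addr).version == 6
    except ValueError:
        is_v6 = False  # a hostname, not an IP literal
    return "[{}]:{}".format(addr, port) if is_v6 else "{}:{}".format(addr, port)

# e.g. server.add_insecure_port(format_grpc_address(addr, port))
```

With something like this, the gateway would hand [2620:52:0:880:b696:91ff:fece:e844]:5500 to gRPC instead of the unparseable form in the log.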


ceph-nvmeof.conf file ---->
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.rbd.cali005.oqcngh
group =
addr = 2620:52:0:880:b696:91ff:fece:e844
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5

[ceph]
pool = rbd
config_file = /etc/ceph/ceph.conf
id = nvmeof.rbd.cali005.oqcngh

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket = /var/tmp/spdk.sock
timeout = 60
log_level = WARN
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}


ceph.conf file --->
# minimal ceph.conf for 46c53906-5b8c-11ee-8719-b49691cee384
[global]
        fsid = 46c53906-5b8c-11ee-8719-b49691cee384
        mon_host = [v2:[2620:52:0:880:b696:91ff:fece:e384]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e384]:6789/0] [v2:[2620:52:0:880:b696:91ff:fecd:dbae]:3300/0,v1:[2620:52:0:880:b696:91ff:fecd:dbae]:6789/0] [v2:[2620:52:0:880:b696:91ff:fece:e844]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e844]:6789/0] [v2:[2620:52:0:880:b696:91ff:fece:e01c]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e01c]:6789/0] [v2:[2620:52:0:880:b696:91ff:fece:e1b0]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e1b0]:6789/0]


ceph keyring file --->
[client.nvmeof.rbd.cali005.oqcngh]
key = AQCHYxFlHO7EFRAAHQgMlzfUufRrlZ7JmO5Kpg==

Comment 3 Aviv Caro 2023-10-03 07:22:43 UTC
This is handled in https://github.com/ceph/ceph-nvmeof/issues/247, but for the TP we probably won't have a solution. This should be listed as a limitation for 7.0.

Comment 4 Rahul Lepakshi 2023-10-04 03:45:36 UTC
Moving this to 7.1 as per above comment and https://ibm-systems-storage.slack.com/archives/C04QC5EGBPU/p1696352787558109?thread_ts=1696317891.811259&cid=C04QC5EGBPU

Comment 6 Aviv Caro 2024-01-21 13:56:55 UTC
Rahul, can you try to reproduce on 0.0.7 or later? This should be fixed.

Comment 8 Ken Dreyer (Red Hat) 2024-01-30 15:09:42 UTC
We need 0.0.7 (or a newer version) downstream. https://pkgs.devel.redhat.com/cgit/containers/ceph-nvmeof/log/?h=ceph-7.0-rhel-9 has 0.0.5.

Comment 9 Aviv Caro 2024-04-02 19:18:37 UTC
IPv6 is not something we committed to support in 7.1.

Comment 12 Rahul Lepakshi 2024-10-11 08:34:07 UTC
Tested this BZ at the version below. The fix fails, so I am moving the BZ back to ASSIGNED: deployment succeeds, but the CLI commands do not work.


[root@cali001 ~]# ceph version
ceph version 19.2.0-13.el9cp (03c59f30512cbb677ecb6e67e94566b8e8fe57e4) squid (stable)
[root@cali001 ~]# ceph config dump | grep nvmeof
mgr                   advanced  mgr/cephadm/container_image_nvmeof     cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.3.2-13


[root@cali001 ~]# ceph orch ls | grep nvmeof
nvmeof.nvmeof_pool.group1  ?:4420,5500,8009      8/8  10m ago    40m  cali003;cali004;cali005;cali006;cali007;cali008;cali009;cali010
[root@cali001 ~]# ceph orch ps | grep nvmeof
nvmeof.nvmeof_pool.group1.cali003.ppjqlo  cali003  *:5500,4420,8009  running (39m)    82s ago  39m     106M        -                   dde5543275f2  da008ad28586
nvmeof.nvmeof_pool.group1.cali004.otreck  cali004  *:5500,4420,8009  running (37m)    82s ago  37m     100M        -                   dde5543275f2  d3b980c22f37
nvmeof.nvmeof_pool.group1.cali005.jnxaaj  cali005  *:5500,4420,8009  running (38m)     7s ago  38m     106M        -                   dde5543275f2  9b0c96689e23
nvmeof.nvmeof_pool.group1.cali006.xbofuq  cali006  *:5500,4420,8009  running (37m)     6s ago  37m     100M        -                   dde5543275f2  939f461dcaed
nvmeof.nvmeof_pool.group1.cali007.ozeqph  cali007  *:5500,4420,8009  running (35m)     5m ago  35m     107M        -                   dde5543275f2  c47cf32a7fd9
nvmeof.nvmeof_pool.group1.cali008.vohkbm  cali008  *:5500,4420,8009  running (36m)    82s ago  36m     104M        -                   dde5543275f2  c0044954d8bc
nvmeof.nvmeof_pool.group1.cali009.vjmllf  cali009  *:5500,4420,8009  running (38m)    82s ago  38m     101M        -                   dde5543275f2  a98da94d83ec
nvmeof.nvmeof_pool.group1.cali010.jgiiip  cali010  *:5500,4420,8009  running (36m)    82s ago  36m     106M        -                   dde5543275f2  fd36b5fa1b4c

[root@cali001 ~]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 9,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 8,
    "Anagrp list": "[ 1 2 3 4 5 6 7 8 ]",
    "num-namespaces": 0,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo",
            "anagrp-id": 1,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali004.otreck",
            "anagrp-id": 2,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali005.jnxaaj",
            "anagrp-id": 3,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali006.xbofuq",
            "anagrp-id": 4,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali007.ozeqph",
            "anagrp-id": 5,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali008.vohkbm",
            "anagrp-id": 6,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali009.vjmllf",
            "anagrp-id": 7,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.cali010.jgiiip",
            "anagrp-id": 8,
            "num-namespaces": 0,
            "performed-full-startup": 1,
            "Availability": "CREATED",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY ,  6: STANDBY ,  7: STANDBY ,  8: STANDBY "
        }
    ]
}




On GWs
[root@cali003 ~]# netstat -tunlp | grep 5500
tcp6       0      0 2620:52:0:880:b696:5500 :::*                    LISTEN      3703284/python3

Oct 11 07:50:42 cali003 podman[3702362]: 2024-10-11 07:50:42.942157612 +0000 UTC m=+77.138366935 image pull dde5543275f2ea2e0fd73c752163c18a9450920f2ef928b2b98b4b98f030a485 cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.3.2-13
Oct 11 07:50:42 cali003 systemd[1]: Started Ceph nvmeof.nvmeof_pool.group1.cali003.ppjqlo for acfbbdf6-87a2-11ef-9879-b49691cee384.
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO utils.py:259 (2): Initialize gateway log level to "INFO"
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO utils.py:272 (2): Log files will be saved in /var/log/ceph/nvmeof-client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo, using rotation
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:78 (2): Using NVMeoF gateway version 1.3.2
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:81 (2): Configured SPDK version 24.01.1
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:59 (2): Using configuration file ceph-nvmeof.conf
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:61 (2): ====================================== Configuration file content ======================================
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): # This file is generated by cephadm.
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): [gateway]
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): name = client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): group = group1
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): addr = 2620:52:0:880:b696:91ff:fece:e508
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): port = 5500
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): enable_auth = False
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): state_update_notify = True
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): state_update_interval_sec = 5
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): enable_spdk_discovery_controller = False
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): enable_prometheus_exporter = True
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): prometheus_exporter_ssl = False
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): prometheus_port = 10008
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): verify_nqns = True
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): omap_file_lock_duration = 20
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): omap_file_lock_retries = 30
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): omap_file_lock_retry_sleep_interval = 1.0
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): omap_file_update_reloads = 10
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): allowed_consecutive_spdk_ping_failures = 1
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): spdk_ping_interval_in_seconds = 2.0
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): ping_spdk_under_lock = False
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): enable_monitor_client = True
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2):
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): [gateway-logs]
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): log_level = INFO
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): log_files_enabled = True
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): log_files_rotation_enabled = True
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): verbose_log_messages = True
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): max_log_file_size_in_mb = 10
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): max_log_files_count = 20
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): max_log_directory_backups = 10
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): log_directory = /var/log/ceph/
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2):
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): [discovery]
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): addr = 2620:52:0:880:b696:91ff:fece:e508
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): port = 8009
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2):
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): [ceph]
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): pool = nvmeof_pool
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): config_file = /etc/ceph/ceph.conf
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): id = nvmeof.nvmeof_pool.group1.cali003.ppjqlo
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2):
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): [mtls]
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): server_key = /server.key
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): client_key = /client.key
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): server_cert = /server.cert
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): client_cert = /client.cert
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): root_ca_cert = /root.ca.cert
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2):
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): [spdk]
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): tgt_path = /usr/local/bin/nvmf_tgt
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): rpc_socket_dir = /var/tmp/
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): rpc_socket_name = spdk.sock
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): timeout = 60.0
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): bdevs_per_cluster = 32
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): log_level = WARNING
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): conn_retries = 10
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): transports = tcp
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2):
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): [monitor]
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:65 (2): timeout = 1.0
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO config.py:66 (2): ========================================================================================================
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO server.py:111 (2): Starting gateway client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO server.py:172 (2): Starting serve, monitor client version: ceph version 19.2.0-13.el9cp (03c59f30512cbb677ecb6e67e94566b8e8fe57e4) squid (stable)
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO state.py:395 (2): First gateway: created object nvmeof.group1.state
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO server.py:275 (2): Starting /usr/bin/ceph-nvmeof-monitor-client --gateway-name client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo --gateway-address [2620:52:0:880:b696:91ff:fece:e508]:5500 --gateway-pool nvmeof_pool --gateway-group group1 --monitor-group-address [2620:52:0:880:b696:91ff:fece:e508]:5499 -c /etc/ceph/ceph.conf -n client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo -k /etc/ceph/keyring
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO server.py:279 (2): monitor client process id: 19
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:50:43] INFO server.py:161 (2): MonitorGroup server is listening on [2620:52:0:880:b696:91ff:fece:e508]:5499 for group id
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: 2024-10-11T07:50:43.272+0000 7f31c039b640  0 ms_deliver_dispatch: unhandled message 0x562325641180 nvmeofgwmap magic: 0 from mon.0 v2:[2620:52:0:880:b696:91ff:fece:e384]:3300/0
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: 2024-10-11T07:50:43.272+0000 7f31c039b640  0 ms_deliver_dispatch: unhandled message 0x56232562fd40 mon_map magic: 0 from mon.0 v2:[2620:52:0:880:b696:91ff:fece:e384]:3300/0
Oct 11 07:50:43 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: 2024-10-11T07:50:43.273+0000 7f31c039b640  0 ms_deliver_dispatch: unhandled message 0x562325641500 nvmeofgwmap magic: 0 from mon.0 v2:[2620:52:0:880:b696:91ff:fece:e384]:3300/0
Oct 11 07:54:12 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:12] INFO server.py:151 (2): Gateway client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo group id=0
Oct 11 07:54:12 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:12] INFO server.py:164 (2): Stopping the MonitorGroup server...
Oct 11 07:54:12 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:12] INFO server.py:167 (2): The MonitorGroup gRPC server stopped...
Oct 11 07:54:12 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:12] INFO server.py:374 (2): SPDK Target Path: /usr/local/bin/nvmf_tgt
Oct 11 07:54:12 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:12] INFO server.py:390 (2): SPDK Socket: /var/tmp/spdk.sock
Oct 11 07:54:12 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:12] INFO server.py:403 (2): SPDK autodetecting cpu_mask: -m 0xF
Oct 11 07:54:12 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:12] INFO server.py:406 (2): Starting /usr/local/bin/nvmf_tgt -u -r /var/tmp/spdk.sock -m 0xF
Oct 11 07:54:14 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:14] INFO server.py:424 (2): SPDK process id: 397
Oct 11 07:54:14 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:14] INFO server.py:425 (2): Attempting to initialize SPDK: rpc_socket: /var/tmp/spdk.sock, conn_retries: 300, timeout: 60.0, log level: WARNING
Oct 11 07:54:14 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: INFO: Setting log level to WARNING
Oct 11 07:54:14 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:14] INFO client.py:110 (2): Setting log level to WARNING
Oct 11 07:54:14 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:14.987697] Starting SPDK v24.01 / DPDK 23.11.0 initialization...
Oct 11 07:54:14 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:14.987876] [ DPDK EAL parameters: nvmf --no-shconf -c 0xF --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid397 ]
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:15.098881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:15.179259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:15.179274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:15.179291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:15.179295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:15.352344] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO server.py:465 (2): Started SPDK with version "SPDK v24.01"
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO server.py:308 (2): Discovery service process id: 405
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO server.py:303 (405): Starting ceph nvmeof discovery service
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO cephutils.py:73 (2): Connected to Ceph with version "19.2.0-13.el9cp (03c59f30512cbb677ecb6e67e94566b8e8fe57e4) squid (stable)"
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO state.py:398 (405): nvmeof.group1.state OMAP object already exists.
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO grpc.py:176 (2): Requested huge pages count is 2048
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO discovery.py:321 (405): log pages info from omap: nvmeof.group1.state
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO discovery.py:328 (405): discovery addr: 2620:52:0:880:b696:91ff:fece:e508 port: 8009
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO grpc.py:195 (2): Actual huge pages count is 4096
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO state.py:556 (405): Cleanup OMAP on exit (discovery-cali003)
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO grpc.py:318 (2): NVMeoF bdevs per cluster: 32
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] ERROR server.py:119 (405): GatewayServer exception occurred:
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: Traceback (most recent call last):
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:   File "/remote-source/ceph-nvmeof/app/control/__main__.py", line 38, in <module>
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:     gateway.serve()
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:   File "/remote-source/ceph-nvmeof/app/control/server.py", line 191, in serve
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:     self._start_discovery_service()
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:   File "/remote-source/ceph-nvmeof/app/control/server.py", line 305, in _start_discovery_service
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:     discovery.start_service()
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:   File "/remote-source/ceph-nvmeof/app/control/discovery.py", line 1109, in start_service
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]:     self.sock.bind((self.discovery_addr, int(self.discovery_port)))
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: socket.gaierror: [Errno -9] Address family for hostname not supported
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO server.py:483 (405): Terminating sub process of (client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo) pid 19 args ['/usr/bin/ceph-nvmeof-monitor-client', '--gateway-name', 'client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo', '--gateway-address', '[2620:52:0:880:b696:91ff:fece:e508]:5500', '--gateway-pool', 'nvmeof_pool', '--gateway-group', 'group1', '--monitor-group-address', '[2620:52:0:880:b696:91ff:fece:e508]:5499', '-c', '/etc/ceph/ceph.conf', '-n', 'client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo', '-k', '/etc/ceph/keyring'] ...
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO server.py:483 (405): Terminating sub process of (client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo) pid 397 args ['/usr/local/bin/nvmf_tgt', '-u', '-r', '/var/tmp/spdk.sock', '-m', '0xF'] ...
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO state.py:556 (405): Cleanup OMAP on exit (gateway-client.nvmeof.nvmeof_pool.group1.cali003.ppjqlo)
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO server.py:208 (2): Prometheus endpoint is enabled
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO prometheus.py:98 (2): Prometheus exporter running in http mode, listening on port 10008
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO grpc.py:2791 (2): Received request to get gateway's info
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO prometheus.py:131 (2): Stats for all bdevs will be provided
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [2024-10-11 07:54:15.424477] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO grpc.py:2697 (2): Received request to set SPDK nvmf logs: log_level: WARNING, print_level: WARNING
Oct 11 07:54:15 cali003 ceph-acfbbdf6-87a2-11ef-9879-b49691cee384-nvmeof-nvmeof_pool-group1-cali003-ppjqlo[3703278]: [11-Oct-2024 07:54:15] INFO cephutils.py:104 (2): Registered nvmeof_pool.group1.cali003.ppjqlo to service_map!
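
For context on the gaierror above: the discovery service creates its listening socket with a fixed IPv4 family, and converting the IPv6 literal for an AF_INET socket is what raises [Errno -9] Address family for hostname not supported at the sock.bind() call. A minimal sketch of an address-family-aware bind (function name hypothetical, not necessarily how the actual fix was done):

```python
import socket

def bind_discovery_socket(addr: str, port: int) -> socket.socket:
    # getaddrinfo() returns AF_INET6 for IPv6 literals and AF_INET for
    # IPv4, so the socket family always matches the address being bound.
    family, socktype, proto, _, sockaddr = socket.getaddrinfo(
        addr, port, type=socket.SOCK_STREAM)[0]
    sock = socket.socket(family, socktype, proto)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(sockaddr)
    sock.listen()
    return sock
```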


[root@cali003 ~]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.3.2-1 --server-address 2620:52:0:880:b696:91ff:fece:e508 --server-port 5500 subsystem add --subsystem nqn.2016-06.io.spdk:cnode1 -s 1 -m 1024
Failure adding subsystem nqn.2016-06.io.spdk:cnode1:
<_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNAVAILABLE
        details = "failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B2620:52:0:880:b696:91ff:fece:e508%5D:5500: Network is unreachable"
        debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B2620:52:0:880:b696:91ff:fece:e508%5D:5500: Network is unreachable {created_time:"2024-10-11T08:32:41.735223597+00:00", grpc_status:14}"
>
[root@cali003 ~]# podman run --quiet --rm  cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.3.2-1  --server-address 2620:52:0:880:b696:91ff:fece:e508 --server-port 5500 subsystem list
Failure listing subsystems:
<_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNAVAILABLE
        details = "failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B2620:52:0:880:b696:91ff:fece:e508%5D:5500: Network is unreachable"
        debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv6:%5B2620:52:0:880:b696:91ff:fece:e508%5D:5500: Network is unreachable {created_time:"2024-10-11T08:32:50.82722609+00:00", grpc_status:14}"
>
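
Note that the two failures above are client-side: StatusCode.UNAVAILABLE with "Network is unreachable" means the CLI container itself has no IPv6 route to the gateway, which is what you would see when the default podman network has no IPv6 enabled. One way to rule out container networking (assuming the host itself can reach the gateway address) is to run the CLI with host networking:

# podman run --network host --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.3.2-1 --server-address 2620:52:0:880:b696:91ff:fece:e508 --server-port 5500 subsystem list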

Comment 23 Gil Bregman 2025-05-11 12:59:22 UTC
CLI works for me using IPv6. There are just a few steps needed, as it seems cephadm favors IPv4. So I've done:

- docker network create --ipv6 --subnet 2001:0DB8::/112 --subnet 10.243.64.0/24 ip6net
- ceph orch ls nvmeof --export > gil.yaml
- add "  addr: 2001:db8::1" to gil.yaml (see the sketch below)
- ceph orch apply -i gil.yaml
- ceph orch reconfig nvmeof.mypool.mygroup1
- docker run --network ip6net --rm quay.io/ceph/nvmeof-cli:1.5.3 --server-address 2001:db8::1 --server-port 5500 gw version

And I got:
```
Gateway's version: 1.5.3
```
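
For reference, the edited gil.yaml would look roughly like this; only the addr line is the actual change, and the service_id/placement values are illustrative:

```yaml
service_type: nvmeof
service_id: mypool.mygroup1
placement:
  hosts:
  - mygwhost
spec:
  pool: mypool
  group: mygroup1
  addr: 2001:db8::1
```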

@rlepaksh please try and see if it works for you.

