Bug 2240580

Summary: [IPv6 compatibility] Unable to deploy nvmeof service on ceph cluster deployed using IPv6 address
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: NVMeOF
Version: 7.1
Reporter: Rahul Lepakshi <rlepaksh>
Assignee: Aviv Caro <aviv.caro>
QA Contact: JAYA PRAKASH P <jprakash>
Docs Contact: ceph-doc-bot <ceph-doc-bugzilla>
Status: ON_QA
Type: Bug
Severity: urgent
Priority: unspecified
Keywords: External
CC: acaro, aviv.caro, cephqe-warriors, gbregman, kdreyer, tserlin
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 8.1
Flags: rlepaksh: needinfo-
Fixed In Version: ceph-nvmeof-1.3.3
Doc Type: Release Note
Doc Text: IPv6 is not supported for nvmeof GW deployment on 8.0
Bug Blocks: 2317218

Description Rahul Lepakshi 2023-09-25 11:03:26 UTC
Description of problem:
Unable to deploy the nvmeof service on a ceph cluster bootstrapped using an IPv6 address.

ceph version 18.0.0-6366-g2015892b (2015892b5832dcdc27c53a96056485e6013c006e) reef (dev)

[root@cali005 ~]# podman pull quay.io/ceph/nvmeof:0.0.3
Trying to pull quay.io/ceph/nvmeof:0.0.3...
Getting image source signatures
Copying blob a4e7c653afb4 skipped: already exists
Copying blob 6989a1c886e5 skipped: already exists
Copying blob 06bba564ae5b skipped: already exists
Copying blob a841c30977c7 skipped: already exists
Copying blob 3715913301d6 skipped: already exists
Copying blob a6d315934e78 skipped: already exists
Copying blob b0bde5ce55ef skipped: already exists
Copying config 470dd4ee78 done
Writing manifest to image destination
Storing signatures
470dd4ee78f06cb4bea675003f5e9932a1c35f0a33b7fc19cadd2909cc02548a

[ceph: root@cali001 /]# ceph orch apply nvmeof rbd --placement="cali005"
Scheduled nvmeof.rbd update...

[ceph: root@cali001 /]# ceph orch ls
NAME                       PORTS             RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094           1/1  -          41m  count:1
ceph-exporter                                    5/5  -          41m  *
crash                                            5/5  -          41m  *
grafana                    ?:3000                1/1  -          41m  count:1
mgr                                              2/2  -          41m  count:2
mon                                              5/5  -          41m  count:5
node-exporter              ?:9100                5/5  -          41m  *
nvmeof.rbd                 ?:4420,5500,8009      0/1  -          16m  cali005
osd.all-available-devices                         25  -          25m  *
prometheus                 ?:9095                1/1  -          41m  count:1

[ceph: root@cali001 /]# ceph orch ps
mgr.cali001.ippkmp         cali001  *:9283,8765,8443  running (35m)          -  35m     513M        -  18.0.0-6366-g2015892b  1fc148641c22  bd746cba6dc9
mgr.cali004.qwkhtr         cali004  *:8443,9283,8765  running (29m)          -  29m     444M        -  18.0.0-6366-g2015892b  1fc148641c22  9c6e3c545046
mon.cali001                cali001                    running (35m)          -  35m    61.4M    2048M  18.0.0-6366-g2015892b  1fc148641c22  dec679b511a0
mon.cali004                cali004                    running (29m)          -  29m    58.7M    2048M  18.0.0-6366-g2015892b  1fc148641c22  f8ae67a07ac8
mon.cali005                cali005                    running (20m)          -  20m    47.8M    2048M  18.0.0-6366-g2015892b  1fc148641c22  795ec92a485d
mon.cali008                cali008                    running (19m)          -  19m    54.4M    2048M  18.0.0-6366-g2015892b  1fc148641c22  6612c20c4ea1
mon.cali010                cali010                    running (18m)          -  18m    41.2M    2048M  18.0.0-6366-g2015892b  1fc148641c22  1f4ffda87f09
nvmeof.rbd.cali005.oqcngh  cali005  *:5500,4420,8009  unknown                -  10m        -        -  <unknown>              <unknown>     <unknown>
osd.0                      cali005                    running (17m)          -  17m    47.8M    13.8G  18.0.0-6366-g2015892b  1fc148641c22  6c2e0c68ccc7
osd.1                      cali010                    running (17m)          -  17m    45.2M    16.8G  18.0.0-6366-g2015892b  1fc148641c22  98961119a4d5
osd.2                      cali008                    running (17m)          -  17m    52.8M    21.0G  18.0.0-6366-g2015892b  1fc148641c22  347059f36ba2
osd.3                      cali004                    running (18m)          -  18m    47.8M    26.7G  18.0.0-6366-g2015892b  1fc148641c22  7ccadf317be9
osd.4                      cali001                    running (17m)          -  17m    43.0M    11.0G  18.0.0-6366-g2015892b  1fc148641c22  674cd80f8a2c

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Bootstrap a ceph cluster with a mon IP belonging to the IPv6 address family
2. Deploy mon, mgr and osd services - successful
3. Try pulling nvmeof images - successful
4. Try deploying the nvmeof service with: ceph orch apply nvmeof rbd --placement="cali005"

Actual results: The nvmeof service does not get deployed; "ceph orch ls" reports 0/1 running and "ceph orch ps" shows the daemon in unknown state.


Expected results: The nvmeof service should deploy successfully; nvmeof should be compatible with IPv6-deployed clusters.


Additional info:

Comment 1 Aviv Caro 2023-09-26 15:23:02 UTC
Rahul, can you add some logs? What is failing?

Comment 2 Rahul Lepakshi 2023-09-27 06:21:02 UTC
Hi Aviv,

Providing logs below. The service is failing to bind to the IPv6 address. IMO we could move this BZ to the cephadm component; please suggest.

Journalctl --->
# journalctl -u ceph-46c53906-5b8c-11ee-8719-b49691cee384.cali005.oqcngh.service
Sep 25 10:40:08 cali005 systemd[1]: Starting Ceph nvmeof.rbd.cali005.oqcngh for 46c53906-5b8c-11ee-8719-b49691cee384...
Sep 25 10:40:08 cali005 bash[44624]: Trying to pull quay.io/ceph/nvmeof:0.0.3...
Sep 25 10:40:09 cali005 bash[44624]: Getting image source signatures
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:a4e7c653afb4c1c2c9ba57a7abf54ab4f496f6a4e2282546679d244a29350ae9
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:3715913301d65803af420e8191a52dcf9acdcd5b7a076a1c4eb784d76397424e
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:b0bde5ce55efe317b70dd6807c808fbc69a2f26b3e4e9eec5745d76d32a2f6fe
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:06bba564ae5b7f33b8ce87b1920e373db36a1af10a74abd7860688c896c584f5
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:a841c30977c7a8f1922fd3c582ebf003bb73ce84c0fd9616f85b653839d1bafe
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:6989a1c886e5ac18b6267e87f425875ae7362274da6133a1bd6d7f8b82b1f5cb
Sep 25 10:40:09 cali005 bash[44624]: Copying blob sha256:a6d315934e783fe9aa44b70a2e10fc1d1b5b4e9b66e5e79765d6a38fd2159b64
Sep 25 10:40:15 cali005 bash[44624]: Copying config sha256:470dd4ee78f06cb4bea675003f5e9932a1c35f0a33b7fc19cadd2909cc02548a
Sep 25 10:40:15 cali005 bash[44624]: Writing manifest to image destination
Sep 25 10:40:15 cali005 bash[44624]: Storing signatures
Sep 25 10:40:15 cali005 podman[44624]:
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:15.460476409 +0000 UTC m=+7.099066097 container create cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd (image=quay.io/ceph/nvmeof:0.0.3>
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:08.382438777 +0000 UTC m=+0.021028466 image pull  quay.io/ceph/nvmeof:0.0.3
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:15.50866632 +0000 UTC m=+7.147256007 container init cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd (image=quay.io/ceph/nvmeof:0.0.3, n>
Sep 25 10:40:15 cali005 podman[44624]: 2023-09-25 10:40:15.511719786 +0000 UTC m=+7.150309481 container start cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd (image=quay.io/ceph/nvmeof:0.0.3,>
Sep 25 10:40:15 cali005 bash[44624]: cc4319266bd6aa706a202f5ee80c57f5dc1662edd0560b78306487c072b700bd
Sep 25 10:40:15 cali005 systemd[1]: Started Ceph nvmeof.rbd.cali005.oqcngh for 46c53906-5b8c-11ee-8719-b49691cee384.
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:Starting gateway client.nvmeof.rbd.cali005.oqcngh
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: DEBUG:control.server:Starting serve
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: DEBUG:control.server:Configuring server client.nvmeof.rbd.cali005.oqcngh
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:SPDK Target Path: /usr/local/bin/nvmf_tgt
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:SPDK Socket: /var/tmp/spdk.sock
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:Starting /usr/local/bin/nvmf_tgt -u -r /var/tmp/spdk.sock
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.server:Attempting to initialize SPDK: rpc_socket: /var/tmp/spdk.sock, conn_retries: 300, timeou>
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO: Setting log level to WARN
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:JSONRPCClient(/var/tmp/spdk.sock):Setting log level to WARN
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.720280] Starting SPDK v23.01.1 / DPDK 22.11.1 initialization...
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.720343] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --no-pci --huge-unlink --log-lev>
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: TELEMETRY: No legacy callbacks, legacy socket not created
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.830386] app.c: 712:spdk_app_start: *NOTICE*: Total cores available: 1
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.871749] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 0
Sep 25 10:40:15 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:15.903814] accel_sw.c: 681:sw_accel_module_init: *NOTICE*: Accel framework software module>
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: DEBUG:control.server:create_transport: tcp options: {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr">
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: [2023-09-25 10:40:16.114947] tcp.c: 629:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: INFO:control.state:First gateway: created object nvmeof.state
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: E0925 10:40:16.156623077       2 chttp2_server.cc:1045]                UNKNOWN:Name or service not known {gr>
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: ERROR:control.server:GatewayServer exception occurred:
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]: Traceback (most recent call last):
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/control/__main__.py", line 35, in <module>
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     gateway.serve()
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/control/server.py", line 108, in serve
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     self._add_server_listener()
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/control/server.py", line 146, in _add_server_listener
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     self.server.add_insecure_port("{}:{}".format(
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/__pypackages__/3.9/lib/grpc/_server.py", line 1101, in add_insecure_port
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:     return _common.validate_port_binding_result(
Sep 25 10:40:16 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[44898]:   File "/src/__pypackages__/3.9/lib/grpc/_common.py", line 175, in validate_port_binding_result
...skipping...
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.502870] app.c: 712:spdk_app_start: *NOTICE*: Total cores available: 1
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.539937] reactor.c: 926:reactor_run: *NOTICE*: Reactor started on core 0
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.569850] accel_sw.c: 681:sw_accel_module_init: *NOTICE*: Accel framework software module>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: DEBUG:control.server:create_transport: tcp options: {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr">
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: [2023-09-25 10:41:01.588555] tcp.c: 629:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.state:nvmeof.state omap object already exists.
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: E0925 10:41:01.630135302       2 chttp2_server.cc:1045]                UNKNOWN:Name or service not known {cr>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: ERROR:control.server:GatewayServer exception occurred:
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: Traceback (most recent call last):
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/__main__.py", line 35, in <module>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     gateway.serve()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 108, in serve
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self._add_server_listener()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 146, in _add_server_listener
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self.server.add_insecure_port("{}:{}".format(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_server.py", line 1101, in add_insecure_port
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     return _common.validate_port_binding_result(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_common.py", line 175, in validate_port_binding_result
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     raise RuntimeError(_ERROR_MESSAGE_PORT_BINDING_FAILED % address)
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: RuntimeError: Failed to bind to address 2620:52:0:880:b696:91ff:fece:e844:5500; set GRPC_VERBOSITY=debug env>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.server:Terminating SPDK(client.nvmeof.rbd.cali005.oqcngh) pid 3...
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.server:Stopping the server...
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: INFO:control.server:Exiting the gateway process.
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: Traceback (most recent call last):
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     return _run_code(code, main_globals, None,
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     exec(code, run_globals)
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/__main__.py", line 35, in <module>
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     gateway.serve()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 108, in serve
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self._add_server_listener()
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/control/server.py", line 146, in _add_server_listener
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     self.server.add_insecure_port("{}:{}".format(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_server.py", line 1101, in add_insecure_port
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     return _common.validate_port_binding_result(
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:   File "/src/__pypackages__/3.9/lib/grpc/_common.py", line 175, in validate_port_binding_result
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]:     raise RuntimeError(_ERROR_MESSAGE_PORT_BINDING_FAILED % address)
Sep 25 10:41:01 cali005 ceph-46c53906-5b8c-11ee-8719-b49691cee384-nvmeof-rbd-cali005-oqcngh[46712]: RuntimeError: Failed to bind to address 2620:52:0:880:b696:91ff:fece:e844:5500; set GRPC_VERBOSITY=debug env>
Sep 25 10:41:01 cali005 podman[46774]: 2023-09-25 10:41:01.796895848 +0000 UTC m=+0.024335577 container died 110326e7efe571cbc6c26038aee94d9c7db6ba95bbccf8c09f1b6524f1bbc019 (image=quay.io/ceph/nvmeof:0.0.3, >
Sep 25 10:41:01 cali005 podman[46774]: 2023-09-25 10:41:01.808635559 +0000 UTC m=+0.036075281 container remove 110326e7efe571cbc6c26038aee94d9c7db6ba95bbccf8c09f1b6524f1bbc019 (image=quay.io/ceph/nvmeof:0.0.3>
Sep 25 10:41:01 cali005 systemd[1]: ceph-46c53906-5b8c-11ee-8719-b49691cee384.cali005.oqcngh.service: Main process exited, code=exited, status=1/FAILURE
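
For reference, this bind failure can be reproduced outside the gateway with a few lines of grpcio code. The sketch below is not the gateway's code; it only makes the same call the traceback points at (grpc.Server.add_insecure_port), and it uses the IPv6 loopback ::1 instead of the host address from the log so it can run anywhere:

from concurrent import futures

import grpc

server = grpc.server(futures.ThreadPoolExecutor(max_workers=1))
addr, port = "::1", 5500

try:
    # Unbracketed IPv6 literal: gRPC cannot separate the host from the
    # port in "::1:5500", so this raises the same RuntimeError
    # ("Failed to bind to address ...") seen in the journal above.
    server.add_insecure_port("{}:{}".format(addr, port))
except RuntimeError as exc:
    print("unbracketed target failed:", exc)

# With RFC 3986 bracket notation the target parses and the bind succeeds.
bound = server.add_insecure_port("[{}]:{}".format(addr, port))
print("bracketed target bound to port", bound)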


ceph-nvmeof.conf file ---->
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.rbd.cali005.oqcngh
group =
addr = 2620:52:0:880:b696:91ff:fece:e844
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5

[ceph]
pool = rbd
config_file = /etc/ceph/ceph.conf
id = nvmeof.rbd.cali005.oqcngh

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket = /var/tmp/spdk.sock
timeout = 60
log_level = WARN
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
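
Note the bare IPv6 literal in "addr" above. Per the traceback, the gateway builds its gRPC listen target as "{}:{}".format(addr, port), which yields an unparseable target for IPv6. A hypothetical helper like the one below (an illustration of the bracketing the target needs, not the actual fix shipped in ceph-nvmeof-1.3.3) would produce a valid target for both address families:

import ipaddress

def format_bind_target(addr: str, port: int) -> str:
    # Hypothetical helper: bracket bare IPv6 literals before passing the
    # target to grpc.Server.add_insecure_port().
    try:
        if ipaddress.ip_address(addr).version == 6:
            return "[{}]:{}".format(addr, port)
    except ValueError:
        pass  # hostname or IPv4 literal: leave unchanged
    return "{}:{}".format(addr, port)

print(format_bind_target("2620:52:0:880:b696:91ff:fece:e844", 5500))
# -> [2620:52:0:880:b696:91ff:fece:e844]:5500

Note that mon_host in the ceph.conf below already uses the bracketed form for its IPv6 addresses.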


ceph.conf file --->
# minimal ceph.conf for 46c53906-5b8c-11ee-8719-b49691cee384
[global]
        fsid = 46c53906-5b8c-11ee-8719-b49691cee384
        mon_host = [v2:[2620:52:0:880:b696:91ff:fece:e384]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e384]:6789/0] [v2:[2620:52:0:880:b696:91ff:fecd:dbae]:3300/0,v1:[2620:52:0:880:b696:91ff:fecd:dbae]:6789/0] [v2:[2620:52:0:880:b696:91ff:fece:e844]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e844]:6789/0] [v2:[2620:52:0:880:b696:91ff:fece:e01c]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e01c]:6789/0] [v2:[2620:52:0:880:b696:91ff:fece:e1b0]:3300/0,v1:[2620:52:0:880:b696:91ff:fece:e1b0]:6789/0]


ceph keyring file --->
[client.nvmeof.rbd.cali005.oqcngh]
key = AQCHYxFlHO7EFRAAHQgMlzfUufRrlZ7JmO5Kpg==

Comment 3 Aviv Caro 2023-10-03 07:22:43 UTC
This is handled in https://github.com/ceph/ceph-nvmeof/issues/247, but for the TP we probably won't have a solution. This should be listed as a limitation for 7.0.

Comment 4 Rahul Lepakshi 2023-10-04 03:45:36 UTC
Moving this to 7.1 as per the above comment and https://ibm-systems-storage.slack.com/archives/C04QC5EGBPU/p1696352787558109?thread_ts=1696317891.811259&cid=C04QC5EGBPU

Comment 6 Aviv Caro 2024-01-21 13:56:55 UTC
Rahul, can you try to reproduce on 0.0.7 or later? This should be fixed.

Comment 8 Ken Dreyer (Red Hat) 2024-01-30 15:09:42 UTC
We need 0.0.7 (or a newer version) downstream. https://pkgs.devel.redhat.com/cgit/containers/ceph-nvmeof/log/?h=ceph-7.0-rhel-9 has 0.0.5.

Comment 9 Aviv Caro 2024-04-02 19:18:37 UTC
IPv6 is not something we committed to support in 7.1.