Bug 2247616 - NVMeof GW deployment failing with 0.0.5 version
Summary: NVMeof GW deployment failing with 0.0.5 version
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NVMeOF
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 7.0
Assignee: Aviv Caro
QA Contact: Manohar Murthy
Docs Contact: Rivka Pollack
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-11-02 12:00 UTC by Sunil Kumar Nagaraju
Modified: 2024-04-12 04:25 UTC
CC List: 9 users

Fixed In Version: ceph-18.2.0-114.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-12-13 15:24:49 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-7845 (last updated 2023-11-02 12:02:10 UTC)
Red Hat Product Errata RHBA-2023:7780 (last updated 2023-12-13 15:24:52 UTC)

Comment 1 Aviv Caro 2023-11-02 14:58:31 UTC
Please change the field "enable_discovery_controller" to "enable_spdk_discovery_controller", and set it to true.

Comment 2 Manohar Murthy 2023-11-02 15:35:27 UTC
Targeting this for 7.0, as we need the v5 version for TP.

Comment 8 Sunil Kumar Nagaraju 2023-11-06 12:12:03 UTC
Deployment is successful with ceph version 18.2.0-116 and nvmeof image 0.0.5-1.

[ceph: root@ceph-1sunilkumar-uriqzt-node1-installer /]# ceph orch ps --daemon_type nvmeof --format json-pretty

[
  {
    "container_id": "6fdf6d9df670",
    "container_image_digests": [
      "registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof@sha256:1f1ffac74f2cd5f9748d4549872ecff5a958451edcae4bf6d0113c108607d743",
      "registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof@sha256:5a2d64954de518ffd33af352aa6d1d752f1f92af7da202487a0f744a340f37a3"
    ],
    "container_image_id": "756696fa8cf06dbdd4cece549353b7634c922321c0c5aeaaa9023e89ee4519a5",
    "container_image_name": "registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:0.0.5-1",
    "cpu_percentage": "396.79%",
    "created": "2023-11-06T11:29:19.215452Z",
    "daemon_id": "rbd.ceph-1sunilkumar-uriqzt-node6.eoefgy",
    "daemon_name": "nvmeof.rbd.ceph-1sunilkumar-uriqzt-node6.eoefgy",
    "daemon_type": "nvmeof",
    "events": [
      "2023-11-06T11:29:19.242834Z daemon:nvmeof.rbd.ceph-1sunilkumar-uriqzt-node6.eoefgy [INFO] \"Deployed nvmeof.rbd.ceph-1sunilkumar-uriqzt-node6.eoefgy on host 'ceph-1sunilkumar-uriqzt-node6'\""
    ],
    "hostname": "ceph-1sunilkumar-uriqzt-node6",
    "is_active": false,
    "last_refresh": "2023-11-06T11:58:58.508235Z",
    "memory_usage": 39877345,
    "ports": [
      5500,
      4420,
      8009
    ],
    "service_name": "nvmeof.rbd",
    "started": "2023-11-06T11:48:44.884317Z",
    "status": 1,
    "status_desc": "running",
    "version": ""
  }
]

[ceph: root@ceph-1sunilkumar-uriqzt-node1-installer /]# ceph version 
ceph version 18.2.0-116.el9cp (6c6dfc1f2b2ce1896bf2696daea444dff96645af) reef (stable)


[ceph: root@ceph-1sunilkumar-uriqzt-node1-installer /]# 
[ceph: root@ceph-1sunilkumar-uriqzt-node1-installer /]# ceph orch ls
NAME                       PORTS             RUNNING  REFRESHED  AGE  PLACEMENT                      
alertmanager               ?:9093,9094           1/1  2m ago     69m  count:1                        
ceph-exporter                                    6/6  8m ago     69m  *                              
crash                                            6/6  8m ago     69m  *                              
grafana                    ?:3000                1/1  2m ago     69m  count:1                        
mgr                                              2/2  2m ago     68m  label:mgr                      
mon                                              3/3  2m ago     66m  label:mon                      
node-exporter              ?:9100                6/6  8m ago     69m  *                              
nvmeof.rbd                 ?:4420,5500,8009      1/1  8m ago     38m  ceph-1sunilkumar-uriqzt-node6  
osd.all-available-devices                         12  2m ago     62m  *                              
prometheus                 ?:9095                1/1  2m ago     69m  count:1  


[root@ceph-1sunilkumar-uriqzt-node6 ~]# podman ps -a | grep nvmeof.rbd
6fdf6d9df670  registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:0.0.5-1                                                             -c ceph-nvmeof.co...  19 minutes ago     Up 19 minutes                          ceph-3f361aa2-7c93-11ee-83ea-fa163e37915e-nvmeof-rbd-ceph-1sunilkumar-uriqzt-node6-eoefgy

[root@ceph-1sunilkumar-uriqzt-node6 nvmeof.rbd.ceph-1sunilkumar-uriqzt-node6.eoefgy]# cat ceph-nvmeof.conf 
# This file is generated by cephadm.
[gateway]
name = client.nvmeof.rbd.ceph-1sunilkumar-uriqzt-node6.eoefgy
group = None
addr = 10.0.208.194
port = 5500
enable_auth = False
state_update_notify = True
state_update_interval_sec = 5
enable_spdk_discovery_controller = true

[ceph]
pool = rbd
config_file = /etc/ceph/ceph.conf
id = nvmeof.rbd.ceph-1sunilkumar-uriqzt-node6.eoefgy

[mtls]
server_key = ./server.key
client_key = ./client.key
server_cert = ./server.crt
client_cert = ./client.crt

[spdk]
tgt_path = /usr/local/bin/nvmf_tgt
rpc_socket = /var/tmp/spdk.sock
timeout = 60
log_level = WARN
conn_retries = 10
transports = tcp
transport_tcp_options = {"in_capsule_data_size": 8192, "max_io_qpairs_per_ctrlr": 7}
tgt_cmd_extra_args = --cpumask=0xF

Comment 9 Akash Raj 2023-11-21 07:10:53 UTC
Hi Aviv.

Could you please confirm if this BZ needs to be added to the 7.0 release notes? If so, please provide the doc type and text.

Thanks.

Comment 10 Aviv Caro 2023-11-21 08:07:48 UTC
Hi Akash, 

It is fixed, so if the release includes this fix, I don't see any reason to add it to the release notes.

Aviv

Comment 11 errata-xmlrpc 2023-12-13 15:24:49 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780

Comment 12 Red Hat Bugzilla 2024-04-12 04:25:38 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.

