Bug 2323851

Summary: ceph versions command shows wrong information for nvmeof
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Krishna Ramaswamy <kramaswa>
Component: NVMeOF
Assignee: harika chebrolu <hchebrol>
Status: VERIFIED
QA Contact: Rahul Lepakshi <rlepaksh>
Severity: urgent
Docs Contact: ceph-doc-bot <ceph-doc-bugzilla>
Priority: urgent
Version: 8.0
CC: aindenba, bdavidov, bkunal, cephqe-warriors, gbregman, hchebrol, kjosy, lchernin, linuxkidd, pdhange, rlepaksh, rpollack, tserlin, vumrao
Target Milestone: ---
Keywords: External
Target Release: 8.0z2
Flags: rlepaksh: needinfo-
Hardware: Unspecified
OS: Linux
Whiteboard:
Fixed In Version: ceph-19.2.0-61.el9cp
Doc Type: Known Issue
Doc Text:
.Incorrect NVMe-oF gateway version output from `ceph versions` command

The `ceph versions` command output includes an `nvmeof` entry, but it reports the IBM Storage Ceph version rather than the NVMe-oF gateway version. As a workaround, query the gateway directly with the `gw info` command:

----
nvmeof-cli --server-address GATEWAY_IP --server-port SERVER_PORT gw info
----
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2317218

Description Krishna Ramaswamy 2024-11-05 12:42:44 UTC
Description of problem: After upgrading from 19.2.0-49.el9cp to 19.2.0-52.el9cp, the ceph versions command shows incorrect version information for the nvmeof service.


Version-Release number of selected component (if applicable):
[ceph: root@dhcp44-28 /]# ceph versions
{
    "mon": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 5
    },
    "mgr": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 2
    },
    "osd": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 15
    },
    "nvmeof": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 4
    },
    "overall": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 26
    }
}



Steps to Reproduce:
1. Upgrade a Ceph 8.0 cluster from 19.2.0-49.el9cp to 19.2.0-52.el9cp (see the sketch after these steps).
2. Run the ceph versions command.
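A minimal sketch of the upgrade step driven through cephadm; TARGET_IMAGE is a placeholder for the registry path of the 19.2.0-52.el9cp build, which is not given in this report:

```
# Start a cephadm-managed upgrade to the target build; TARGET_IMAGE is a placeholder
ceph orch upgrade start --image TARGET_IMAGE

# Watch progress until the upgrade completes
ceph orch upgrade status
```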


Actual results: The nvmeof entry reports the Ceph build version rather than the gateway version:
"nvmeof": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 4
    },


Expected results: The ceph versions output should either report the actual NVMe-oF gateway version or omit the nvmeof entry entirely.
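For comparison, the gateway's real version can be read from the gateway itself, per the documented workaround; a minimal sketch, using the gateway gRPC port 5500 shown in the ceph orch ls output below (GATEWAY_IP is a placeholder):

```
# Query one gateway directly; GATEWAY_IP is a placeholder for a gateway host address
nvmeof-cli --server-address GATEWAY_IP --server-port 5500 gw info
```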


Additional info:
-----------------

[ceph: root@dhcp44-28 /]# ceph orch ls
NAME                       PORTS             RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094           1/1  4s ago     6h   count:1
ceph-exporter                                    5/5  34s ago    2w   count:5
crash                                            5/5  34s ago    2w   count:5
mgr                                              2/2  19s ago    2w   count:2
mon                                              5/5  34s ago    2w   count:5
node-exporter              ?:9100                5/5  34s ago    13d  *
nvmeof.rbd1.grp1           ?:4420,5500,8009      2/2  4s ago     15s  dhcp46-120.lab.eng.blr.redhat.com;dhcp47-112.lab.eng.blr.redhat.com
nvmeof.rbd1.grp2           ?:4420,5500,8009      2/2  34s ago    48s  dhcp44-69.lab.eng.blr.redhat.com;dhcp46-149.lab.eng.blr.redhat.com
osd.all-available-devices                         12  34s ago    2w   *
prometheus                 ?:9095                1/1  4s ago     8h   count:1
[ceph: root@dhcp44-28 /]# ceph orch ls
NAME                       PORTS             RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094           1/1  34s ago    6h   count:1
ceph-exporter                                    5/5  64s ago    2w   count:5
crash                                            5/5  64s ago    2w   count:5
mgr                                              2/2  49s ago    2w   count:2
mon                                              5/5  64s ago    2w   count:5
node-exporter              ?:9100                5/5  64s ago    13d  *
nvmeof.rbd1.grp1           ?:4420,5500,8009      2/2  34s ago    46s  dhcp46-120.lab.eng.blr.redhat.com;dhcp47-112.lab.eng.blr.redhat.com
nvmeof.rbd1.grp2           ?:4420,5500,8009      2/2  64s ago    78s  dhcp44-69.lab.eng.blr.redhat.com;dhcp46-149.lab.eng.blr.redhat.com
osd.all-available-devices                         12  64s ago    2w   *
prometheus                 ?:9095                1/1  34s ago    8h   count:1
[ceph: root@dhcp44-28 /]# ceph versions
{
    "mon": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 5
    },
    "mgr": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 2
    },
    "osd": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 15
    },
    "nvmeof": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 4
    },
    "overall": {
        "ceph version 19.2.0-52.el9cp (198c75de92aec6de59bc20028c0453bf3e4a0fa7) squid (stable)": 26
    }
}
[ceph: root@dhcp44-28 /]# ceph config dump | grep nvme
mgr  advanced  mgr/cephadm/container_image_nvmeof  cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.3.3-6  *
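The image tag above (1.3.3-6) is the actual gateway version, which makes the mismatch with the 19.2.0-52.el9cp string in ceph versions easy to see; a minimal sketch of reading the configured image back directly:

```
# Print the nvmeof container image cephadm is configured to deploy
ceph config get mgr mgr/cephadm/container_image_nvmeof
```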
[ceph: root@dhcp44-28 /]# ceph -s
  cluster:
    id:     a0b76122-8b89-11ef-8c22-005056bb8f0a
    health: HEALTH_OK

  services:
    mon:    5 daemons, quorum dhcp44-28,dhcp46-120,dhcp47-112,dhcp46-149,dhcp44-69 (age 79m)
    mgr:    dhcp44-28.luvhri(active, since 8h), standbys: dhcp46-120.xfftck
    osd:    15 osds: 15 up (since 72m), 15 in (since 86m)
    nvmeof: 4 gateways active (4 hosts)

  data:
    pools:   5 pools, 129 pgs
    objects: 4.76k objects, 3.0 GiB
    usage:   9.7 GiB used, 3.6 TiB / 3.6 TiB avail
    pgs:     129 active+clean

  io:
    client:   525 KiB/s rd, 5 op/s rd, 0 op/s wr

[ceph: root@dhcp44-28 /]#

Comment 1 Storage PM bot 2024-11-05 12:42:54 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 3 Vallari 2025-01-16 11:14:32 UTC
Merged upstream PR: https://github.com/ceph/ceph/pull/61349

Opened downstream 8.0 fix MR: https://gitlab.cee.redhat.com/ceph/ceph/-/merge_requests/894

Comment 4 Vallari 2025-01-22 06:29:14 UTC
Commit pushed to ceph-8.0-rhel-patches branch: https://gitlab.cee.redhat.com/ceph/ceph/-/commit/1addfd37086eff688a3ec62ee4b6aa98d5982a31

Comment 5 Vallari 2025-01-22 07:08:12 UTC
With the fix, the "ceph versions" command no longer includes the nvmeof service:

```
[root@ceph-nvme-vm14 ~]# ceph versions
{
    "mon": {
        "ceph version 19.3.0-6956-g3df0b2f9 (3df0b2f949c732e4f2f0bda96b8a05766563cfe7) squid (dev)": 4
    },
    "mgr": {
        "ceph version 19.3.0-6956-g3df0b2f9 (3df0b2f949c732e4f2f0bda96b8a05766563cfe7) squid (dev)": 4
    },
    "osd": {
        "ceph version 19.3.0-6956-g3df0b2f9 (3df0b2f949c732e4f2f0bda96b8a05766563cfe7) squid (dev)": 4
    },
    "overall": {
        "ceph version 19.3.0-6956-g3df0b2f9 (3df0b2f949c732e4f2f0bda96b8a05766563cfe7) squid (dev)": 12
    }
}
[root@ceph-nvme-vm14 ~]# ceph orch ls
NAME                       PORTS             RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager               ?:9093,9094           1/1  6m ago     8h   count:1
ceph-exporter              ?:9926                4/4  6m ago     8h   *
crash                                            4/4  6m ago     8h   *
grafana                    ?:3000                1/1  6m ago     8h   count:1
mgr                                              4/4  6m ago     8h   label:mgr
mon                                              4/4  6m ago     8h   label:mon
node-exporter              ?:9100                4/4  6m ago     8h   *
nvmeof.mypool.mygroup1     ?:4420,5500,8009      4/4  6m ago     8h   ceph-nvme-vm14;ceph-nvme-vm13;ceph-nvme-vm12;ceph-nvme-vm11
osd.all-available-devices                          4  6m ago     8h   *
prometheus                 ?:9095                1/1  6m ago     8h   count:1
[root@ceph-nvme-vm14 ~]#
```
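As an additional scripted check, assuming jq is available (this is a sketch, not part of the original verification run):

```
# Exit 0 only if the nvmeof key is absent from the ceph versions output
ceph versions | jq -e 'has("nvmeof") | not'
```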