Bug 2303178 - NVMeOF subsystems created through CLI are not getting reflected on Dashboard
Summary: NVMeOF subsystems created through CLI are not getting reflected on Dashboard
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Dashboard
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 8.0
Assignee: Afreen
QA Contact: Krishna Ramaswamy
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2317218
 
Reported: 2024-08-06 14:29 UTC by Rahul Lepakshi
Modified: 2024-11-25 09:04 UTC
CC List: 7 users

Fixed In Version: ceph-19.1.0-29.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-11-25 09:04:18 UTC
Embargoed:


Attachments
no_subsystems (15.51 KB, image/png)
2024-08-06 14:29 UTC, Rahul Lepakshi


Links
Red Hat Issue Tracker RHCEPH-9510 (last updated 2024-08-22 11:20:42 UTC)
Red Hat Issue Tracker RHCSDASH-1549 (last updated 2024-08-22 11:20:47 UTC)
Red Hat Product Errata RHBA-2024:10216 (last updated 2024-11-25 09:04:23 UTC)

Description Rahul Lepakshi 2024-08-06 14:29:29 UTC
Created attachment 2043564: no_subsystems

Description of problem:
After deploying the nvmeof service on 2 nodes, I created 3 subsystems via the CLI as below:

[ceph: root@ceph-nvme-1-wzk17m-node1-installer /]# ceph orch ps | grep nvmeof
nvmeof.nvmeof.ceph-nvme-1-wzk17m-node5.mhuetk     ceph-nvme-1-wzk17m-node5            *:5500,4420,8009  running (9m)     9m ago   9m    40.5M        -                   f54100655853  9ddb0f3a14c3
nvmeof.nvmeof.ceph-nvme-1-wzk17m-node6.mmgzzz     ceph-nvme-1-wzk17m-node6            *:5500,4420,8009  running (9m)     9m ago   9m    40.7M        -                   f54100655853  bdbd610b473e

[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 --server-port 5500 subsystem add  --subsystem nqn.2016-06.io.spdk:cnode1 --max-namespaces 32
Adding subsystem nqn.2016-06.io.spdk:cnode1: Successful
[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 --server-port 5500 subsystem add  --subsystem nqn.2016-06.io.spdk:cnode2 --max-namespaces 32
Adding subsystem nqn.2016-06.io.spdk:cnode2: Successful
[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 --server-port 5500 subsystem add  --subsystem nqn.2016-06.io.spdk:cnode3 --max-namespaces 1024
Adding subsystem nqn.2016-06.io.spdk:cnode3: Successful

[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 subsystem list
Subsystems:
╒═══════════╤════════════════════════════╤════════════╤════════════════════╤══════════════════╤═════════════╤══════════════╕
│ Subtype   │ NQN                        │ HA State   │ Serial             │ Controller IDs   │   Namespace │          Max │
│           │                            │            │ Number             │                  │       Count │   Namespaces │
╞═══════════╪════════════════════════════╪════════════╪════════════════════╪══════════════════╪═════════════╪══════════════╡
│ NVMe      │ nqn.2016-06.io.spdk:cnode1 │ enabled    │ Ceph19065165288082 │ 2041-4080        │           0 │           32 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode2 │ enabled    │ Ceph56710757716381 │ 2041-4080        │           0 │           32 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode3 │ enabled    │ Ceph5511596265654  │ 2041-4080        │           0 │         1024 │
╘═══════════╧════════════════════════════╧════════════╧════════════════════╧══════════════════╧═════════════╧══════════════╛

However, the same subsystems are not visible on the dashboard at Block>>NVMe/TCP>>Subsystems.
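As a cross-check, the dashboard's own REST API can be queried to see what it returns for subsystems. The sketch below is an example under assumptions, not output from this cluster: the /api/nvmeof/subsystem path, port 8443, and the credentials are placeholders to be verified against the dashboard API documentation for this release.

# Obtain a bearer token from the dashboard REST API (host and credentials are placeholders)
curl -k -s -X POST "https://<dashboard-host>:8443/api/auth" \
  -H "Accept: application/vnd.ceph.api.v1.0+json" \
  -H "Content-Type: application/json" \
  -d '{"username": "<user>", "password": "<password>"}'

# List the subsystems the dashboard backend sees (the endpoint path is an assumption)
curl -k -s "https://<dashboard-host>:8443/api/nvmeof/subsystem" \
  -H "Accept: application/vnd.ceph.api.v1.0+json" \
  -H "Authorization: Bearer <token>"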
 

Version-Release number of selected component (if applicable):
[ceph: root@ceph-nvme-1-wzk17m-node1-installer /]# ceph version
ceph version 19.1.0-16.el9cp (2cf55deb566fb4f798204b7f111579c3e240b84a) squid

nvmeof - cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.2.17-6


How reproducible: Always


Steps to Reproduce:
1. Deploy the nvmeof service with the "orch apply" command on the admin node and create subsystems. Deployment and subsystem creation succeed, and the service is visible on the dashboard at Administration >> Services. A sample service spec is sketched after these steps.
2. Navigate to Block>>NVMe/TCP>>Subsystems to view the subsystems created via the CLI.
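For reference, the "orch apply" deployment in step 1 can be driven by a cephadm service spec along these lines. This is a minimal sketch, not the spec used in this report: the pool name is a placeholder, and the exact fields should be checked against the cephadm documentation for this release. The host names are taken from the "ceph orch ps" output above.

# nvmeof.yaml (sketch; the pool name is a placeholder)
service_type: nvmeof
service_id: nvmeof
placement:
  hosts:
    - ceph-nvme-1-wzk17m-node5
    - ceph-nvme-1-wzk17m-node6
spec:
  pool: <rbd-pool>

# Apply the spec from the admin node
ceph orch apply -i nvmeof.yaml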


Actual results: No subsystems are seen at Block>>NVMe/TCP>>Subsystems


Expected results: The CLI and dashboard should remain in sync, with the same configuration visible in both.


Additional info: See attachment
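Since the service runs on two gateways, the same listing command used above can also be pointed at the second gateway to confirm that both report the subsystems. The second gateway's address below is a placeholder; everything else is the command already shown in this report.

# Repeat the subsystem listing against the node6 gateway (address is a placeholder)
podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8 --server-address <node6-address> --server-port 5500 subsystem list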

Comment 8 errata-xmlrpc 2024-11-25 09:04:18 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216

