Bug 2303178

Summary: NVMeOF subsystems created through the CLI are not reflected on the Dashboard
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Rahul Lepakshi <rlepaksh>
Component: Ceph-Dashboard
Assignee: Afreen <afrahman>
Status: CLOSED ERRATA
QA Contact: Krishna Ramaswamy <kramaswa>
Severity: high
Docs Contact: Akash Raj <akraj>
Priority: high
Version: 8.0
CC: afrahman, akraj, ceph-eng-bugs, cephqe-warriors, mmurthy, tserlin, vdas
Target Milestone: ---
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-19.1.0-29.el9cp
Doc Type: No Doc Update
Last Closed: 2024-11-25 09:04:18 UTC
Type: Bug
Bug Blocks: 2317218
Attachments: no_subsystems

Description Rahul Lepakshi 2024-08-06 14:29:29 UTC
Created attachment 2043564 [details]
no_subsystems

Description of problem:
After deploying the nvmeof service on 2 nodes, I created 3 subsystems via the CLI as below:

[ceph: root@ceph-nvme-1-wzk17m-node1-installer /]# ceph orch ps | grep nvmeof
nvmeof.nvmeof.ceph-nvme-1-wzk17m-node5.mhuetk     ceph-nvme-1-wzk17m-node5            *:5500,4420,8009  running (9m)     9m ago   9m    40.5M        -                   f54100655853  9ddb0f3a14c3
nvmeof.nvmeof.ceph-nvme-1-wzk17m-node6.mmgzzz     ceph-nvme-1-wzk17m-node6            *:5500,4420,8009  running (9m)     9m ago   9m    40.7M        -                   f54100655853  bdbd610b473e

[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 --server-port 5500 subsystem add  --subsystem nqn.2016-06.io.spdk:cnode1 --max-namespaces 32
Adding subsystem nqn.2016-06.io.spdk:cnode1: Successful
[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 --server-port 5500 subsystem add  --subsystem nqn.2016-06.io.spdk:cnode2 --max-namespaces 32
Adding subsystem nqn.2016-06.io.spdk:cnode2: Successful
[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 --server-port 5500 subsystem add  --subsystem nqn.2016-06.io.spdk:cnode3 --max-namespaces 1024
Adding subsystem nqn.2016-06.io.spdk:cnode3: Successful

[root@ceph-nvme-1-wzk17m-node5 cephuser]# podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8  --server-address 10.0.65.219 subsystem list
Subsystems:
╒═══════════╤════════════════════════════╤════════════╤════════════════════╤══════════════════╤═════════════╤══════════════╕
│ Subtype   │ NQN                        │ HA State   │ Serial             │ Controller IDs   │   Namespace │          Max │
│           │                            │            │ Number             │                  │       Count │   Namespaces │
╞═══════════╪════════════════════════════╪════════════╪════════════════════╪══════════════════╪═════════════╪══════════════╡
│ NVMe      │ nqn.2016-06.io.spdk:cnode1 │ enabled    │ Ceph19065165288082 │ 2041-4080        │           0 │           32 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode2 │ enabled    │ Ceph56710757716381 │ 2041-4080        │           0 │           32 │
├───────────┼────────────────────────────┼────────────┼────────────────────┼──────────────────┼─────────────┼──────────────┤
│ NVMe      │ nqn.2016-06.io.spdk:cnode3 │ enabled    │ Ceph5511596265654  │ 2041-4080        │           0 │         1024 │
╘═══════════╧════════════════════════════╧════════════╧════════════════════╧══════════════════╧═════════════╧══════════════╛
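
The listing above was taken from the gateway on node5. The gateway on node6 can be queried the same way to confirm both report the same subsystems; its server address is not captured in this report, so the value below is a placeholder:

# Same check against the second gateway (ceph-nvme-1-wzk17m-node6);
# <node6-IP> is a placeholder, not taken from this report
podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.17-8 --server-address <node6-IP> --server-port 5500 subsystem list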

But the same subsystems are not visible on the Dashboard at Block >> NVMe/TCP >> Subsystems.

Version-Release number of selected component (if applicable):
[ceph: root@ceph-nvme-1-wzk17m-node1-installer /]# ceph version
ceph version 19.1.0-16.el9cp (2cf55deb566fb4f798204b7f111579c3e240b84a) squid

nvmeof - cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.2.17-6


How reproducible: Always


Steps to Reproduce:
1. Deploy the nvmeof service with the "orch apply" command on the admin node and create subsystems. Deployment and subsystem creation succeed, and the service itself is visible on the Dashboard at Administration >> Services (a minimal deployment sketch follows this list).
2. Navigate to Block >> NVMe/TCP >> Subsystems to view the subsystems created via the CLI.
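
For reference, a minimal sketch of the step 1 deployment; the pool name "nvmeof_pool" and the exact placement are assumptions for illustration, not taken from this report:

# Create and initialize an RBD pool for the gateways (pool name is hypothetical)
ceph osd pool create nvmeof_pool
rbd pool init nvmeof_pool
# Deploy the nvmeof service on the two gateway nodes; depending on the
# release, a gateway group name may also be required after the pool name
ceph orch apply nvmeof nvmeof_pool --placement="ceph-nvme-1-wzk17m-node5 ceph-nvme-1-wzk17m-node6"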


Actual results: No subsystems are seen at Block >> NVMe/TCP >> Subsystems


Expected results: The Dashboard should stay in sync with the CLI, so the same subsystem configuration is visible in both.
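
One way to cross-check what the Dashboard backend returns, assuming this build exposes an NVMe-oF REST endpoint (the /api/nvmeof/subsystem path and port 8443 below are assumptions, not confirmed by this report):

# Hypothetical REST check against the active mgr; obtain <token> via
# POST /api/auth first, and replace <mgr-host> with the active mgr node
curl -k -H "Accept: application/vnd.ceph.api.v1.0+json" \
     -H "Authorization: Bearer <token>" \
     https://<mgr-host>:8443/api/nvmeof/subsystem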


Additional info: See attachment

Comment 8 errata-xmlrpc 2024-11-25 09:04:18 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216