Bug 2304808 - Fail orch apply nvmeof command while trying to deploy nvmeof service on a GW node which is already part of another GW group
Keywords:
Status: CLOSED DUPLICATE of bug 2283976
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NVMeOF
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 8.0
Assignee: Aviv Caro
QA Contact: Manohar Murthy
Docs Contact: ceph-doc-bot
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-08-14 07:25 UTC by Rahul Lepakshi
Modified: 2024-09-07 15:52 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-09-07 15:52:09 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-9728 0 None None None 2024-09-07 15:52:48 UTC

Description Rahul Lepakshi 2024-08-14 07:25:03 UTC
Description of problem:
The ceph orch apply nvmeof command should fail with a message like "GW '<ceph_node>' is already part of another GW group, please place another node to deploy this service" when the requested placement includes a node that is already part of another GW group. The same GW/Ceph node can never belong to two NVMe-oF gateway groups, so the apply command should fail and the user should be notified. As it stands the command is accepted, but the deployment never succeeds and stays stuck at 1/2 RUNNING daemons.


[ceph: root@ceph-nvme-1-f3l8fb-node1-installer /]# ceph orch ls | grep nvmeof
nvmeof.nvmeof.-            ?:4420,5500,8009      1/1  4m ago     70m   ceph-nvme-1-f3l8fb-node2
nvmeof.nvmeof..            ?:4420,5500,8009      1/1  7m ago     71m   ceph-nvme-1-f3l8fb-node6
nvmeof.nvmeof.1            ?:4420,5500,8009      1/1  2s ago     25m   ceph-nvme-1-f3l8fb-node5
nvmeof.nvmeof._            ?:4420,5500,8009      1/1  7m ago     71m   ceph-nvme-1-f3l8fb-node4
nvmeof.nvmeof.gw2          ?:4420,5500,8009      1/1  7m ago     92m   ceph-nvme-1-f3l8fb-node3
[ceph: root@ceph-nvme-1-f3l8fb-node1-installer /]# ceph orch apply nvmeof nvmeof 1 --placement="ceph-nvme-1-f3l8fb-node5,ceph-nvme-1-f3l8fb-node3"
Scheduled nvmeof.nvmeof.1 update...
[ceph: root@ceph-nvme-1-f3l8fb-node1-installer /]# ceph orch ls | grep nvmeof
nvmeof.nvmeof.-            ?:4420,5500,8009      1/1  5m ago     71m   ceph-nvme-1-f3l8fb-node2
nvmeof.nvmeof..            ?:4420,5500,8009      1/1  8m ago     72m   ceph-nvme-1-f3l8fb-node6
nvmeof.nvmeof.1            ?:4420,5500,8009      1/2  84s ago    2s    ceph-nvme-1-f3l8fb-node5;ceph-nvme-1-f3l8fb-node3 -----> node3 is part of 2 groups
nvmeof.nvmeof._            ?:4420,5500,8009      1/1  8m ago     72m   ceph-nvme-1-f3l8fb-node4
nvmeof.nvmeof.gw2          ?:4420,5500,8009      1/1  8m ago     93m   ceph-nvme-1-f3l8fb-node3    -----> node3 is part of 2 groups
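
For illustration, the requested pre-flight check could look roughly like the Python sketch below. This is a minimal sketch only, not cephadm's actual code: the function name validate_nvmeof_placement and the shape of the placement data are assumptions made for this example.

# Minimal sketch of the requested check, NOT cephadm's actual implementation.
# existing_placements maps each nvmeof service (GW group) to its set of hosts.
def validate_nvmeof_placement(new_service_id, new_hosts, existing_placements):
    for service_id, hosts in existing_placements.items():
        if service_id == new_service_id:
            continue  # re-applying the same GW group is allowed
        overlap = set(new_hosts) & set(hosts)
        if overlap:
            node = sorted(overlap)[0]
            raise ValueError(
                f"GW '{node}' is already part of another GW group "
                f"('{service_id}'), please place another node to deploy this service"
            )

if __name__ == "__main__":
    # Mirrors the transcript above: node3 already belongs to group gw2,
    # so adding it to group 1 should be rejected instead of scheduled.
    existing = {
        "nvmeof.nvmeof.gw2": {"ceph-nvme-1-f3l8fb-node3"},
        "nvmeof.nvmeof.1": {"ceph-nvme-1-f3l8fb-node5"},
    }
    try:
        validate_nvmeof_placement(
            "nvmeof.nvmeof.1",
            ["ceph-nvme-1-f3l8fb-node5", "ceph-nvme-1-f3l8fb-node3"],
            existing,
        )
    except ValueError as err:
        print(f"Error EINVAL: {err}")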

Version-Release number of selected component (if applicable):
ceph version 19.1.0-22.el9cp (e5b7dfedb7d8a66d166eb0f98361f71bdb7905ad) squid (rc)
cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.2.17-9 

How reproducible: Always


Steps to Reproduce:
1. Deploy a new nvmeof service whose placement includes a node/gateway node that is already part of another GW group.

Actual results: The orch apply command schedules the nvmeof deployment, which then stays stuck at 1/2 RUNNING daemons.


Expected results: The orch apply command should not schedule the nvmeof deployment; it should error out with a message telling the user that the node is already part of another GW group.
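
With a check like the sketch above in place, the failure might look like the following. This transcript is hypothetical; the exact wording and error code are up to the fix, and only the GW-group message is taken from this report.

[ceph: root@ceph-nvme-1-f3l8fb-node1-installer /]# ceph orch apply nvmeof nvmeof 1 --placement="ceph-nvme-1-f3l8fb-node5,ceph-nvme-1-f3l8fb-node3"
Error EINVAL: GW 'ceph-nvme-1-f3l8fb-node3' is already part of another GW group, please place another node to deploy this service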


Additional info:

