Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2317173

Summary: IOs fail on a newly added GW when it is assigned to the same grp id as a deleting (scaled-down) GW, because listener addition is not handled
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Rahul Lepakshi <rlepaksh>
Component: NVMeOF
Assignee: Leonid <lchernin>
Status: CLOSED UPSTREAM
QA Contact: Rahul Lepakshi <rlepaksh>
Severity: urgent
Docs Contact: ceph-doc-bot <ceph-doc-bugzilla>
Priority: unspecified
Version: 8.0
CC: acaro, aindenba, bdavidov, bhkaur, bkunal, cephqe-warriors, gbregman, kjosy, lchernin, linuxkidd, pdhange, rpollack, tserlin, vumrao
Target Milestone: ---
Keywords: External, TestBlocker
Target Release: 8.0z2
Flags: rlepaksh: needinfo-
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-nvmeof-container-1.3.4-1
Doc Type: Known Issue
Doc Text:
.Adding listeners for other hosts is now supported
Previously, if the number of NVMe-oF gateways within a gateway group was reduced and then increased (a scale-down followed by a scale-up), I/O operations were disrupted on the new gateway nodes. With this fix, in gateway version 1.3.4, adding listeners for other hosts is supported: listeners can be defined when replacing a node, with the new node applied to the group afterward. For more information, see Managing load balancing with scale-up and scale-down.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2026-03-04 08:55:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2317218

Description Rahul Lepakshi 2024-10-08 08:27:56 UTC
Description of problem:
Refer to https://bugzilla.redhat.com/show_bug.cgi?id=2310380#c0, step 4 - when another GW is added while a DELETING state still exists on a GW, the new GW is assigned to the same grp id (Failback). The assignment itself happens correctly, but after Failback to the newly added GW, IOs cannot continue on the namespaces of this ana_grp because listener addition on the new GW is not handled, and hence these namespaces cannot be discovered from initiators.

Does that mean that the user has to manually create a listener in this scenario?
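For gateway versions before the 1.3.4 fix, the practical workaround would be to create the listener on the replacement gateway manually. The sketch below only prints the nvmeof-cli invocation it would run; the container image tag, subsystem NQN, gateway host name, and IP are placeholders taken from this cluster, and the exact CLI flags should be checked against the installed nvmeof-cli version:

```shell
# Hypothetical manual-listener workaround sketch. Image tag, subsystem NQN,
# host name, and IP below are placeholders, not verified values.
SUBSYSTEM="nqn.2016-06.io.spdk:cnode1.group1"
NEW_GW_HOST="ceph-8-0-bug-tx4ujy-node8"
NEW_GW_IP="10.0.65.140"

# Build and print the command instead of executing it, since running it
# requires a live gateway gRPC endpoint.
build_listener_add() {
    echo "podman run cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.3.2-11 \
--server-address $NEW_GW_IP --server-port 5500 \
listener add --subsystem $SUBSYSTEM \
--host-name $NEW_GW_HOST --traddr $NEW_GW_IP --trsvcid 4420"
}
build_listener_add
```

Running the printed command against the new gateway would recreate the listener that the scale-up path fails to add automatically.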


Version-Release number of selected component (if applicable):
ceph version 19.2.0-10.el9cp 
container_image_nvmeof         cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.3.2-11

How reproducible: Always


Steps to Reproduce:
1. Perform a scale-down of GW(s) within a GW group using the orch apply command,
 following the steps at https://bugzilla.redhat.com/show_bug.cgi?id=2310380#c0
2. On the initiator, rediscover the GW groups using "nvme discover -t tcp -a 10.0.64.180 -s 8009 -l 1800"
3. Namespaces assigned to the ana_grp ID of the newly added GW disappear and IOs fail
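The namespace loss in step 3 can be checked mechanically by diffing the Node column of `nvme list` captured before and after the scale operations. A minimal sketch, using abbreviated sample device names rather than a live initiator:

```shell
# Diff two `nvme list` Node-column captures to find namespaces that
# disappeared after the scale-down/scale-up (sample data, not live output).
before="/dev/nvme1n1
/dev/nvme1n2
/dev/nvme21n1
/dev/nvme21n2"
after="/dev/nvme1n1
/dev/nvme1n2"

lost_namespaces() {
    # Print lines present in the first capture but not in the second.
    comm -23 <(sort <<<"$1") <(sort <<<"$2")
}
lost_namespaces "$before" "$after"
```

On a real initiator, the two variables would be filled from `nvme list | awk 'NR>2 {print $1}'` runs taken before and after the orch apply.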

Actual results: namespaces assigned to the ana_grp ID of the newly added GW disappear and IOs fail


Expected results: After step 4 (Failback) of https://bugzilla.redhat.com/show_bug.cgi?id=2310380#c0 and re-discovery of the GWs from the initiator, namespaces assigned to the ana_grp ID of the newly added GW should not disappear, and IOs should seamlessly restart


Additional info:

Gateway

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 74,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 5 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node7.wuqhxy",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        }
    ]
}
[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph orch apply nvmeof nvmeof_pool group1 --placement='ceph-8-0-bug-tx4ujy-node3 ceph-8-0-bug-tx4ujy-node4 ceph-8-0-bug-tx4ujy-node5 ceph-8-0-bug-tx4ujy-node6'
Scheduled nvmeof.nvmeof_pool.group1 update...

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 76,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node7.wuqhxy",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 0,
            "Availability": "DELETING",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        }
    ]
}

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph orch apply nvmeof nvmeof_pool group1 --placement='ceph-8-0-bug-tx4ujy-node3 ceph-8-0-bug-tx4ujy-node4 ceph-8-0-bug-tx4ujy-node5 ceph-8-0-bug-tx4ujy-node6 ceph-8-0-bug-tx4ujy-node8'
Scheduled nvmeof.nvmeof_pool.group1 update...
[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 76,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node7.wuqhxy",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 0,
            "Availability": "DELETING",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        }
    ]
}

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 78,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 5 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: WAIT_FAILBACK_PREPARED "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node8.agdzmb",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: OWNER_FAILBACK_PREPARED "
        }
    ]
}

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 79,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 5 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node8.agdzmb",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        }
    ]
}
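While waiting for the failback above to settle, the transient states can be watched by counting gateways that still report DELETING in the `ceph nvme-gw show` output. A minimal sketch, with `show_output` standing in for the live command's JSON:

```shell
# Count gateways stuck in DELETING in `ceph nvme-gw show` output.
# show_output stands in for `ceph nvme-gw show nvmeof_pool group1`;
# the sample below is abbreviated, not live output.
show_output='{
    "Created Gateways:": [
        { "gw-id": "gw-a", "Availability": "AVAILABLE" },
        { "gw-id": "gw-b", "Availability": "DELETING" }
    ]
}'

deleting_gws() {
    grep -c '"Availability": "DELETING"' <<<"$1"
}
deleting_gws "$show_output"
```

A count of zero, combined with one ACTIVE owner per ana group, indicates the epoch has converged.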




Initiators

before scale down

[root@ceph-8-0-bug-tx4ujy-node11 ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme11n2         /dev/ng11n2           3                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n3         /dev/ng11n3           3                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n4         /dev/ng11n4           3                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n5         /dev/ng11n5           3                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n2         /dev/ng16n2           4                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n3         /dev/ng16n3           4                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n4         /dev/ng16n4           4                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n5         /dev/ng16n5           4                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n1          /dev/ng1n1            1                    Ceph bdev Controller                     0x1          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n2          /dev/ng1n2            1                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n3          /dev/ng1n3            1                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n4          /dev/ng1n4            1                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n2          /dev/ng6n2            2                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n3          /dev/ng6n3            2                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n4          /dev/ng6n4            2                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n5          /dev/ng6n5            2                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01


After scale down -- we lose a few namespaces, as below, compared to the nvme list output above

[root@ceph-8-0-bug-tx4ujy-node11 ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme11n2         /dev/ng11n2           3                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n3         /dev/ng11n3           3                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n4         /dev/ng11n4           3                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n5         /dev/ng11n5           3                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n2         /dev/ng16n2           4                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n3         /dev/ng16n3           4                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n4         /dev/ng16n4           4                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n5         /dev/ng16n5           4                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n1          /dev/ng1n1            1                    Ceph bdev Controller                     0x1          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n2          /dev/ng1n2            1                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n3          /dev/ng1n3            1                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n4          /dev/ng1n4            1                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n2          /dev/ng6n2            2                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n3          /dev/ng6n3            2                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n4          /dev/ng6n4            2                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n5          /dev/ng6n5            2                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01


The newly added GW is not discovered - 10.0.65.140 should have been replaced with the new GW IP and reported with live status
[root@ceph-8-0-bug-tx4ujy-node11 ~]# nvme list-subsys
nvme-subsys16 - NQN=nqn.2016-06.io.spdk:cnode4.group1
                hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
                iopolicy=numa
\
 +- nvme20 tcp traddr=10.0.65.140,trsvcid=4420 connecting
 +- nvme19 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme18 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme17 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme16 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live

nvme-subsys11 - NQN=nqn.2016-06.io.spdk:cnode3.group1
                hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
                iopolicy=numa
\
 +- nvme15 tcp traddr=10.0.65.140,trsvcid=4420 connecting
 +- nvme14 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme13 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme12 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme11 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live

nvme-subsys6 - NQN=nqn.2016-06.io.spdk:cnode2.group1
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
               iopolicy=numa
\
 +- nvme9 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme8 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme7 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme6 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme10 tcp traddr=10.0.65.140,trsvcid=4420 connecting

nvme-subsys1 - NQN=nqn.2016-06.io.spdk:cnode1.group1
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
               iopolicy=numa
\
 +- nvme5 tcp traddr=10.0.65.140,trsvcid=4420 connecting
 +- nvme4 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme3 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme2 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme1 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live
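The stuck `connecting` paths above can be pulled out of the `nvme list-subsys` output with a small filter. A sketch, with `subsys_output` standing in for the live output (abbreviated sample lines copied from this report):

```shell
# Extract the traddr of every path stuck in "connecting" from
# `nvme list-subsys` output; subsys_output is sample data, not live output.
subsys_output=' +- nvme20 tcp traddr=10.0.65.140,trsvcid=4420 connecting
 +- nvme19 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live'

connecting_addrs() {
    # Keep lines ending in "connecting", then strip down to the traddr value.
    awk '/ connecting$/ { sub(/^.*traddr=/, ""); sub(/,.*/, ""); print }' <<<"$1"
}
connecting_addrs "$subsys_output"
```

On a live initiator this would be fed from `nvme list-subsys`, and the printed addresses are the gateways whose listeners are missing.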

Comment 18 Red Hat Bugzilla 2026-03-04 08:55:02 UTC
This product has been discontinued or is no longer tracked in Red Hat Bugzilla.

Comment 19 Red Hat Bugzilla 2026-03-05 04:27:47 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days or the product is inactive and locked