Bug 2317173 - IOs fail on a newly added GW when it is assigned the same grp id as a deleting (scaled-down) GW because listener addition is not handled [NEEDINFO]
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NVMeOF
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 8.0z2
Assignee: Leonid
QA Contact: Rahul Lepakshi
Docs Contact: ceph-doc-bot
URL:
Whiteboard:
Depends On:
Blocks: 2317218
 
Reported: 2024-10-08 08:27 UTC by Rahul Lepakshi
Modified: 2025-04-12 08:28 UTC (History)
14 users

Fixed In Version: ceph-nvmeof-container-1.3.4-1
Doc Type: Known Issue
Doc Text:
.Adding listeners for other hosts is now supported
Previously, if the number of NVMe-oF gateways within a gateway group was reduced and then increased (a scale-down followed by a scale-up), the new gateway nodes experienced I/O disruptions. With the fix in gateway version 1.3.4, adding listeners for other hosts is supported: listeners can be defined when replacing a node, and the new node is then applied to the group. For more information, see Managing load balancing with scale-up and scale-down.
Clone Of:
Environment:
Last Closed:
Embargoed:
rlepaksh: needinfo-
rlepaksh: needinfo? (lchernin)
rlepaksh: needinfo? (acaro)




Links
Red Hat Issue Tracker RHCEPH-9942 (Last Updated: 2024-10-08 08:29:57 UTC)

Description Rahul Lepakshi 2024-10-08 08:27:56 UTC
Description of problem:
Refer to https://bugzilla.redhat.com/show_bug.cgi?id=2310380#c0, step 4: if another GW is added while a Deleting state still exists on a GW, the new GW is assigned the same grp id (failback). The assignment itself happens correctly, but after failback to the newly added GW, IOs cannot continue on the namespaces of this ana_grp because listener addition on the new GW is not handled, so these namespaces cannot be discovered from the initiators.

Does that mean the user has to manually create a listener in this scenario?
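
For illustration only, a hedged sketch of what such a manual listener addition on the new GW node could look like with the nvmeof CLI container (the image, gRPC server address, NQN, and exact flag names are assumptions based on the gateway CLI in general, not taken from this report; <NEW_GW_IP> is a placeholder since the new GW's IP does not appear here):

# Create a listener on the newly added GW (node8) for each subsystem whose
# namespaces belong to its ana_grp; repeat per subsystem NQN as needed.
# Flag names may differ between nvmeof-cli versions -- verify with "listener add --help".
podman run --rm quay.io/ceph/nvmeof-cli:latest \
  --server-address <NEW_GW_IP> --server-port 5500 \
  listener add \
  --subsystem nqn.2016-06.io.spdk:cnode4.group1 \
  --host-name ceph-8-0-bug-tx4ujy-node8 \
  --traddr <NEW_GW_IP> \
  --trsvcid 4420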


Version-Release number of selected component (if applicable):
ceph version 19.2.0-10.el9cp 
container_image_nvmeof         cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.3.2-11

How reproducible: Always


Steps to Reproduce:
1. Perform a scale down of GW(s) within a GW group using the ceph orch apply command,
   following the steps at https://bugzilla.redhat.com/show_bug.cgi?id=2310380#c0
2. On the initiator, rediscover the GW group using "nvme discover -t tcp -a 10.0.64.180 -s 8009 -l 1800"
   (see the sketch after these steps)
3. Namespaces assigned to the ana_grp ID of the newly added GW disappear and IOs fail
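
For reference, a minimal sketch of the rediscover/reconnect step on the initiator (the address, port, and timeout values are the ones used in this report):

# Query the discovery service exposed by the GW group (port 8009)
nvme discover -t tcp -a 10.0.64.180 -s 8009 -l 1800

# Connect to any newly reported paths; -l sets the controller loss timeout
# so paths survive brief gateway restarts
nvme connect-all -t tcp -a 10.0.64.180 -s 8009 -l 1800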

Actual results: namespaces assigned to the ana_grp ID of the newly added GW disappear and IOs fail


Expected results: After step 4 (failback) of https://bugzilla.redhat.com/show_bug.cgi?id=2310380#c0 and re-discovery of the GWs from the initiator, namespaces assigned to the ana_grp ID of the newly added GW should not disappear and IOs should seamlessly resume


Additional info:

Gateway

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 74,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 5 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node7.wuqhxy",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        }
    ]
}
[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph orch apply nvmeof nvmeof_pool group1 --placement='ceph-8-0-bug-tx4ujy-node3 ceph-8-0-bug-tx4ujy-node4 ceph-8-0-bug-tx4ujy-node5 ceph-8-0-bug-tx4ujy-node6'
Scheduled nvmeof.nvmeof_pool.group1 update...
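
For reference, a minimal sketch of the same scale-down placement expressed as a cephadm service specification and applied with "ceph orch apply -i <file>" (the file name is illustrative and the spec fields should be verified against your cephadm version):

# Write a service spec equivalent to the --placement form above and apply it
cat > nvmeof-group1.yaml <<'EOF'
service_type: nvmeof
service_id: nvmeof_pool.group1
placement:
  hosts:
    - ceph-8-0-bug-tx4ujy-node3
    - ceph-8-0-bug-tx4ujy-node4
    - ceph-8-0-bug-tx4ujy-node5
    - ceph-8-0-bug-tx4ujy-node6
spec:
  pool: nvmeof_pool
  group: group1
EOF
ceph orch apply -i nvmeof-group1.yaml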

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 76,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node7.wuqhxy",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 0,
            "Availability": "DELETING",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        }
    ]
}

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph orch apply nvmeof nvmeof_pool group1 --placement='ceph-8-0-bug-tx4ujy-node3 ceph-8-0-bug-tx4ujy-node4 ceph-8-0-bug-tx4ujy-node5 ceph-8-0-bug-tx4ujy-node6 ceph-8-0-bug-tx4ujy-node8'
Scheduled nvmeof.nvmeof_pool.group1 update...
[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 76,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node7.wuqhxy",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 0,
            "Availability": "DELETING",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        }
    ]
}

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 78,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 5 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: WAIT_FAILBACK_PREPARED "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node8.agdzmb",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: OWNER_FAILBACK_PREPARED "
        }
    ]
}

[ceph: root@ceph-8-0-bug-tx4ujy-node1-installer /]# ceph nvme-gw show nvmeof_pool group1
{
    "epoch": 79,
    "pool": "nvmeof_pool",
    "group": "group1",
    "features": "LB",
    "num gws": 5,
    "Anagrp list": "[ 1 2 3 4 5 ]",
    "num-namespaces": 20,
    "Created Gateways:": [
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node3.cabany",
            "anagrp-id": 1,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: ACTIVE ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node4.aojsfn",
            "anagrp-id": 2,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: ACTIVE ,  3: STANDBY ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node5.sgkpbr",
            "anagrp-id": 3,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: ACTIVE ,  4: STANDBY ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node6.wxlqie",
            "anagrp-id": 4,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: ACTIVE ,  5: STANDBY "
        },
        {
            "gw-id": "client.nvmeof.nvmeof_pool.group1.ceph-8-0-bug-tx4ujy-node8.agdzmb",
            "anagrp-id": 5,
            "num-namespaces": 4,
            "performed-full-startup": 1,
            "Availability": "AVAILABLE",
            "ana states": " 1: STANDBY ,  2: STANDBY ,  3: STANDBY ,  4: STANDBY ,  5: ACTIVE "
        }
    ]
}




Initiators

Before scale down

[root@ceph-8-0-bug-tx4ujy-node11 ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme11n2         /dev/ng11n2           3                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n3         /dev/ng11n3           3                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n4         /dev/ng11n4           3                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n5         /dev/ng11n5           3                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n2         /dev/ng16n2           4                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n3         /dev/ng16n3           4                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n4         /dev/ng16n4           4                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n5         /dev/ng16n5           4                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n1          /dev/ng1n1            1                    Ceph bdev Controller                     0x1          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n2          /dev/ng1n2            1                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n3          /dev/ng1n3            1                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n4          /dev/ng1n4            1                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n2          /dev/ng6n2            2                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n3          /dev/ng6n3            2                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n4          /dev/ng6n4            2                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n5          /dev/ng6n5            2                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01


After scale down -- compared to the nvme list above, a few namespaces are lost (a diff sketch follows the listing below)

[root@ceph-8-0-bug-tx4ujy-node11 ~]# nvme list
Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev
--------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
/dev/nvme11n2         /dev/ng11n2           3                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n3         /dev/ng11n3           3                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n4         /dev/ng11n4           3                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme11n5         /dev/ng11n5           3                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n2         /dev/ng16n2           4                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n3         /dev/ng16n3           4                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n4         /dev/ng16n4           4                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme16n5         /dev/ng16n5           4                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n1          /dev/ng1n1            1                    Ceph bdev Controller                     0x1          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n2          /dev/ng1n2            1                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n3          /dev/ng1n3            1                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme1n4          /dev/ng1n4            1                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n2          /dev/ng6n2            2                    Ceph bdev Controller                     0x2          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n3          /dev/ng6n3            2                    Ceph bdev Controller                     0x3          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n4          /dev/ng6n4            2                    Ceph bdev Controller                     0x4          1.10  TB /   1.10  TB    512   B +  0 B   24.01
/dev/nvme6n5          /dev/ng6n5            2                    Ceph bdev Controller                     0x5          1.10  TB /   1.10  TB    512   B +  0 B   24.01
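
A simple way to confirm which namespaces disappeared is to capture the device column of "nvme list" before and after the scale operation and compare the two captures; a minimal sketch (file names are illustrative):

# Capture the /dev/nvmeXnY device names before and after the scale operation
nvme list | awk 'NR>2 {print $1}' | sort > nvme_before.txt
# ... perform the scale down / scale up ...
nvme list | awk 'NR>2 {print $1}' | sort > nvme_after.txt

# Devices present before but missing after are the lost namespaces
comm -23 nvme_before.txt nvme_after.txt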


The newly added GW is not discovered - 10.0.65.140 should have been replaced with the new GW's IP, shown with live status (a manual connect sketch follows the output below)
[root@ceph-8-0-bug-tx4ujy-node11 ~]# nvme list-subsys
nvme-subsys16 - NQN=nqn.2016-06.io.spdk:cnode4.group1
                hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
                iopolicy=numa
\
 +- nvme20 tcp traddr=10.0.65.140,trsvcid=4420 connecting
 +- nvme19 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme18 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme17 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme16 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live

nvme-subsys11 - NQN=nqn.2016-06.io.spdk:cnode3.group1
                hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
                iopolicy=numa
\
 +- nvme15 tcp traddr=10.0.65.140,trsvcid=4420 connecting
 +- nvme14 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme13 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme12 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme11 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live

nvme-subsys6 - NQN=nqn.2016-06.io.spdk:cnode2.group1
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
               iopolicy=numa
\
 +- nvme9 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme8 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme7 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme6 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme10 tcp traddr=10.0.65.140,trsvcid=4420 connecting

nvme-subsys1 - NQN=nqn.2016-06.io.spdk:cnode1.group1
               hostnqn=nqn.2014-08.org.nvmexpress:uuid:851aaba3-6458-4ffd-b4bb-146dba666716
               iopolicy=numa
\
 +- nvme5 tcp traddr=10.0.65.140,trsvcid=4420 connecting
 +- nvme4 tcp traddr=10.0.64.180,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme3 tcp traddr=10.0.65.226,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme2 tcp traddr=10.0.67.122,trsvcid=4420,src_addr=10.0.66.124 live
 +- nvme1 tcp traddr=10.0.67.133,trsvcid=4420,src_addr=10.0.66.124 live
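
As a hedged sketch only: once a listener actually exists on the new gateway, the stuck "connecting" path could be retried with an explicit connect per subsystem. <NEW_GW_IP> is a placeholder, since the new GW's IP does not appear in this output:

# Manually connect this initiator to the new GW's listener for one subsystem;
# repeat for each subsystem NQN. This can only succeed after the listener has
# been created on the new gateway node.
nvme connect -t tcp -a <NEW_GW_IP> -s 4420 \
  -n nqn.2016-06.io.spdk:cnode4.group1 -l 1800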

