Bug 2274703 - Failure adding namespace to subsystem on nvmeof Gateway
Summary: Failure adding namespace to subsystem on nvmeof Gateway
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: NVMeOF
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 7.1
Assignee: Aviv Caro
QA Contact: Manohar Murthy
Docs Contact: ceph-doc-bot
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-04-12 10:23 UTC by Rahul Lepakshi
Modified: 2024-06-13 14:31 UTC
CC: 3 users

Fixed In Version: ceph-18.2.1-149.el9cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-06-13 14:31:34 UTC
Embargoed:
rlepaksh: needinfo-


Attachments:


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker   RHCEPH-8800     0        None      None    None     2024-04-12 10:24:54 UTC
Red Hat Product Errata  RHSA-2024:3925  0        None      None    None     2024-06-13 14:31:36 UTC

Comment 6 Rahul Lepakshi 2024-04-29 06:07:20 UTC
Closing this BZ as the issue was not seen with the latest builds.
Pass logs:
http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/openstack/IBM/7.1/rhel-9/Regression/18.2.1-149/nvmeotcp/105/tier-3_2-nvmeof-gw_8-sub_ns/
http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/openstack/IBM/7.1/rhel-9/Regression/18.2.1-149/nvmeotcp/105/tier-3_2-nvmeof-gw_2-sub_ns/

2024-04-25 13:46:22,406 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:16 - NVMe CLI command : namespace add
2024-04-25 13:46:22,407 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1568 - Running command podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.4-1  --server-address 10.0.195.98 --server-port 5500 namespace add  --rbd-image L5N6-image200 --nsid 200 --rbd-pool rbd --subsystem nqn.2016-06.io.spdk:cnode2 on 10.0.195.98 timeout 600
2024-04-25 13:46:23,540 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1602 - Command completed successfully
2024-04-25 13:46:23,548 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:36 - ('', 'Adding namespace 200 to nqn.2016-06.io.spdk:cnode2, load balancing group 0: Successful\n')
2024-04-25 13:46:23,549 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:16 - NVMe CLI command : namespace list
2024-04-25 13:46:23,550 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1568 - Running command podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.4-1  --format json --server-address 10.0.195.98 --server-port 5500 namespace list  --nsid 200 --subsystem nqn.2016-06.io.spdk:cnode2 on 10.0.195.98 timeout 600
2024-04-25 13:46:24,895 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1602 - Command completed successfully
2024-04-25 13:46:24,896 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:36 - ('', '{\n    "error_message": "Success",\n    "subsystem_nqn": "nqn.2016-06.io.spdk:cnode2",\n    "namespaces": [\n        {\n            "nsid": 200,\n            "bdev_name": "bdev_84e30207-7a60-4657-b126-b2a59d036b76",\n            "rbd_image_name": "L5N6-image200",\n            "rbd_pool_name": "rbd",\n            "load_balancing_group": 1,\n            "block_size": 512,\n            "rbd_image_size": "1099511627776",\n            "uuid": "84e30207-7a60-4657-b126-b2a59d036b76",\n            "rw_ios_per_second": "0",\n            "rw_mbytes_per_second": "0",\n            "r_mbytes_per_second": "0",\n            "w_mbytes_per_second": "0"\n        }\n    ],\n    "status": 0\n}\n')
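
For reference, the verification flow captured in the log above boils down to two nvmeof-cli calls. The sketch below is a plain-shell restatement only; the gateway address, container image, pool, RBD image, and subsystem NQN are reused from this run as placeholder values and would differ in other environments.

    # Placeholder values taken from the run above; substitute your own setup.
    GW=10.0.195.98
    CLI_IMAGE=cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.4-1

    # Add namespace 200, backed by an RBD image, to the target subsystem.
    podman run --quiet --rm "$CLI_IMAGE" \
        --server-address "$GW" --server-port 5500 \
        namespace add --rbd-pool rbd --rbd-image L5N6-image200 \
        --nsid 200 --subsystem nqn.2016-06.io.spdk:cnode2

    # List the namespace back in JSON to confirm it exists and to inspect
    # fields such as load_balancing_group and status.
    podman run --quiet --rm "$CLI_IMAGE" \
        --format json --server-address "$GW" --server-port 5500 \
        namespace list --nsid 200 --subsystem nqn.2016-06.io.spdk:cnode2

If the add succeeds, the list output should report the namespace with "status": 0, as in the JSON above.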

Comment 7 errata-xmlrpc 2024-06-13 14:31:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

