Bug 2274703

Summary: Failure adding namespace to subsystem on nvmeof Gateway
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Rahul Lepakshi <rlepaksh>
Component: NVMeOF
Assignee: Aviv Caro <acaro>
Status: CLOSED ERRATA
QA Contact: Manohar Murthy <mmurthy>
Severity: urgent
Docs Contact: ceph-doc-bot <ceph-doc-bugzilla>
Priority: unspecified
Version: 7.1
CC: akraj, cephqe-warriors, tserlin
Target Milestone: ---
Flags: rlepaksh: needinfo-
Target Release: 7.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-18.2.1-149.el9cp
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2024-06-13 14:31:34 UTC
Type: Bug

Comment 6 Rahul Lepakshi 2024-04-29 06:07:20 UTC
Closing this BZ as the issue was not seen with the latest builds.
Pass logs:
http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/openstack/IBM/7.1/rhel-9/Regression/18.2.1-149/nvmeotcp/105/tier-3_2-nvmeof-gw_8-sub_ns/
http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/openstack/IBM/7.1/rhel-9/Regression/18.2.1-149/nvmeotcp/105/tier-3_2-nvmeof-gw_2-sub_ns/

2024-04-25 13:46:22,406 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:16 - NVMe CLI command : namespace add
2024-04-25 13:46:22,407 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1568 - Running command podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.4-1  --server-address 10.0.195.98 --server-port 5500 namespace add  --rbd-image L5N6-image200 --nsid 200 --rbd-pool rbd --subsystem nqn.2016-06.io.spdk:cnode2 on 10.0.195.98 timeout 600
2024-04-25 13:46:23,540 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1602 - Command completed successfully
2024-04-25 13:46:23,548 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:36 - ('', 'Adding namespace 200 to nqn.2016-06.io.spdk:cnode2, load balancing group 0: Successful\n')
2024-04-25 13:46:23,549 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:16 - NVMe CLI command : namespace list
2024-04-25 13:46:23,550 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1568 - Running command podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.4-1  --format json --server-address 10.0.195.98 --server-port 5500 namespace list  --nsid 200 --subsystem nqn.2016-06.io.spdk:cnode2 on 10.0.195.98 timeout 600
2024-04-25 13:46:24,895 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.ceph.py:1602 - Command completed successfully
2024-04-25 13:46:24,896 (cephci.test_ceph_nvmeof_gateway_sub_scale) [INFO] - cephci.IBM.7.1.rhel-9.Regression.18.2.1-149.nvmeotcp.105.cephci.ceph.nvmegw_cli.execute.py:36 - ('', '{\n    "error_message": "Success",\n    "subsystem_nqn": "nqn.2016-06.io.spdk:cnode2",\n    "namespaces": [\n        {\n            "nsid": 200,\n            "bdev_name": "bdev_84e30207-7a60-4657-b126-b2a59d036b76",\n            "rbd_image_name": "L5N6-image200",\n            "rbd_pool_name": "rbd",\n            "load_balancing_group": 1,\n            "block_size": 512,\n            "rbd_image_size": "1099511627776",\n            "uuid": "84e30207-7a60-4657-b126-b2a59d036b76",\n            "rw_ios_per_second": "0",\n            "rw_mbytes_per_second": "0",\n            "r_mbytes_per_second": "0",\n            "w_mbytes_per_second": "0"\n        }\n    ],\n    "status": 0\n}\n')
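For reference, the passing run above boils down to the two nvmeof-cli invocations captured in the log. Roughly, they can be re-run by hand as below; the gateway address, port, container image tag, pool, RBD image name, namespace ID, and subsystem NQN are taken from this specific run and would differ in other environments:

# Add namespace 200 (backed by RBD image L5N6-image200 in pool rbd) to subsystem cnode2
podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.4-1 \
    --server-address 10.0.195.98 --server-port 5500 \
    namespace add --rbd-image L5N6-image200 --nsid 200 --rbd-pool rbd \
    --subsystem nqn.2016-06.io.spdk:cnode2

# Verify the namespace is listed on the subsystem (JSON output, "status": 0 expected)
podman run --quiet --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.4-1 \
    --format json --server-address 10.0.195.98 --server-port 5500 \
    namespace list --nsid 200 --subsystem nqn.2016-06.io.spdk:cnode2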

Comment 7 errata-xmlrpc 2024-06-13 14:31:34 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925