Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking.

Bug 2283630

Summary: vSphere plugin Datastore expansion is not working
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Krishna Ramaswamy <kramaswa>
Component: NVMeOF
Assignee: Aviv Caro <acaro>
Status: CLOSED ERRATA
QA Contact: Manohar Murthy <mmurthy>
Severity: urgent
Docs Contact: ceph-doc-bot <ceph-doc-bugzilla>
Priority: urgent
Version: 7.1
CC: ceph-eng-bugs, cephqe-warriors, jcaratza, tserlin
Target Milestone: ---
Target Release: 7.1
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-nvmeof-container-1.2.13-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2024-06-13 14:33:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Krishna Ramaswamy 2024-05-28 11:36:24 UTC
root@ibm-storage-ceph-plugin-for-vsphere-1 [ /opt/persistent ]# ibm-plugin status
Plugin Version: 1.0.0.0

Plugin Build: 2024_05_27-1227

Plugin Registered: True

Registered vCenters:
+----------------------------------------+-----------------------------+
|                  FQDN                  |           Username          |
+----------------------------------------+-----------------------------+
| cephqevcenter70.lab.eng.blr.redhat.com |        Administrator        |
+----------------------------------------+-----------------------------+

Network Settings:
+----------+---------------------------------------+
| DHCP     | yes                                   |
| Hostname | ibm-storage-ceph-plugin-for-vsphere-1 |
+----------+---------------------------------------+
root@ibm-storage-ceph-plugin-for-vsphere-1 [ /opt/persistent ]# 

Log Details:

2024-05-28 11:32:06,191 - datastore_endpoints.py[line:258] - vsphere-plugin.datastore_endpoints - INFO : PATCH /api/datastore/expand
2024-05-28 11:32:06,192 - soap_functions.py[line:30] - vsphere-plugin.soap_functions - INFO : Sending clone request to cephqevcenter70.lab.eng.blr.redhat.com.
2024-05-28 11:32:06,269 - ceph_manager.py[line:508] - vsphere-plugin.ceph_manager - INFO : Sending command: https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/subsystem/nqn.2016-06.io.spdk:cnode1/namespace
2024-05-28 11:32:06,339 - ceph_manager.py[line:515] - vsphere-plugin.ceph_manager - INFO : Storage system 9932a3a2-1817-11ef-abae-4c5262033c3d response for command https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/subsystem/nqn.2016-06.io.spdk:cnode1/namespace
2024-05-28 11:32:06,342 - ceph_manager.py[line:542] - vsphere-plugin.ceph_manager - INFO : Sending command: https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/subsystem/nqn.2016-06.io.spdk:cnode1/namespace/7
2024-05-28 11:32:06,365 - ceph_manager.py[line:550] - vsphere-plugin.ceph_manager - INFO : Storage system 9932a3a2-1817-11ef-abae-4c5262033c3d response for command https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/subsystem/nqn.2016-06.io.spdk:cnode1/namespace/7
2024-05-28 11:32:06,366 - ceph_manager.py[line:566] - vsphere-plugin.ceph_manager - ERROR : Caught HTTPStatusError with status_code 400 and detail {"detail": "Failure resizing namespace using NSID 7 on nqn.2016-06.io.spdk:cnode1: new size must be aligned to MiBs", "code": "22", "component": "nvmeof"}
2024-05-28 11:32:06,366 - ceph_exception_manager.py[line:57] - vsphere-plugin.ceph_exception_manager - ERROR : Status code: 400, detail: {"detail": "Failure resizing namespace using NSID 7 on nqn.2016-06.io.spdk:cnode1: new size must be aligned to MiBs", "code": "22", "component": "nvmeof"}
Traceback (most recent call last):
  File "/app/ceph_manager.py", line 555, in _make_patch_request
    response.raise_for_status()
  File "/usr/local/lib/python3.11/site-packages/httpx/_models.py", line 758, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/subsystem/nqn.2016-06.io.spdk:cnode1/namespace/7'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/ceph_exception_manager.py", line 53, in wrapper
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/datastore_endpoints.py", line 33, in make_basic_request
    return await fs.make_basic_request(command)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/ceph_manager.py", line 233, in make_basic_request
    response = await self._make_patch_request(request, headers=headers,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/ceph_manager.py", line 569, in _make_patch_request
    raise ceph_exception.ConnectionErrorException(status_code, detail) from err
ceph_exception_manager.ConnectionErrorException: (400, '{"detail": "Failure resizing namespace using NSID 7 on nqn.2016-06.io.spdk:cnode1: new size must be aligned to MiBs", "code": "22", "component": "nvmeof"}')
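For context, the gateway rejects the resize because the requested namespace size is not a whole number of MiBs. A minimal sketch (a hypothetical helper, not the plugin's actual code) of rounding a requested byte count up to the next MiB boundary before issuing the PATCH:

```python
# Hypothetical illustration of the alignment rule behind the 400 error
# "new size must be aligned to MiBs": the namespace size sent to the
# NVMe-oF gateway must be a whole multiple of 1 MiB.

MIB = 1024 * 1024  # 1 MiB in bytes

def align_to_mib(size_bytes: int) -> int:
    """Round a byte count up to the nearest MiB boundary."""
    return ((size_bytes + MIB - 1) // MIB) * MIB

# One byte over a boundary rounds up; an aligned size is unchanged.
print(align_to_mib(5 * MIB + 1) // MIB)  # 6
print(align_to_mib(5 * MIB) // MIB)      # 5
```

The fixed gateway (1.2.13) accepts the expansion request directly, so callers no longer hit this rejection, but any client constructing resize requests still benefits from sending MiB-aligned sizes.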

Comment 2 Aviv Caro 2024-05-28 16:34:28 UTC
Fixed in GW version 1.2.13.

Comment 3 Krishna Ramaswamy 2024-05-29 15:20:47 UTC
vSphere plugin Datastore expansion is working as expected with latest build: 

[root@cephqe-node2 ~]# nm gw info
CLI's version: 1.2.13
Gateway's version: 1.2.13
Gateway's name: client.nvmeof.rbd.cephqe-node2.djwdil
Gateway's host name: cephqe-node2
Gateway's load balancing group: 1
Gateway's address: 10.70.39.49
Gateway's port: 5500
SPDK version: 24.01.1

Comment 9 errata-xmlrpc 2024-06-13 14:33:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925