Bug 2282560 - vSphere ceph Plugin will not work if we Enable mTLS configuration in ceph cluster
Summary: vSphere ceph Plugin will not work if we Enable mTLS configuration in ceph cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Dashboard
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 7.1z1
Assignee: Nizamudeen
QA Contact: Krishna Ramaswamy
Docs Contact: Anjana Suparna Sriram
URL:
Whiteboard:
Depends On:
Blocks: 2298581
 
Reported: 2024-05-22 14:48 UTC by Krishna Ramaswamy
Modified: 2024-08-07 11:21 UTC
CC List: 9 users

Fixed In Version: ceph-18.2.1-223
Doc Type: Known Issue
Doc Text:
.REST APIs expect `root_ca_cert` to be present in the NVMe-oF specification for mTLS to work

Previously, the REST API expected `root_ca_cert` to be present in the NVMe-oF specification for mTLS to work. Due to this, REST API requests for NVMe-oF would fail when `root_ca_cert` was not provided.

As a workaround, follow the steps below to configure mTLS and enable NVMe-oF requests.

. Add the server certificate content into the `root_ca_cert` attribute.
+
----
root_ca_cert: |
  -----BEGIN CERTIFICATE-----
  MIIFKjCCAxKgAwIBAgIUPwXJd2aunZqKQt1wIRy5KxdGN6UwDQYJKoZIhvcNAQEL
  BQAwEzERMA8GA1UEAwwIYXV0aG5vZGUwHhcNMjQwNzAyMDg0NjU4WhcNMzQwNjMw
  MDg0NjU4WjATMREwDwYDVQQDDAhhdXRobm9kZTCCAiIwDQYJKoZIhvcNAQEBBQAD
  ggIPADCCAgoCggIBANuY5yi2s7NeVgMbqDs4hRCzdvcc2fPil6UUAfcbtptzK9+q
  -----END CERTIFICATE-----
----
. Set the `enable_auth` attribute to `true`.
+
----
enable_auth: true
----
. Verify the NVMe-oF gateway list.
+
----
ceph dashboard nvmeof-gateway-list
{"gateways": {"cephqe-node2": {"service_url": "10.70.39.49:5500"}, "cephqe-node3": {"service_url": "10.70.39.50:5500"}, "cephqe-node5": {"service_url": "10.70.39.52:5500"}, "cephqe-node7": {"service_url": "10.70.39.54:5500"}}}
----
. Remove the per-hostname entries so they can be replaced with the `nvmeof.rbd` service name, using the following commands.
+
----
ceph dashboard nvmeof-gateway-rm cephqe-node2
ceph dashboard nvmeof-gateway-rm cephqe-node3
ceph dashboard nvmeof-gateway-rm cephqe-node5
ceph dashboard nvmeof-gateway-rm cephqe-node7
----
. Add the NVMe-oF gateway URL, along with its service name, to the dashboard.
+
----
ceph dashboard nvmeof-gateway-add nvmeof.rbd -i <(echo 10.70.39.49:5500)
----

Result: mTLS is configured and the NVMe-oF requests work.
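As an additional check (not part of the documented workaround), the gateway's gRPC endpoint can be probed directly to confirm that the mTLS material is accepted, independent of the dashboard. This mirrors what the linked fix does (PR 57712, "mgr/dashboard: use secure_channel for grpc requests"). The following is a minimal Python sketch; the certificate file paths are hypothetical placeholders for the material referenced in the NVMe-oF spec, and the gateway address is taken from the `nvmeof-gateway-list` output above.

----
import grpc

# Hypothetical paths -- substitute the files from your NVMe-oF service
# spec (the same content placed in `root_ca_cert` above).
with open("root.ca.crt", "rb") as f:
    root_ca = f.read()
with open("client.crt", "rb") as f:
    client_cert = f.read()
with open("client.key", "rb") as f:
    client_key = f.read()

# Client-side mTLS credentials: the gateway's root CA plus our key pair.
creds = grpc.ssl_channel_credentials(
    root_certificates=root_ca,
    private_key=client_key,
    certificate_chain=client_cert,
)

# Gateway address taken from the nvmeof-gateway-list output above.
channel = grpc.secure_channel("10.70.39.49:5500", creds)
try:
    grpc.channel_ready_future(channel).result(timeout=5)
    print("mTLS handshake with the NVMe-oF gateway succeeded")
except grpc.FutureTimeoutError:
    print("gateway unreachable or TLS handshake failed")
finally:
    channel.close()
----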
Clone Of:
Environment:
Last Closed: 2024-08-07 11:21:31 UTC
Embargoed:
kramaswa: needinfo+




Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph pull 57712 0 None open mgr/dashboard: use secure_channel for grpc requests 2024-07-11 07:05:37 UTC
Red Hat Issue Tracker RHCEPH-9074 0 None None None 2024-05-22 14:54:33 UTC
Red Hat Issue Tracker RHCSDASH-1464 0 None None None 2024-05-22 19:40:02 UTC
Red Hat Product Errata RHBA-2024:5080 0 None None None 2024-08-07 11:21:38 UTC

Description Krishna Ramaswamy 2024-05-22 14:48:16 UTC
Description of problem:
The vSphere Ceph plugin does not work if mTLS is enabled in the Ceph 7.1 cluster.


Version-Release number of selected component (if applicable): ceph 7.1


Error Log:

2024-05-22 14:19:11,090 - ceph_manager.py[line:508] - vsphere-plugin.ceph_manager - INFO : Sending command: https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/gateway
2024-05-22 14:19:11,108 - ceph_manager.py[line:515] - vsphere-plugin.ceph_manager - INFO : Storage system 9932a3a2-1817-11ef-abae-4c5262033c3d response for command https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/gateway
2024-05-22 14:19:11,109 - ceph_manager.py[line:531] - vsphere-plugin.ceph_manager - ERROR : Caught HTTPStatusError with status_code 504 and detail {"detail": "failed to connect to all addresses", "code": "StatusCode.UNAVAILABLE", "component": "nvmeof"}
2024-05-22 14:19:11,109 - ceph_exception_manager.py[line:57] - vsphere-plugin.ceph_exception_manager - ERROR : Status code: 504, detail: {"detail": "failed to connect to all addresses", "code": "StatusCode.UNAVAILABLE", "component": "nvmeof"}
Traceback (most recent call last):
  File "/app/ceph_manager.py", line 520, in _make_get_request
    response.raise_for_status()
  File "/usr/local/lib/python3.11/site-packages/httpx/_models.py", line 758, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Server error '504 Gateway Timeout' for url 'https://cephqe-node1.lab.eng.blr.redhat.com:8443/api/nvmeof/gateway'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/504

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/ceph_exception_manager.py", line 53, in wrapper
    return await fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/endpoints.py", line 58, in make_basic_request
    return await fs.make_basic_request(command)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/ceph_manager.py", line 367, in make_basic_request
    response = await self._make_get_request(request, headers)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/ceph_manager.py", line 534, in _make_get_request
    raise ceph_exception.ConnectionErrorException(status_code, detail) from err
ceph_exception_manager.ConnectionErrorException: (504, '{"detail": "failed to connect to all addresses", "code": "StatusCode.UNAVAILABLE", "component": "nvmeof"}')
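For context, the traceback implies the following request/error-wrapping flow inside the plugin's `ceph_manager.py`: a non-2xx response (here, the dashboard's 504 for the unreachable NVMe-oF gateway) raises `httpx.HTTPStatusError`, which is chained into the plugin-level `ConnectionErrorException`. The sketch below is a reconstruction from the frames shown, not the plugin's actual source; the class shape and the `verify=False` client setting are assumptions.

----
import httpx

class ConnectionErrorException(Exception):
    """Stand-in for ceph_exception_manager.ConnectionErrorException."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(status_code, detail)
        self.status_code = status_code
        self.detail = detail

async def _make_get_request(url: str, headers: dict | None = None) -> httpx.Response:
    # verify=False is an assumption for a lab cluster with self-signed
    # dashboard certificates; a production deployment would pin the CA.
    async with httpx.AsyncClient(verify=False) as client:
        response = await client.get(url, headers=headers)
        try:
            # Raises httpx.HTTPStatusError for 4xx/5xx responses -- here
            # the dashboard's 504 Gateway Timeout.
            response.raise_for_status()
        except httpx.HTTPStatusError as err:
            # Chain the wrapper exception, matching the `raise ... from err`
            # at ceph_manager.py line 534 in the traceback above.
            raise ConnectionErrorException(response.status_code, response.text) from err
        return response
----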

Comment 4 Mike Burkhart 2024-05-23 19:43:19 UTC
Non-blocker. mTLS to be fixed in z1, post-GA

Comment 5 Krishna Ramaswamy 2024-05-28 06:55:54 UTC
The mTLS issue will be fixed in 7.1z1. Hence, this issue will remain in NEW state for tracking.

Comment 13 errata-xmlrpc 2024-08-07 11:21:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.1 security and bug fix update.), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:5080

