Bug 2298626

Summary: Deployment of the NVMeOF GW service also fails without mTLS in the new RC build
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Krishna Ramaswamy <kramaswa>
Component: NVMeOF
Assignee: Aviv Caro <acaro>
Status: CLOSED ERRATA
QA Contact: Krishna Ramaswamy <kramaswa>
Severity: urgent
Docs Contact: ceph-doc-bot <ceph-doc-bugzilla>
Priority: urgent
Version: 7.1
CC: acaro, cephqe-warriors, sunnagar, tserlin
Target Release: 7.1z1
Hardware: x86_64
OS: Linux
Fixed In Version: ceph-18.2.1-226.el9cp
Last Closed: 2024-08-07 11:20:35 UTC
Type: Bug

Description Krishna Ramaswamy 2024-07-18 08:23:58 UTC
Description of problem:
Deployment of the NVMeOF GW service also fails without mTLS in the new RC build.

Gateway container image: nvmeof-rhel9:1.2.16-3


[root@cephqe-node1 ~]# ceph versions
{
    "mon": {
        "ceph version 18.2.1-224.el9cp (e65d95a3893a13895a9089eedaa7d34a37f1003b) reef (stable)": 5
    },
    "mgr": {
        "ceph version 18.2.1-224.el9cp (e65d95a3893a13895a9089eedaa7d34a37f1003b) reef (stable)": 2
    },
    "osd": {
        "ceph version 18.2.1-224.el9cp (e65d95a3893a13895a9089eedaa7d34a37f1003b) reef (stable)": 19
    },
    "overall": {
        "ceph version 18.2.1-224.el9cp (e65d95a3893a13895a9089eedaa7d34a37f1003b) reef (stable)": 26
    }
}
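
All daemons report the same 18.2.1-224 build. On larger clusters the same check can be scripted over the JSON that ceph versions already prints (a small sketch; assumes jq is available on the admin node):

# One line per distinct build; more than one line means a mixed-version cluster.
ceph versions | jq -r '.overall | keys[]'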


Error:

[root@cephqe-node2 nvmeof.rbd.cephqe-node2.jgzjmj]# podman run --rm cp.stg.icr.io/cp/ibm-ceph/nvmeof-cli-rhel9:1.2.16-3 --server-address 10.70.39.49 --server-port 5500 gw info
Failure getting gateway's information:
<_InactiveRpcError of RPC that terminated with:
        status = StatusCode.UNAVAILABLE
        details = "failed to connect to all addresses; last error: UNKNOWN: ipv4:10.70.39.49:5500: Failed to connect to remote host: Connection refused"
        debug_error_string = "UNKNOWN:failed to connect to all addresses; last error: UNKNOWN: ipv4:10.70.39.49:5500: Failed to connect to remote host: Connection refused {grpc_status:14, created_time:"2024-07-18T08:14:35.10719355+00:00"}"
>
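
The "Connection refused" status means nothing is accepting connections on 10.70.39.49:5500 at all (the gateway's gRPC server never came up), rather than an mTLS handshake being rejected. A quick triage sketch on the affected gateway node (cephqe-node2 here; the podman name filter is an assumption based on the service name above):

# Is anything listening on the gRPC port? No output means the server never bound it.
ss -tlnp | grep 5500
# Is the gateway container running at all, and what do its last log lines say?
podman ps --filter name=nvmeof
podman logs --tail 50 $(podman ps -q --filter name=nvmeof)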

Comment 1 Krishna Ramaswamy 2024-07-18 08:34:15 UTC
[root@cephqe-node1 ~]# ceph orch ls
NAME                       PORTS             RUNNING  REFRESHED  AGE  PLACEMENT                                            
alertmanager               ?:9093,9094           1/1  102s ago   2w   count:1                                              
ceph-exporter                                    5/5  2m ago     2w   *                                                    
crash                                            5/5  2m ago     2w   *                                                    
grafana                    ?:3000                1/1  102s ago   2w   count:1                                              
mgr                                              2/2  2m ago     2w   count:2                                              
mon                                              5/5  2m ago     2w   count:5                                              
node-exporter              ?:9100                5/5  2m ago     2w   *                                                    
node-proxy                                       0/0  -          2w   *                                                    
nvmeof.rbd                 ?:4420,5500,8009      4/4  103s ago   65m  cephqe-node2;cephqe-node3;cephqe-node5;cephqe-node7  
osd.all-available-devices                         19  2m ago     2w   *                                                    
prometheus                 ?:9095                1/1  102s ago   2w   count:1                                              
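
cephadm reports the nvmeof.rbd service as 4/4 RUNNING, so the containers stay up even though the gRPC listener is unreachable. To rule out a spec problem, the applied service spec can be exported and inspected; a sketch (assuming enable_auth is the spec key that toggles mTLS for the nvmeof service):

# Dump the applied spec; for a no-mTLS deployment there should be no "enable_auth: true".
ceph orch ls nvmeof --export
# Per-daemon view: status, version, and when each daemon last restarted.
ceph orch ps --daemon-type nvmeof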



[root@cephqe-node1 ~]#  ceph nvme-gw show rbd ''
{
    "epoch": 388,
    "pool": "rbd",
    "group": "",
    "num gws": 8,
    "Anagrp list": "[ 1 2 3 4 5 6 7 8 ]"
}
{
    "gw-id": "client.nvmeof.rbd.cephqe-node2.nxqyzv",
    "anagrp-id": 1,
    "performed-full-startup": 0,
    "Availability": "UNAVAILABLE",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
{
    "gw-id": "client.nvmeof.rbd.cephqe-node3.ttznty",
    "anagrp-id": 2,
    "performed-full-startup": 0,
    "Availability": "UNAVAILABLE",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
{
    "gw-id": "client.nvmeof.rbd.cephqe-node5.kjplpx",
    "anagrp-id": 3,
    "performed-full-startup": 0,
    "Availability": "UNAVAILABLE",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
{
    "gw-id": "client.nvmeof.rbd.cephqe-node7.kkkboq",
    "anagrp-id": 4,
    "performed-full-startup": 0,
    "Availability": "UNAVAILABLE",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
{
    "gw-id": "rbd.cephqe-node2.jgzjmj",
    "anagrp-id": 5,
    "performed-full-startup": 1,
    "Availability": "CREATED",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
{
    "gw-id": "rbd.cephqe-node3.konuka",
    "anagrp-id": 6,
    "performed-full-startup": 1,
    "Availability": "CREATED",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
{
    "gw-id": "rbd.cephqe-node5.obwror",
    "anagrp-id": 7,
    "performed-full-startup": 1,
    "Availability": "CREATED",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
{
    "gw-id": "rbd.cephqe-node7.cpjuno",
    "anagrp-id": 8,
    "performed-full-startup": 1,
    "Availability": "CREATED",
    "ana states": " 1: STANDBY , 2: STANDBY , 3: STANDBY , 4: STANDBY , 5: STANDBY , 6: STANDBY , 7: STANDBY , 8: STANDBY ,"
}
[root@cephqe-node1 ~]#
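
Note that "num gws" is 8 even though only four gateways are deployed: the monitor is still tracking four client.nvmeof.* registrations, apparently left over from an earlier deployment attempt, as UNAVAILABLE alongside the four freshly CREATED ones, and none of the eight ever reaches ACTIVE. A small sketch to tally the availability states (assuming, as shown above, that ceph nvme-gw show emits a stream of JSON objects, one header plus one per gateway, so jq -s can slurp them):

ceph nvme-gw show rbd '' | jq -s '[.[] | select(has("gw-id"))] | group_by(.Availability) | map({state: .[0].Availability, count: length})'
# Expected here: 4 CREATED and 4 UNAVAILABLE, with zero ACTIVE.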

Comment 11 errata-xmlrpc 2024-08-07 11:20:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.1 security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:5080