Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1914726

Summary: [cephadm] 5.0 - ISCSI gwcli getting a response 400 bad request
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Harish Munjulur <hmunjulu>
Component: Cephadm
Assignee: Juan Miguel Olmo <jolmomar>
Status: CLOSED ERRATA
QA Contact: Gopi <gpatta>
Severity: high
Docs Contact: Karen Norteman <knortema>
Priority: unspecified
Version: 5.0
CC: gpatta, jolmomar, kdreyer, pcuzner, tserlin, vereddy
Target Release: 5.0
Hardware: x86_64
OS: Linux
Fixed In Version: ceph-16.2.0-35.el8cp
Last Closed: 2021-08-30 08:27:34 UTC
Type: Bug
Bug Blocks: 1832787

Description Harish Munjulur 2021-01-11 02:48:10 UTC
Description of problem: gwcli sends a request to the iscsi REST API and gets back a 400 (Bad Request):
2021-01-10 22:05:43,282    ERROR [_internal.py:87:_log()] - ::1 - - [10/Jan/2021 22:05:43] code 400, message Bad request syntax 


Version-Release number of selected component (if applicable):
ceph version 16.0.0-8505.el8cp (5430400a3dcc716a1695de2003d73ea182178926) pacific (dev)

How reproducible:

Steps to Reproduce:
1. Install a bootstrap cluster with cephadm and the dashboard service enabled.
2. # cephadm shell
3. Create an iscsi pool and enable the rbd application on it.
4. Label a couple of nodes with iscsi, then:
5. # ceph orch apply iscsi --pool iscsi --api_user admin --api_password admin --placement="label:iscsi"
6. Check that the service is up; you can look for the config object: # rados -p iscsi ls
7. Fix the auth caps (only the client id needs to match your deployment):
ceph auth caps client.iscsi.iscsi.maint-2.sipkou mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' mgr 'allow *' osd 'allow rwx'
8. Run gwcli.
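The client entity in step 7 is derived from the daemon name that cephadm assigns (visible in `ceph orch ps`). A minimal sketch of that mapping — the `iscsi_entity` helper is ours for illustration, not a cephadm command:

```shell
# Hypothetical helper: build the cephx entity name for an iscsi daemon from
# its cephadm daemon name, which follows the pattern
# iscsi.<service_id>.<host>.<random-suffix>. The entity used in step 7 is
# simply that daemon name with a "client." prefix.
iscsi_entity() {
  printf 'client.%s\n' "$1"
}

# Sketch of step 7 using the helper (requires a running cluster):
#   ceph auth caps "$(iscsi_entity iscsi.iscsi.maint-2.sipkou)" \
#     mon 'profile rbd, allow command "osd blocklist", allow command "config-key get" with "key" prefix "iscsi/"' \
#     mgr 'allow *' osd 'allow rwx'
```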

Expected results: Should enter into the GWCLI interface.


Actual results:
[ceph: root@magna022 /]# gwcli
Unable to access the configuration object : REST API failure, code : 500
GatewayError: 

2021-01-10 22:05:43,282    ERROR [_internal.py:87:_log()] - ::1 - - [10/Jan/2021 22:05:43] code 400, message Bad request syntax ("\x16\x03\x01\x02\x00\x01\x00\x01ü\x03\x03\x8aµ\x1fKé\x9f½Á¨O\tCf\x86'È\x88\x9e®u\rå \x14qã\x1c\x0f\x87\x17), Û°FZ/Xʲl\tùN \x1b\x9fJòº\x18Ó¾")
2021-01-10 22:05:43,284     INFO [_internal.py:87:_log()] - ::1 - - [10/Jan/2021 22:05:43] "<binary request bytes>" HTTPStatus.BAD_REQUEST -
2021-01-10 22:35:41,041    ERROR [_internal.py:87:_log()] - ::1 - - [10/Jan/2021 22:35:41] code 400, message Bad request syntax ('\x16\x03\x01\x02\x00\x01\x00\x01ü\x03\x03·Ð\x06\x15)\x9c¯éH¨Ä\x0cP\x18Sn\x99làÄãí:H4º/\r´\x10I\x05 \x19¥Mï]xÉGËI"ÝÄÉÛæ\x7f¿æû]÷\x06Ú\x8a\x02')
2021-01-10 22:35:41,043     INFO [_internal.py:87:_log()] - ::1 - - [10/Jan/2021 22:35:41] "<binary request bytes>" HTTPStatus.BAD_REQUEST -

Additional info:

ceph -s

cluster:
    id:     8d835b22-49a4-11eb-9589-002590fc2a2e
    health: HEALTH_WARN
            1 pool(s) do not have an application enabled
 
  services:
    mon: 3 daemons, quorum magna021.ceph.redhat.com,magna023,magna022 (age 12d)
    mgr: magna021.ceph.redhat.com.wrfruo(active, since 12d), standbys: magna023.pvhlbl
    mds: cephfs:1 {0=cephfs.magna026.isexai=up:active} 2 up:standby
    osd: 35 osds: 35 up (since 12d), 35 in (since 12d)
 
  data:
    pools:   7 pools, 193 pgs
    objects: 375 objects, 1.3 GiB
    usage:   8.4 GiB used, 35 TiB / 35 TiB avail
    pgs:     193 active+clean
 
  io:
    client:   2.7 KiB/s rd, 2 op/s rd, 0 op/s wr


# ceph orch ls


NAME                     RUNNING  REFRESHED  AGE  PLACEMENT                                                                           IMAGE NAME                                                                                                      IMAGE ID      
alertmanager                 0/1  -          -    count:1                                                                             <unknown>                                                                                                       <unknown>     
crash                      13/13  5m ago     12d  *                                                                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  b1aab5c54f14  
grafana                      0/1  -          -    count:1                                                                             <unknown>                                                                                                       <unknown>     
iscsi.iscsi                  2/2  5m ago     3d   label:iscsi                                                                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  b1aab5c54f14  
mds.cephfs                   3/3  5m ago     12d  magna024.ceph.redhat.com;magna025.ceph.redhat.com;magna026.ceph.redhat.com;count:3  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  b1aab5c54f14  
mgr                          2/2  5m ago     12d  count:2                                                                             registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  b1aab5c54f14  
mon                          3/3  5m ago     12d  magna021.ceph.redhat.com;magna022.ceph.redhat.com;magna023.ceph.redhat.com          registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  b1aab5c54f14  
nfs.ganesha-nfs-ganesha      2/2  5m ago     12d  magna025.ceph.redhat.com;magna026.ceph.redhat.com;count:2                           registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  b1aab5c54f14  
node-exporter              13/13  5m ago     12d  *                                                                                   registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5                                                 1dc3f4999f10  
osd.None                    35/0  5m ago     -    <unmanaged>                                                                         registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  b1aab5c54f14  
prometheus                   0/1  -          -    count:1                                                                             <unknown>                                                                                                       <unknown>     


[root@magna022 ~]# podman ps
CONTAINER ID  IMAGE                                                                                                           COMMAND               CREATED         STATUS             PORTS   NAMES
04c5fed33e8f  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  -n mon.magna022 -...  12 days ago     Up 12 days ago             ceph-8d835b22-49a4-11eb-9589-002590fc2a2e-mon.magna022
17c9ea7c4835  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522                        2 hours ago     Up 2 hours ago             ceph-8d835b22-49a4-11eb-9589-002590fc2a2e-iscsi.iscsi.magna022.cehzcm
2efa40317544  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522                        2 hours ago     Up 2 hours ago             ceph-8d835b22-49a4-11eb-9589-002590fc2a2e-iscsi.iscsi.magna022.cehzcm-tcmu
63949ab33a09  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522                        18 minutes ago  Up 18 minutes ago          zen_leakey
73d6c4315c09  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522                        12 days ago     Up 12 days ago             focused_hellman
857afaa2d738  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522                        3 minutes ago   Up 3 minutes ago           great_cerf
a28d16e522df  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522  -n client.crash.m...  12 days ago     Up 12 days ago             ceph-8d835b22-49a4-11eb-9589-002590fc2a2e-crash.magna022
ca9567796816  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-99351-20201222030522                        3 days ago      Up 3 days ago              affectionate_vaughan
e38c51012bb4  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5                                                 --no-collector.ti...  12 days ago     Up 12 days ago             ceph-8d835b22-49a4-11eb-9589-002590fc2a2e-node-exporter.magna022

Comment 5 Yaniv Kaul 2021-04-18 15:30:28 UTC
Missing devel-ack

Comment 11 Harish Munjulur 2021-05-20 06:27:29 UTC
FAILED on ceph version 16.2.0-38.el8cp (54fb2271e5015808565bc05b6877deb6bf3f5da9) pacific (stable) - still not able to enter GWCLI inside the cephadm shell.

[ceph: root@ceph-hmunjulu-1621484613681-node1-mon-mgr-installer-node-export ~]# gwcli -d
Adding ceph cluster 'ceph' to the UI
Fetching ceph osd information
Querying ceph for state information
REST API failure, code : 500
Unable to access the configuration object
Traceback (most recent call last):
  File "/usr/bin/gwcli", line 194, in <module>
    main()
  File "/usr/bin/gwcli", line 108, in main
    "({})".format(settings.config.api_endpoint))
AttributeError: 'Settings' object has no attribute 'api_endpoint'


More information:
iscsi.yaml contents

service_type: iscsi
service_id: iscsi
placement:
  hosts:
  - ceph-hmunjulu-1621484613681-node1-mon-mgr-installer-node-export
  - ceph-hmunjulu-1621484613681-node2-mon-mds-node-exporter-alertma
spec:
  pool: iscsi_pool
  trusted_ip_list: "10.0.210.226, 10.0.210.1, 2620:52:0:d0:f816:3eff:fe67:ab83, fe80::f816:3eff:fe50:e6e4"


[ceph: root@ceph-hmunjulu-1621484613681-node1-mon-mgr-installer-node-export ~]# ceph versions
{
    "mon": {
        "ceph version 16.2.0-38.el8cp (54fb2271e5015808565bc05b6877deb6bf3f5da9) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.0-38.el8cp (54fb2271e5015808565bc05b6877deb6bf3f5da9) pacific (stable)": 1
    },
    "osd": {
        "ceph version 16.2.0-38.el8cp (54fb2271e5015808565bc05b6877deb6bf3f5da9) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.0-38.el8cp (54fb2271e5015808565bc05b6877deb6bf3f5da9) pacific (stable)": 2
    },
    "overall": {
        "ceph version 16.2.0-38.el8cp (54fb2271e5015808565bc05b6877deb6bf3f5da9) pacific (stable)": 18
    }
}


[ceph: root@ceph-hmunjulu-1621484613681-node1-mon-mgr-installer-node-export /]# ceph orch ls
NAME                       RUNNING  REFRESHED  AGE   PLACEMENT                                                                                                                        
iscsi.iscsi                    2/2  63m ago    42m   ceph-hmunjulu-1621484613681-node1-mon-mgr-installer-node-export;ceph-hmunjulu-1621484613681-node2-mon-mds-node-exporter-alertma  
mds.cephfs                     2/2  65m ago    101m  ceph-hmunjulu-1621484613681-node4-osd-node-exporter-crash;ceph-hmunjulu-1621484613681-node5-osd-node-exporter-crash              
mgr                            1/1  63m ago    104m  label:mgr                                                                                                                        
mon                            3/3  69m ago    103m  label:mon                                                                                                                        
osd.all-available-devices    12/20  66m ago    101m  *

Comment 12 Juan Miguel Olmo 2021-05-20 07:10:09 UTC
GWCLI only works inside the iscsi containers:


- Go to one of the iscsi nodes, for example "ceph-hmunjulu-1621484613681-node1-mon-mgr-installer-node-export"

# ssh ceph-hmunjulu-1621484613681-node1-mon-mgr-installer-node-export

- Get the information of the iscsi container running on the host:

# podman ps -a
CONTAINER ID  IMAGE                                           COMMAND               CREATED       STATUS           PORTS   NAMES
4b5ffb814409  docker.io/ceph/daemon-base:latest-master-devel                        2 hours ago   Up 2 hours ago           ceph-f838eb7a-597c-11eb-b0a9-525400e2439c-iscsi.iscsi.cephLab2-node-01.anaahg


- Enter the iscsi container using its container id:

# podman exec -it 4b5ffb814409 /bin/bash


- Execute gwcli

# gwcli
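The steps above can be condensed into a small sketch. The `find_iscsi_cid` helper is hypothetical; it assumes cephadm's container naming scheme (`ceph-<fsid>-iscsi.<service>.<host>.<suffix>`, with a separate `-tcmu` sidecar that must be skipped, as seen in the `podman ps` listings in this bug):

```shell
# Hypothetical helper: pick the iscsi gateway container ID from
# `podman ps` output (one "<id> <name>" pair per line), skipping the
# tcmu-runner sidecar whose name contains "-tcmu".
find_iscsi_cid() {
  awk '/-iscsi\./ && $0 !~ /-tcmu/ {print $1; exit}'
}

# Usage on an iscsi node (requires podman and a deployed iscsi service):
#   cid=$(podman ps --format '{{.ID}} {{.Names}}' | find_iscsi_cid)
#   podman exec -it "$cid" gwcli
```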

Comment 13 Gopi 2021-05-31 06:30:06 UTC
Working as per comment #12. Moving to verified state.

Comment 14 Gopi 2021-05-31 06:58:17 UTC
QA steps.

[root@magnaxyz ~]# podman ps -a
CONTAINER ID  IMAGE                                                                                                                  COMMAND               CREATED      STATUS          PORTS   NAMES
d9c2d1bb066d  registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:12916c77358a4b274462899f6995e3dba175abf899ccb27f65e03b5078722b95                        6 days ago   Up 6 days ago           ceph-89a40cc4-b95a-11eb-875d-002590fc2538-iscsi.iscsi.magnaxyz.dwevvm-tcmu
dfcaea8af443  registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:12916c77358a4b274462899f6995e3dba175abf899ccb27f65e03b5078722b95                        6 days ago   Up 6 days ago           ceph-89a40cc4-b95a-11eb-875d-002590fc2538-iscsi.iscsi.magnaxyz.dwevvm
e65f6bcf47ad  registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:12916c77358a4b274462899f6995e3dba175abf899ccb27f65e03b5078722b95                        6 days ago   Up 6 days ago           modest_jemison
67aa566b6037  registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5                                                         --web.listen-addr...  10 days ago  Up 10 days ago          ceph-89a40cc4-b95a-11eb-875d-002590fc2538-alertmanager.magnaxyz
7eb967e70546  registry.redhat.io/openshift4/ose-prometheus:v4.6                                                                      --config.file=/et...  10 days ago  Up 10 days ago          ceph-89a40cc4-b95a-11eb-875d-002590fc2538-prometheus.magnaxyz
c4e7ccec1a91  registry.redhat.io/rhceph-beta/rhceph-5-dashboard-rhel8:latest                                                                               10 days ago  Up 10 days ago          ceph-89a40cc4-b95a-11eb-875d-002590fc2538-grafana.magnaxyz
e90c2dce5bb7  registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5                                                        --no-collector.ti...  10 days ago  Up 10 days ago          ceph-89a40cc4-b95a-11eb-875d-002590fc2538-node-exporter.magnaxyz
996672d4735d  registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:12916c77358a4b274462899f6995e3dba175abf899ccb27f65e03b5078722b95  -n client.crash.m...  10 days ago  Up 10 days ago          ceph-89a40cc4-b95a-11eb-875d-002590fc2538-crash.magnaxyz
b01b1e73b645  registry.redhat.io/rhceph-beta/rhceph-5-rhel8:latest                                                                   -n mgr.magnaxyz.h...  10 days ago  Up 10 days ago          ceph-89a40cc4-b95a-11eb-875d-002590fc2538-mgr.magnaxyz.honowy
727fd1a287e5  registry.redhat.io/rhceph-beta/rhceph-5-rhel8:latest                                                                   -n mon.magnaxyz -...  10 days ago  Up 10 days ago          ceph-89a40cc4-b95a-11eb-875d-002590fc2538-mon.magnaxyz
[root@magnaxyz ~]# podman exec -it  d9c2d1bb066d /bin/bash
[root@magnaxyz /]# gwcli
/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
  o- iqn.2003-01.com.redhat.iscsi-gw:ceph-igw ............................................................ [Auth: None, Gateways: 2]
    o- disks ............................................................................................................ [Disks: 1]
    | o- iscsi_pool/disk_1 ............................................................................... [Owner: magnaxyz, Lun: 0]
    o- gateways .............................................................................................. [Up: 2/2, Portals: 2]
    | o- magnaxyz ................................................................................................ [10.8.128.7 (UP)]
    | o- magnaabc ............................................................................................... [10.8.128.10 (UP)]
    o- host-groups .................................................................................................... [Groups : 0]
    o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 1]
      o- iqn.1994-05.com.redhat:rh7-client .................................................. [LOGGED-IN, Auth: CHAP, Disks: 1(10G)]
        o- lun 0 ......................................................................... [iscsi_pool/disk_1(10G), Owner: magnaxyz]
/iscsi-targets>

Comment 16 errata-xmlrpc 2021-08-30 08:27:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294