Ilya, please take a look at this.
Working as expected, so moving this bug to the VERIFIED state.

systemctl status rbd-target-api
● rbd-target-api.service - RBD Target API Service
   Loaded: loaded (/etc/systemd/system/rbd-target-api.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-06-04 01:45:21 EDT; 1h 1min ago
 Main PID: 95981 (conmon)
    Tasks: 2 (limit: 23976)
   Memory: 2.6M
   CGroup: /system.slice/rbd-target-api.service
           └─95981 /usr/bin/conmon --api-version 1 -c 27de3373c850b1cdb293f67d92cb780088ceb91a721b5b6a20d90fea146496f0 -u 27de3373c850b1cdb293f67d92cb780088ceb91a721b5b6a20>

Jun 04 01:45:21 ceph-gopi-1622725159152-node3-osd-mon-mgr systemd[1]: Starting RBD Target API Service...
Jun 04 01:45:21 ceph-gopi-1622725159152-node3-osd-mon-mgr podman[95888]: Error: no container with name or ID rbd-target-api found: no such container
Jun 04 01:45:21 ceph-gopi-1622725159152-node3-osd-mon-mgr podman[95918]: Error: no container with name or ID rbd-target-api found: no such container
Jun 04 01:45:21 ceph-gopi-1622725159152-node3-osd-mon-mgr podman[95949]: 27de3373c850b1cdb293f67d92cb780088ceb91a721b5b6a20d90fea146496f0
Jun 04 01:45:21 ceph-gopi-1622725159152-node3-osd-mon-mgr systemd[1]: Started RBD Target API Service.
Jun 04 01:45:22 ceph-gopi-1622725159152-node3-osd-mon-mgr conmon[95981]: 2021-06-04 01:45:22 /opt/ceph-container/bin/entrypoint.sh: static: does not generate config
Jun 04 01:45:22 ceph-gopi-1622725159152-node3-osd-mon-mgr conmon[95981]: HEALTH_OK
Jun 04 01:45:22 ceph-gopi-1622725159152-node3-osd-mon-mgr conmon[95981]: 2021-06-04 01:45:22 /opt/ceph-container/bin/entrypoint.sh: SUCCESS
Jun 04 01:45:22 ceph-gopi-1622725159152-node3-osd-mon-mgr conmon[95981]: exec: PID 82: spawning /usr/bin/rbd-target-api

podman exec -it ad132e40f43d sh
sh-4.4# gwcli -d ls
o- / ......................................................................................................................... [...]
  o- cluster ......................................................................................................... [Clusters: 1]
  | o- ceph ............................................................................................................ [HEALTH_OK]
  |   o- pools .......................................................................................................... [Pools: 3]
  |   | o- cephfs_data ........................................................... [(x3), Commit: 0.00Y/56596260K (0%), Used: 0.00Y]
  |   | o- cephfs_metadata ....................................................... [(x3), Commit: 0.00Y/56596260K (0%), Used: 1536K]
  |   | o- rbd .................................................................... [(x3), Commit: 0.00Y/56596260K (0%), Used: 192K]
  |   o- topology ................................................................................................ [OSDs: 9,MONs: 3]
  o- disks ....................................................................................................... [0.00Y, Disks: 0]
  o- iscsi-targets ............................................................................... [DiscoveryAuth: None, Targets: 0]
sh-4.4#
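For anyone re-running this verification, the checks above can be scripted roughly as below. This is a minimal sketch, not part of the fix: the podman name filter is an assumption about how the container is labeled in a given deployment, so substitute the actual container ID (as done interactively above) if it does not match.

#!/bin/sh
# Confirm the systemd unit is active before touching the container.
systemctl is-active --quiet rbd-target-api || { echo "rbd-target-api unit is not active"; exit 1; }

# Resolve a running container ID. The name filter is an assumption;
# adjust it, or hard-code the container ID, for your environment.
CID=$(podman ps --filter name=rbd-target-api --format '{{.ID}}' | head -n 1)
[ -n "$CID" ] || { echo "no matching container found"; exit 1; }

# Dump the iSCSI gateway tree; a healthy setup reports HEALTH_OK
# and the expected pools/targets, as in the output above.
podman exec "$CID" gwcli ls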
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2445