--- Additional comment from Artem Hrechanychenko on 2019-07-18 09:31:19 UTC ---

(undercloud) [stack@undercloud-0 ~]$ rpm -qa ceph-ansible
ceph-ansible-4.0.0-0.1.rc10.el8cp.noarch

failed: [ceph-2 -> 192.168.24.8] (item=[{'application': 'openstack_gnocchi', 'name': 'metrics', 'pg_num': 32, 'rule_name': 'replicated_rule'}, {'msg': 'non-zero return code', 'cmd': ['podman', 'exec', 'ceph-mon-controller-0', 'ceph', '--cluster', 'ceph', 'osd', 'pool', 'get', 'metrics', 'size'], 'stdout': '', 'stderr': 'unable to exec into ceph-mon-controller-0: no container with name or ID ceph-mon-controller-0 found: no such container', 'rc': 125, 'start': '2019-07-17 16:49:47.920625', 'end': '2019-07-17 16:49:47.966148', 'delta': '0:00:00.045523', 'changed': True, 'failed': False, 'invocation': {'module_args': {'_raw_params': 'podman exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size\\n', 'warn': True, '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ['unable to exec into ceph-mon-controller-0: no container with name or ID ceph-mon-controller-0 found: no such container'], 'failed_when_result': False, 'item': {'application': 'openstack_gnocchi', 'name': 'metrics', 'pg_num': 32, 'rule_name': 'replicated_rule'}, 'ansible_loop_var': 'item'}]) => changed=false
  delta: '0:00:00.053923'
  end: '2019-07-17 16:49:49.504360'
  podman exec ceph-mon-controller-0 ceph --cluster ceph osd pool create metrics 32 32 replicated_rule 1
  - application: openstack_gnocchi
  - metrics
  delta: '0:00:00.045523'
  end: '2019-07-17 16:49:47.966148'
  podman exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size
  application: openstack_gnocchi
  name: metrics
  start: '2019-07-17 16:49:47.920625'
  start: '2019-07-17 16:49:49.450437'

[heat-admin@ceph-2 ~]$ sudo podman ps -a
CONTAINER ID  IMAGE                                                COMMAND               CREATED       STATUS           PORTS  NAMES
77e3cf880b9c  192.168.24.1:8787/rhosp15/openstack-cron:20190711.1  dumb-init --singl...  23 hours ago  Up 23 hours ago         logrotate_crond
9947cb175aed  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-8
6321d76031e1  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-5
00ddb30cbf84  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-14
b83a4a18df38  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-11
47242e9e34b7  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-1
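The failure pattern above can be summarized as: ceph-ansible delegated the pool-check task to a node whose container list (the `podman ps -a` output) holds only OSD containers, yet the task execs into `ceph-mon-controller-0`, a mon container that lives on a controller node. A minimal sketch of that mismatch, using only the container names taken from the output above (the variable names here are illustrative, not part of ceph-ansible):

```shell
#!/bin/sh
# Container names as reported by `podman ps -a` on ceph-2 in this bug.
names="logrotate_crond ceph-osd-8 ceph-osd-5 ceph-osd-14 ceph-osd-11 ceph-osd-1"
# The container the ceph-ansible task tried to exec into.
target="ceph-mon-controller-0"

found=no
for n in $names; do
  [ "$n" = "$target" ] && found=yes
done

# The mon container is absent on this node, matching the rc=125
# "no such container" error in the ansible output.
echo "$target present on ceph-2: $found"
```

On a live node the equivalent check would be something like `sudo podman ps -a --format '{{.Names}}' | grep -x ceph-mon-controller-0`, run on the controller rather than an OSD node.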
Updating the QA Contact to Hemant. Hemant will reroute these to the appropriate QE associate. Regards, Giri
Verified
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0312