Bug 1732157

Summary: ceph-osd: OpenStack pool creation fails with "unable to exec into ceph-mon-controller-0: no container with name or ID ceph-mon-controller-0 found: no such container"
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Ansible
Status: CLOSED ERRATA
Severity: medium
Priority: urgent
Version: 4.0
Target Milestone: rc
Target Release: 4.0
Hardware: x86_64
OS: Linux
Fixed In Version: ceph-ansible-4.0.5-1.el8cp.noarch.rpm
Doc Type: If docs needed, set a value
Clone Of: 1722066
Last Closed: 2020-01-31 12:46:52 UTC
Reporter: Dimitri Savineau <dsavinea>
Assignee: Dimitri Savineau <dsavinea>
QA Contact: Vasishta <vashastr>
CC: aschoen, ceph-eng-bugs, dsavinea, emacchi, gcharot, gfidente, gmeno, johfulto, nthomas, pgrist, tserlin, yrabl
Keywords: Reopened, Triaged
Bug Blocks: 1642481

Description Dimitri Savineau 2019-07-22 19:57:33 UTC
--- Additional comment from Artem Hrechanychenko on 2019-07-18 09:31:19 UTC ---


(undercloud) [stack@undercloud-0 ~]$ rpm -qa ceph-ansible
ceph-ansible-4.0.0-0.1.rc10.el8cp.noarch


 "failed: [ceph-2 -> 192.168.24.8] (item=[{'application': 'openstack_gnocchi', 'name': 'metrics', 'pg_num': 32, 'rule_name': 'replicated_rule'}, {'msg': 'non-zero return code', 'cmd': ['podman', 'exec', 'ceph-mon-controller-0', 'ce
ph', '--cluster', 'ceph', 'osd', 'pool', 'get', 'metrics', 'size'], 'stdout': '', 'stderr': 'unable to exec into ceph-mon-controller-0: no container with name or ID ceph-mon-controller-0 found: no such container', 'rc': 125, 'start': '201
9-07-17 16:49:47.920625', 'end': '2019-07-17 16:49:47.966148', 'delta': '0:00:00.045523', 'changed': True, 'failed': False, 'invocation': {'module_args': {'_raw_params': 'podman exec ceph-mon-controller-0 ceph --cluster ceph osd pool get
metrics size\\n', 'warn': True, '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lin
es': ['unable to exec into ceph-mon-controller-0: no container with name or ID ceph-mon-controller-0 found: no such container'], 'failed_when_result': False, 'item': {'application': 'openstack_gnocchi', 'name': 'metrics', 'pg_num': 32, 'r
ule_name': 'replicated_rule'}, 'ansible_loop_var': 'item'}]) => changed=false ",
"  delta: '0:00:00.053923'",
        "  end: '2019-07-17 16:49:49.504360'",
        "        podman exec ceph-mon-controller-0 ceph --cluster ceph osd pool create metrics 32 32 replicated_rule 1",
        "  - application: openstack_gnocchi",
        "    - metrics",
        "    delta: '0:00:00.045523'",
        "    end: '2019-07-17 16:49:47.966148'",
        "          podman exec ceph-mon-controller-0 ceph --cluster ceph osd pool get metrics size",
        "      application: openstack_gnocchi",
        "      name: metrics",
        "    start: '2019-07-17 16:49:47.920625'",
        "  start: '2019-07-17 16:49:49.450437'",

[heat-admin@ceph-2 ~]$ sudo podman ps -a
CONTAINER ID  IMAGE                                                COMMAND               CREATED       STATUS           PORTS  NAMES
77e3cf880b9c  192.168.24.1:8787/rhosp15/openstack-cron:20190711.1  dumb-init --singl...  23 hours ago  Up 23 hours ago         logrotate_crond
9947cb175aed  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-8
6321d76031e1  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-5
00ddb30cbf84  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-14
b83a4a18df38  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-11
47242e9e34b7  192.168.24.1:8787/ceph/rhceph-4.0-rhel8:latest       /opt/ceph-contain...  23 hours ago  Up 23 hours ago         ceph-osd-1
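
Note that this listing, taken on ceph-2, shows only OSD and logrotate containers; no ceph-mon-* container exists under the expected name where the exec is attempted, which is consistent with the "no such container" error. A hedged way to check which node actually runs the monitor container and what it is named (hypothetical diagnostic commands, not part of the original report):

    # On each controller/mon node: list running ceph-mon containers by name
    sudo podman ps --filter name=ceph-mon --format '{{.Names}}'

    # Compare with the short hostname that ceph-ansible is assumed to use
    # when composing the expected container name ceph-mon-<hostname>
    hostname -s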

Comment 1 Giridhar Ramaraju 2019-08-05 13:12:17 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri

Comment 10 Yogev Rabl 2020-01-21 18:57:09 UTC
Verified

Comment 12 errata-xmlrpc 2020-01-31 12:46:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312