Bug 1670663 - [Ceph-Ansible][ceph-containers] Add new OSD node to the existing ceph cluster is failing with '--limit osds' option
Summary: [Ceph-Ansible][ceph-containers] Add new OSD node to the existing ceph cluster...
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Target Milestone: z2
Target Release: 3.2
Assignee: Guillaume Abrioux
QA Contact: Vasishta
Docs Contact: Bara Ancincova
Duplicates: 1670661
Depends On:
Blocks: 1629656 1671454
Reported: 2019-01-30 05:15 UTC by Shreekar
Modified: 2019-05-06 12:55 UTC
CC: 13 users

Fixed In Version: RHEL: ceph-ansible-3.2.9-1.el7cp Ubuntu: ceph-ansible_3.2.9-2redhat1
Doc Type: Bug Fix
Doc Text:
.The `--limit osds` option now works as expected
Previously, an attempt to add OSDs by using the `--limit osds` option failed on containerized setups. The underlying source code has been modified, and adding OSDs with `--limit osds` now works as expected.
Clone Of:
Last Closed: 2019-04-30 15:56:46 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Github ceph/ceph-ansible pull 3670 0 None closed Automatic backport of pull request #3668 2020-11-26 17:36:48 UTC
Github ceph/ceph-ansible pull 3668 0 None None None 2020-06-29 18:57:14 UTC
Red Hat Product Errata RHSA-2019:0911 0 None None None 2019-04-30 15:57:00 UTC

Description Shreekar 2019-01-30 05:15:42 UTC
Description of problem:
Adding a new OSD node to an existing containerized Ceph cluster is failing in the ceph-ansible task 'ceph-mds : create filesystem pools' (see the log under Actual results).

Version-Release number of selected component (if applicable):

ceph version 12.2.8-52.el7cp (3af3ca15b68572a357593c261f95038d02f46201) luminous (stable)

How reproducible:


Steps to Reproduce:
1. On the existing Ceph cluster, add an entry for the new OSD node to the inventory file, then run the Ansible playbook with the '--limit osds' option (a sketch follows below).
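
For reference, a minimal sketch of the inventory change and playbook invocation, assuming a plain INI inventory at /usr/share/ceph-ansible/hosts; the host names below are hypothetical, not taken from this setup:

  # /usr/share/ceph-ansible/hosts -- [osds] group after adding the new node
  # (hypothetical host names):
  #
  #   [osds]
  #   existing-osd-node1
  #   existing-osd-node2
  #   new-osd-node3        <-- newly added entry
  #
  # Re-run the playbook limited to the osds group:
  cd /usr/share/ceph-ansible
  export ANSIBLE_STDOUT_CALLBACK=debug   # exported so the callback setting reaches ansible-playbook
  ansible-playbook -vv -i hosts site.yml --limit osds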

Actual results:

Command issued:
cd /usr/share/ceph-ansible ; ANSIBLE_STDOUT_CALLBACK=debug; ansible-playbook -vv -i hosts site.yml --limit osds

2019-01-29 20:22:26,607 - ceph.ceph - INFO - 
TASK [ceph-mds : create filesystem pools] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:4

2019-01-29 20:22:26,607 - ceph.ceph - INFO - Tuesday 29 January 2019  15:22:25 -0500 (0:00:00.127)       0:13:29.170 ******* 

2019-01-29 20:22:26,858 - ceph.ceph - INFO - failed: [ceph-ansible-1548782152719-node4-osdmds -> ceph-ansible-1548782152719-node11-pool] (item={u'name': u'cephfs_data', u'pgs': u'8'}) => {"changed": false, "cmd": "ceph --cluster ceph osd pool create cephfs_data 8", "item": {"name": "cephfs_data", "pgs": "8"}, "msg": "[Errno 2] No such file or directory", "rc": 2}

2019-01-29 20:22:27,057 - ceph.ceph - INFO - failed: [ceph-ansible-1548782152719-node4-osdmds -> ceph-ansible-1548782152719-node11-pool] (item={u'name': u'cephfs_metadata', u'pgs': u'8'}) => {"changed": false, "cmd": "ceph --cluster ceph osd pool create cephfs_metadata 8", "item": {"name": "cephfs_metadata", "pgs": "8"}, "msg": "[Errno 2] No such file or directory", "rc": 2}
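
The "[Errno 2] No such file or directory" errors above come from the bare `ceph` command; on a containerized deployment the ceph CLI is typically present only inside the monitor container, not on the host. A hedged sketch of the manual equivalent, assuming a docker-based setup and the ceph-ansible container naming convention ceph-mon-<short hostname> (both assumptions; verify with `docker ps`):

  # Hypothetical manual equivalent, run on a monitor node of a containerized cluster:
  MON_CONTAINER="ceph-mon-$(hostname -s)"   # assumed container name; confirm with 'docker ps'
  docker exec "$MON_CONTAINER" ceph --cluster ceph osd pool create cephfs_data 8
  docker exec "$MON_CONTAINER" ceph --cluster ceph osd pool create cephfs_metadata 8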

Expected results:
The new OSD node should be added successfully.

Additional info:

Ansible log for adding new OSD:
Entire suite log:

Comment 1 Shreekar 2019-01-30 05:17:07 UTC
*** Bug 1670661 has been marked as a duplicate of this bug. ***

Comment 15 errata-xmlrpc 2019-04-30 15:56:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

