Bug 1670663

Summary: [Ceph-Ansible][ceph-containers] Adding a new OSD node to an existing ceph cluster fails with the '--limit osds' option
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Persona non grata <nobody+410372>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact: Bara Ancincova <bancinco>
Priority: high
Version: 3.2
CC: anharris, aschoen, ceph-eng-bugs, ceph-qe-bugs, gabrioux, gmeno, hnallurv, nobody+410372, nthomas, sankarshan, tchandra, tserlin, ymane
Target Milestone: z2
Keywords: Automation
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.2.9-1.el7cp Ubuntu: ceph-ansible_3.2.9-2redhat1
Doc Type: Bug Fix
Doc Text:
.The `--limit osds` option now works as expected
Previously, an attempt to add OSDs by using the `--limit osds` option failed on container setups. The underlying source code has been modified, and adding OSDs with `--limit osds` works as expected.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-04-30 15:56:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1629656, 1671454

Description Persona non grata 2019-01-30 05:15:42 UTC
Description of problem:
Adding a new OSD node to an existing containerized ceph cluster fails in a ceph-ansible task.

Version-Release number of selected component (if applicable):
ceph-ansible-3.2.4-1.el7cp.noarch

ceph version 12.2.8-52.el7cp (3af3ca15b68572a357593c261f95038d02f46201) luminous (stable)

How reproducible:

2/2

Steps to Reproduce:
1. On the existing ceph cluster, add an entry for the new OSD node to the inventory file, then run the ansible playbook with the '--limit osds' option (see the sketch below).
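
A minimal sketch of the inventory change and the playbook invocation, assuming an INI inventory file named `hosts` under /usr/share/ceph-ansible; the group layout and host names are illustrative, not taken from this environment:

[mons]
node1

[osds]
node2
node3
# newly added OSD node
node4

cd /usr/share/ceph-ansible
ansible-playbook -vv -i hosts site.yml --limit osds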

Actual results:

Command issued:
cd /usr/share/ceph-ansible ; ANSIBLE_STDOUT_CALLBACK=debug; ansible-playbook -vv -i hosts site.yml --limit osds

2019-01-29 20:22:26,607 - ceph.ceph - INFO - 
TASK [ceph-mds : create filesystem pools] **************************************
task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:4

2019-01-29 20:22:26,607 - ceph.ceph - INFO - Tuesday 29 January 2019  15:22:25 -0500 (0:00:00.127)       0:13:29.170 ******* 

2019-01-29 20:22:26,858 - ceph.ceph - INFO - failed: [ceph-ansible-1548782152719-node4-osdmds -> ceph-ansible-1548782152719-node11-pool] (item={u'name': u'cephfs_data', u'pgs': u'8'}) => {"changed": false, "cmd": "ceph --cluster ceph osd pool create cephfs_data 8", "item": {"name": "cephfs_data", "pgs": "8"}, "msg": "[Errno 2] No such file or directory", "rc": 2}

2019-01-29 20:22:27,057 - ceph.ceph - INFO - failed: [ceph-ansible-1548782152719-node4-osdmds -> ceph-ansible-1548782152719-node11-pool] (item={u'name': u'cephfs_metadata', u'pgs': u'8'}) => {"changed": false, "cmd": "ceph --cluster ceph osd pool create cephfs_metadata 8", "item": {"name": "cephfs_metadata", "pgs": "8"}, "msg": "[Errno 2] No such file or directory", "rc": 2}
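
The "[Errno 2] No such file or directory" message indicates that the delegated node could not execute the `ceph` binary at all, which is consistent with a containerized deployment where the ceph CLI exists only inside the containers. A minimal sketch of the equivalent manual call, assuming a Docker-based setup and the usual ceph-ansible container naming (`ceph-mon-<hostname>`, an assumption here):

# Run the pool creation inside the monitor container rather than on the bare host;
# the container name below is assumed, not taken from this cluster.
docker exec ceph-mon-$(hostname) ceph --cluster ceph osd pool create cephfs_data 8
docker exec ceph-mon-$(hostname) ceph --cluster ceph osd pool create cephfs_metadata 8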

Expected results:
The new OSD node should be added successfully.


Additional info:

Ansible log for adding new OSD:
http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1548782152719/config_roll_over_1.log 
Entire suite log:
http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1548782152719/

Comment 1 Persona non grata 2019-01-30 05:17:07 UTC
*** Bug 1670661 has been marked as a duplicate of this bug. ***

Comment 15 errata-xmlrpc 2019-04-30 15:56:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911