When specifying the size in the openstack_pools variable [1] (as triggered by the OSPd director), the command that is run creates the OSD pool without the desired size [2]. I think the issue is in the following line of code, which is missing {{ item.size }}:

https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-mon/tasks/openstack_config.yml#L3

[1] openstack_pools:
      - {name: manila_data, pg_num: 128, pgp_num: 128, rule_name: '', size: 3000}
      - {name: manila_metadata, pg_num: 64, pgp_num: 64, rule_name: '', size: 1000}
      - {name: metrics, pg_num: 256, pgp_num: 256, rule_name: '', size: 3000}
      - {name: volumes, pg_num: 1024, pgp_num: 1024, rule_name: '', size: 16000}
      - {name: images, pg_num: 128, pgp_num: 128, rule_name: '', size: 3000}
      - {name: backups, pg_num: 512, pgp_num: 512, rule_name: '', size: 8000}
      - {name: vms, pg_num: 512, pgp_num: 512, rule_name: '', size: 6000}

[2] 2018-02-02 13:11:48,841 p=11458 u=mistral | ok: [192.168.1.21] => (item={u'rule_name': u'', u'size': 3000, u'pg_num': 256, u'name': u'metrics', u'pgp_num': 256}) => {
        "changed": false,
        "cmd": ["docker", "exec", "ceph-mon-overcloud-controller-1", "ceph", "--cluster", "ceph", "osd", "pool", "create", "metrics", "256"],
        "delta": "0:00:00.935277",
        "end": "2018-02-02 18:11:48.817739",
        "failed_when_result": false,
        "invocation": {
            "module_args": {
                "_raw_params": "docker exec ceph-mon-overcloud-controller-1 ceph --cluster ceph osd pool create metrics 256 ",
                "_uses_shell": false,
                "chdir": null,
                "creates": null,
                "executable": null,
                "removes": null,
                "stdin": null,
                "warn": true
            }
        },
        "item": {"name": "metrics", "pg_num": 256, "pgp_num": 256, "rule_name": "", "size": 3000},
        "rc": 0,
        "start": "2018-02-02 18:11:47.882462",
        "stderr": "pool 'metrics' created",
        "stderr_lines": ["pool 'metrics' created"],
        "stdout": "",
        "stdout_lines": []
    }

Note how the generated command ends after the pg_num ("... osd pool create metrics 256 ") even though the item carries size: 3000.
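For illustration only, here is a minimal sketch of the kind of task loop being pointed at. This is not the upstream ceph-ansible task verbatim; the container name, the "cluster" variable and the default() handling are my assumptions. Also note that "ceph osd pool create" only accepts an expected object count as the last positional argument, after pgp_num, the pool type and the CRUSH rule, so simply appending {{ item.size }} right after pg_num would be parsed as pgp_num rather than a size:

    # Sketch only -- the real openstack_config.yml task differs.
    - name: create openstack pool(s)
      command: >
        docker exec ceph-mon-{{ ansible_hostname }}
        ceph --cluster {{ cluster }} osd pool create
        {{ item.name }} {{ item.pg_num }} {{ item.pgp_num }}
        replicated {{ item.rule_name | default('replicated_rule', true) }}
        {{ item.size | default('') }}
      with_items: "{{ openstack_pools }}"
      changed_when: false
      failed_when: false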
This was never expected to work. The size key was added by OSPd, so this is more of an enhancement than a bug fix. Anyway, I think what you're calling size is actually "expected-num-objects". That can easily be exposed, but it is only useful when running filestore.
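To illustrate the distinction (the task form below is just a hedged sketch of mine, not ceph-ansible code; pool names and numbers come from the report above): in Ceph terms a pool's "size" is its replica count, set with "ceph osd pool set", whereas an expected object count is only a creation-time hint that lets filestore pre-split its directories and has no effect on bluestore.

    # "size" in Ceph terms is the replica count, set on an existing pool:
    - name: set the replica count on the volumes pool
      command: >
        docker exec ceph-mon-{{ ansible_hostname }}
        ceph --cluster {{ cluster }} osd pool set volumes size 3
      changed_when: false
    # ...whereas a value such as 16000 only makes sense as expected-num-objects,
    # the last positional argument of "ceph osd pool create".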
The code in ceph-ansible master seems to have fixed this already; we probably just need to rebase the beta-3.1 branch and make a new build to include the fix.
ceph-ansible master got a 3.1.0beta4 tag yesterday.

ceph-ansible-3.1.0-0.beta4.1.el7 is available in storage7-ceph-luminous-candidate (http://cbs.centos.org/koji/buildinfo?buildID=22342).
ceph-ansible-3.1.0-0.1.beta4.el7cp is available in ceph-3.1-rhel-7-candidate.

John and Giulio, would you please confirm this version fixes this bug?
A deployment which uses ceph-ansible-3.1.0-0.beta4.1.el7.noarch and the following THT:

parameter_defaults:
  CephPools:
    - {"name": volumes, "pg_num": 32, "pgp_num": 32, "rule_name": 'replicated_rule', "erasure_profile": '', "size": 1000}
    - {"name": metrics, "pg_num": 32, "pgp_num": 32}

results in the following message in /var/log/mistral/ceph-install-workflow.log:

2018-03-23 22:37:17,591 p=2191 u=mistral | ok: [192.168.24.6] => (item={u'name': u'volumes', u'rule_name': u'replicated_rule', u'pg_num': 32, u'pgp_num': 32, u'erasure_profile': u'', u'size': 1000})

and the deployment succeeds. I will submit a follow-up PR just to rename size to expected-num-objects.
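For anyone re-verifying this, here is a hedged sketch of a post-deployment check; the container name is an assumption and the tasks are not part of ceph-ansible. The pools' actual settings (replica count, pg_num, flags) can be read back with standard ceph commands rather than relying on the mistral log alone:

    - name: dump pool details to confirm what was actually created
      command: >
        docker exec ceph-mon-{{ ansible_hostname }}
        ceph --cluster ceph osd pool ls detail
      register: pool_detail
      changed_when: false

    - name: show pg_num for the volumes pool
      command: >
        docker exec ceph-mon-{{ ansible_hostname }}
        ceph --cluster ceph osd pool get volumes pg_num
      changed_when: false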
Will be in 3.1.
Verified with ceph-ansible-3.1.0-0.1.beta8.el7cp.noarch.

Followed comment#11 and found the expected message in /var/log/mistral/ceph-install-workflow.log. Hence moving to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2819