Bug 1541520 - ceph-ansible w/ containers in openstack mode creates pools but ignores specified size
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 3.1
Assignee: Sébastien Han
QA Contact: Rachana Patel
URL:
Whiteboard:
Depends On:
Blocks: 1548353
 
Reported: 2018-02-02 19:02 UTC by John Fulton
Modified: 2019-10-24 05:38 UTC
CC: 13 users

Fixed In Version: RHEL: ceph-ansible-3.1.0-0.1.beta6.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-26 18:18:23 UTC
Embargoed:




Links
Github ceph/ceph-ansible pull 2432 (last updated 2018-03-05 08:57:45 UTC)
Github ceph/ceph-ansible pull 2471 (last updated 2018-03-25 19:49:34 UTC)
Red Hat Product Errata RHBA-2018:2819 (last updated 2018-09-26 18:19:39 UTC)

Description John Fulton 2018-02-02 19:02:40 UTC
When specifying the size in the openstack_pools variable [1] (as triggered by OSP director), the command that is run creates the pool without the desired size [2]. I think the issue is in the following line of code, which is missing {{ item.size }}.

https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-mon/tasks/openstack_config.yml#L3

[1] 
    openstack_pools:
    - {name: manila_data, pg_num: 128, pgp_num: 128, rule_name: '', size: 3000}
    - {name: manila_metadata, pg_num: 64, pgp_num: 64, rule_name: '', size: 1000}
    - {name: metrics, pg_num: 256, pgp_num: 256, rule_name: '', size: 3000}
    - {name: volumes, pg_num: 1024, pgp_num: 1024, rule_name: '', size: 16000}
    - {name: images, pg_num: 128, pgp_num: 128, rule_name: '', size: 3000}
    - {name: backups, pg_num: 512, pgp_num: 512, rule_name: '', size: 8000}
    - {name: vms, pg_num: 512, pgp_num: 512, rule_name: '', size: 6000}

[2] 
2018-02-02 13:11:48,841 p=11458 u=mistral |  ok: [192.168.1.21] => (item={u'rule_name': u'', u'size': 3000, u'pg_num': 256, u'name': u'metrics', u'pgp_num': 256}) => {
    "changed": false, 
    "cmd": [
        "docker", 
        "exec", 
        "ceph-mon-overcloud-controller-1", 
        "ceph", 
        "--cluster", 
        "ceph", 
        "osd", 
        "pool", 
        "create", 
        "metrics", 
        "256"
    ], 
    "delta": "0:00:00.935277", 
    "end": "2018-02-02 18:11:48.817739", 
    "failed_when_result": false, 
    "invocation": {
        "module_args": {
            "_raw_params": "docker exec ceph-mon-overcloud-controller-1 ceph --cluster ceph osd pool create metrics 256 ", 
            "_uses_shell": false, 
            "chdir": null, 
            "creates": null, 
            "executable": null, 
            "removes": null, 
            "stdin": null, 
            "warn": true
        }
    }, 
    "item": {
        "name": "metrics", 
        "pg_num": 256, 
        "pgp_num": 256, 
        "rule_name": "", 
        "size": 3000
    }, 
    "rc": 0, 
    "start": "2018-02-02 18:11:47.882462", 
    "stderr": "pool 'metrics' created", 
    "stderr_lines": [
        "pool 'metrics' created"
    ], 
    "stdout": "", 
    "stdout_lines": []
}
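The command recorded in the log above stops at the pg_num argument, so nothing from the pool's size field ever reaches the ceph CLI. A minimal sketch of the gap, using the values from the metrics pool (illustrative only, not the actual fix; see comment 4 on what this argument really means):

```shell
# Command actually run (taken from the log above); the size value is dropped:
docker exec ceph-mon-overcloud-controller-1 \
  ceph --cluster ceph osd pool create metrics 256

# What the report expects: the pool's size field (3000) passed through as well:
docker exec ceph-mon-overcloud-controller-1 \
  ceph --cluster ceph osd pool create metrics 256 3000
```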

Comment 4 Sébastien Han 2018-03-05 08:52:37 UTC
This was never expected to work; the size key was added by OSPd, so this is more an enhancement than a bug fix. Anyway, I think what you're calling "size" is actually "expected-num-objects". This can be easily exposed, but it is only useful when running filestore.
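For reference, a sketch of the positional syntax of pool creation in Luminous-era Ceph (values illustrative); the replication factor, which Ceph itself calls "size", is a separate pool setting, which is why the key here maps better to expected-num-objects:

```shell
# ceph osd pool create <name> <pg_num> [<pgp_num>] [replicated] \
#     [<crush-rule-name>] [<expected-num-objects>]
ceph osd pool create metrics 256 256 replicated replicated_rule 3000

# The replication factor ("size" in the usual Ceph sense) is set separately:
ceph osd pool set metrics size 3
```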

Comment 6 Giulio Fidente 2018-03-15 13:30:06 UTC
The code in ceph-ansible master seems to have fixed this; we probably just need to rebase the beta-3.1 branch and make a new build to include the fix.

Comment 7 Ken Dreyer (Red Hat) 2018-03-15 22:56:28 UTC
ceph-ansible master got a 3.1.0beta4 tag yesterday

ceph-ansible-3.1.0-0.beta4.1.el7 is available in 
storage7-ceph-luminous-candidate (http://cbs.centos.org/koji/buildinfo?buildID=22342)

ceph-ansible-3.1.0-0.1.beta4.el7cp is available in ceph-3.1-rhel-7-candidate

John and Giulio, would you please confirm this version fixes this bug?

Comment 11 John Fulton 2018-03-25 13:28:06 UTC
A deployment which uses ceph-ansible-3.1.0-0.beta4.1.el7.noarch and the following THT:

parameter_defaults:
  CephPools:
    - {"name": volumes, "pg_num": 32, "pgp_num": 32, "rule_name": 'replicated_rule', "erasure_profile": '', "size": 1000}
    - {"name": metrics, "pg_num": 32, "pgp_num": 32}

produces the following message in /var/log/mistral/ceph-install-workflow.log:

2018-03-23 22:37:17,591 p=2191 u=mistral |  ok: [192.168.24.6] => (item={u'name': u'volumes', u'rule_name': u'replicated_rule', u'pg_num': 32, u'pgp_num': 32, u'erasure_profile': u'', u'size': 1000})

and the deployment succeeds. I will submit a follow-up PR just to rename size to expected-num-objects.

Comment 12 Sébastien Han 2018-04-05 13:25:21 UTC
will be in 3.1

Comment 15 Rachana Patel 2018-05-23 16:08:21 UTC
Verified with ceph-ansible-3.1.0-0.1.beta8.el7cp.noarch

Followed comment #11 and found the expected message in /var/log/mistral/ceph-install-workflow.log.

Hence moving to verified

Comment 17 errata-xmlrpc 2018-09-26 18:18:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819

