Bug 1754641 - Failure to get cephfs_data_pool and cephfs_metadata_pool variable settings
Summary: Failure to get cephfs_data_pool and cephfs_metadata_pool variable settings
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 4.0
Assignee: Dimitri Savineau
QA Contact: Vasishta
URL:
Whiteboard:
Duplicates: 1754635
Depends On:
Blocks: 1594251
 
Reported: 2019-09-23 19:26 UTC by Tom Barron
Modified: 2019-09-26 18:30 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-26 18:30:20 UTC
Embargoed:


Attachments
group-vars/all.txt with settings for the cephfs_data_pool and cephfs_metadata_pool names (3.50 KB, text/plain)
2019-09-23 19:30 UTC, Tom Barron
ceph-ansible log with ENOENT failures for cephfs_data and cephfs_metadata pool (1.07 MB, text/plain)
2019-09-23 19:32 UTC, Tom Barron

Description Tom Barron 2019-09-23 19:26:43 UTC
When deploying the RHOSP master branch for RDO and setting the pool names in the global ceph-ansible variables ``cephfs_data_pool`` and ``cephfs_metadata_pool`` to their default values for manila in OSP, namely ``manila_data`` and ``manila_metadata`` respectively, these values are not used by the ``assign application to cephfs pools`` task, which errors out with ENOENT on the ``cephfs_data`` and ``cephfs_metadata`` pools.
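
For context, the overrides in play are roughly of the following shape (an illustrative sketch only, with other keys omitted; the exact settings used are in the attached group-vars/all.txt):

  # group_vars/all.yml (sketch, not the actual attachment)
  cephfs_data_pool:
    name: manila_data
  cephfs_metadata_pool:
    name: manila_metadata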


Version-Release number of selected component (if applicable):

4.0.0-0.1.rc9-el8cp

How reproducible:

Every time

Steps to Reproduce:
1. Run RDO scenario004 job with OSP master branch in TripleO

Actual results:

OSP deployment fails during invocation of ceph-ansible with the error cited above.

Expected results:

OSP deployment succeeds, or at least the ceph-ansible run completes successfully.

Additional info:

Comment 1 RHEL Program Management 2019-09-23 19:26:50 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Tom Barron 2019-09-23 19:27:19 UTC
*** Bug 1754635 has been marked as a duplicate of this bug. ***

Comment 3 Tom Barron 2019-09-23 19:30:44 UTC
Created attachment 1618342 [details]
group-vars/all.txt with settings for the cephfs_data_pool and cephfs_metadata_pool names

Comment 4 Tom Barron 2019-09-23 19:32:26 UTC
Created attachment 1618343 [details]
ceph-ansible log with ENOENT failures for cephfs_data and cephfs_metadata pool

Comment 5 Tom Barron 2019-09-23 19:53:47 UTC
``git blame roles/ceph-mds/tasks/create_mds_filesystems.yml`` and ``git log`` on the same file show that, for the ``assign application to cephfs pools`` task, change c51e0b51d from 2018-04-10 merged late, after e29fd842a69 from 14-05-2019. Both commits touched the command that is failing.

To my eye the issue here is that at line 55 the "{{ cephfs_data_pool }}" variable is getting resolved in the `with_items`, and then that *value* is resolved *again* at line 53 as {{ item.name }}. And similarly for "{{ cephfs_metadata_pool }}" on line 56.
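
For readers without the file open, the shape of the task being described is roughly as follows (a simplified sketch of the pattern only, not the verbatim upstream task; the command prefix is shortened):

  - name: assign application to cephfs pools
    # line 53: the pool name comes from item.name
    command: "ceph --cluster {{ cluster }} osd pool application enable {{ item.name }} cephfs"
    changed_when: false
    with_items:
      - "{{ cephfs_data_pool }}"      # line 55
      - "{{ cephfs_metadata_pool }}"  # line 56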

Earlier versions of this file, which didn't have the combination of these two changes, did not have the issue reported here -- that is, TripleO has been setting these two variables to ``manila_data`` and ``manila_metadata`` and ceph-ansible has been respecting them in this task.

Note also that the correct values *are* getting picked up in the two preceding tasks, ``customize pool size`` and ``customize pool min_size``.

Comment 6 Dimitri Savineau 2019-09-26 18:30:20 UTC
In fact you're using a cephfs pool variable structure that isn't compatible with v4.0.0rc9.

In rc9 you need to specify the cephfs_data and cephfs_metadata variables (the names of the cephfs pools) [1].
Because you're not doing this, it falls back to the default values and the task fails.
So the application assignment task is working as expected with rc9 given the right values [2].

Your cephfs_[meta]data_pool variables are using the application key in this dict, which isn't present in that release.
This was added in v4.0.0rc10.

So you either need to update your ceph-ansible version or modify your group_vars/all.yml to match the rc9 variable structure.
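
Concretely, an rc9-compatible override would look roughly like this (a sketch based on [1]; the dict-style cephfs_data_pool / cephfs_metadata_pool structure only applies from rc10 on):

  # group_vars/all.yml -- rc9-style variables (plain pool names)
  cephfs_data: manila_data
  cephfs_metadata: manila_metadata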

[1] https://github.com/ceph/ceph-ansible/blob/v4.0.0rc9/roles/ceph-defaults/defaults/main.yml#L336-L337
[2] https://github.com/ceph/ceph-ansible/blob/v4.0.0rc9/roles/ceph-mds/tasks/create_mds_filesystems.yml#L52-L57

