
Bug 1754641

Summary: Failure to get cephfs_data_pool and cephfs_metadata_pool variable settings
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tom Barron <tbarron>
Component: Ceph-Ansible
Assignee: Dimitri Savineau <dsavinea>
Status: CLOSED NOTABUG
QA Contact: Vasishta <vashastr>
Severity: high
Docs Contact:
Priority: high
Version: 4.0
CC: aschoen, ceph-eng-bugs, dsavinea, gfidente, gmeno, nthomas, ykaul
Target Milestone: rc
Target Release: 4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-09-26 18:30:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1594251
Attachments:
  group-vars/all.txt with settings for the cephfs_data_pool and cephfs_metadata_pool names (flags: none)
  ceph-ansible log with ENOENT failures for the cephfs_data and cephfs_metadata pools (flags: none)

Description Tom Barron 2019-09-23 19:26:43 UTC
When deploying the RHOSP master branch for RDO, we set the global ceph-ansible variables ``cephfs_data_pool`` and ``cephfs_metadata_pool`` so that the pool names match manila's defaults in OSP, namely ``manila_data`` and ``manila_metadata``, respectively. These values are not used by the ``assign application to cephfs pools`` task, which errors out with ENOENT on the ``cephfs_data`` and ``cephfs_metadata`` pools.
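
For reference, the override being described amounts to something like the following in the ceph-ansible group vars (a rough sketch only; the key layout is assumed from the discussion later in this bug, and the exact settings used are in the attached group-vars/all.txt):

    # Assumed shape of the override, not copied from the attachment.
    cephfs_data_pool:
      name: manila_data          # pool name expected by manila in OSP
      application: cephfs        # application tag assumed here for illustration
    cephfs_metadata_pool:
      name: manila_metadata
      application: cephfs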


Version-Release number of selected component (if applicable):

4.0.0-0.1.rc9-el8cp

How reproducible:

Every time

Steps to Reproduce:
1. Run RDO scenario004 job with OSP master branch in TripleO

Actual results:

OSP deployment fails during invocation of ceph-ansible with the error cited above.

Expected results:

The OSP deployment succeeds, or at least the ceph-ansible run completes successfully.

Additional info:

Comment 1 RHEL Program Management 2019-09-23 19:26:50 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Tom Barron 2019-09-23 19:27:19 UTC
*** Bug 1754635 has been marked as a duplicate of this bug. ***

Comment 3 Tom Barron 2019-09-23 19:30:44 UTC
Created attachment 1618342 [details]
group-vars/all.txt with settings for the cephfs_data_pool and cephfs_metadata_pool names

Comment 4 Tom Barron 2019-09-23 19:32:26 UTC
Created attachment 1618343 [details]
ceph-ansible log with ENOENT failures for cephfs_data and cephfs_metadata pool

Comment 5 Tom Barron 2019-09-23 19:53:47 UTC
``git blame roles/ceph-mds/tasks/create_mds_filesystems.yml`` and ``git log`` on the same file show that, for the ``assign application to cephfs pools`` task, change c51e0b51d from 2018-04-10 merged late, after e29fd842a69 from 2019-05-14. Both commits touched the command that is failing.

To my eye the issue here is that at line 55 the ``{{ cephfs_data_pool }}`` variable is getting resolved in the ``with_items``, and then that *value* is resolved *again* at line 53 as ``{{ item.name }}``. And similarly for ``{{ cephfs_metadata_pool }}`` on line 56.

Earlier versions of this file, which didn't have the combination of these two changes, did not have the issue reported here -- that is, TripleO has been setting these two variables to ``manila_data`` and ``manila_metadata`` and ceph-ansible has been respecting them in this task.

Note also that the correct values *are* getting picked up in the two preceding tasks, ``customize pool size`` and ``customize pool min_size``.
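
A minimal, self-contained sketch of the nesting described above (hypothetical playbook and values, not the actual create_mds_filesystems.yml task):

    # Standalone illustration: with_items resolves "{{ cephfs_data_pool }}"
    # to its dict value, and "{{ item.name }}" is then templated from that
    # value. Values here are hypothetical.
    - hosts: localhost
      gather_facts: false
      vars:
        cephfs_data_pool:
          name: manila_data
      tasks:
        - name: show how item.name is resolved after with_items templating
          debug:
            msg: "pool name resolves to {{ item.name }}"
          with_items:
            - "{{ cephfs_data_pool }}"

Running this with ansible-playbook prints "pool name resolves to manila_data", which is the two-step resolution described in this comment.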

Comment 6 Dimitri Savineau 2019-09-26 18:30:20 UTC
In fact you're using a cephfs data variable structure that is not compatible with v4.0.0rc9.

In rc9 you need to specify the cephfs_data and cephfs_metadata variables (the names of the cephfs pools) [1].
Because you're not doing this, ceph-ansible falls back to the default values and the task fails.
So the application assignment task is working as expected with rc9 when given the right values [2].

Your cephfs_[meta]data_pool variables use the application key in this dict, which isn't present in that release.
This was added in v4.0.0rc10.

So you need to either update your ceph-ansible version or modify your group_vars/all.yml to match the rc9 variable structure.
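
For illustration, an rc9-compatible override in group_vars/all.yml would look roughly like this (a sketch only; the pool names come from this report and the exact defaults are at [1]):

    # rc9 variable structure: plain pool names, no dict/application key.
    cephfs_data: manila_data
    cephfs_metadata: manila_metadata

With v4.0.0rc10 and later, the dict-style cephfs_data_pool/cephfs_metadata_pool variables (including the application key) can be used instead.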

[1] https://github.com/ceph/ceph-ansible/blob/v4.0.0rc9/roles/ceph-defaults/defaults/main.yml#L336-L337
[2] https://github.com/ceph/ceph-ansible/blob/v4.0.0rc9/roles/ceph-mds/tasks/create_mds_filesystems.yml#L52-L57