Description of problem:
When deploying the RHOSP master branch for RDO, we set the names in the global ceph-ansible variables ``cephfs_data_pool`` and ``cephfs_metadata_pool`` to their default values for manila in OSP, namely ``manila_data`` and ``manila_metadata`` respectively. These values are not used by the ``assign application to cephfs pools`` task, which errors out with ENOENT on the ``cephfs_data`` and ``cephfs_metadata`` pools (a sketch of the settings and the failing command follows below).

Version-Release number of selected component (if applicable):
4.0.0-0.1.rc9-el8cp

How reproducible:
Every time

Steps to Reproduce:
1. Run the RDO scenario004 job with the OSP master branch in TripleO

Actual results:
OSP deployment fails during the invocation of ceph-ansible with the error cited above.

Expected results:
OSP deployment succeeds, or at least the ceph-ansible run completes successfully.

Additional info:
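For reference, a minimal sketch of the settings involved and the command that fails. The pool dict shape is assumed; the actual group_vars/all.yml is in the attachment below, so treat this as illustrative only:

    # group_vars/all.yml (illustrative; see the attached all.txt for the real file)
    cephfs_data_pool:
      name: manila_data
      application: cephfs
    cephfs_metadata_pool:
      name: manila_metadata
      application: cephfs

    # Despite the settings above, the failing task effectively runs:
    #   ceph osd pool application enable cephfs_data cephfs
    #   ceph osd pool application enable cephfs_metadata cephfs
    # which returns ENOENT because only manila_data and manila_metadata exist.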
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
*** Bug 1754635 has been marked as a duplicate of this bug. ***
Created attachment 1618342 [details] group-vars/all.txt with settings for the cephfs_data_pool and cephfs_metadata_pool names
Created attachment 1618343 [details] ceph-ansible log with ENOENT failures for the cephfs_data and cephfs_metadata pools
``git blame roles/ceph-mds/tasks/create_mds_filesystems.yml`` and ``git log`` on the same file show that the ``assign application to cephfs pools`` task carries change c51e0b51d from 2018-04-10, which merged late, after e29fd842a69 from 2019-05-14. Both commits touched the command that is failing. To my eye, the issue is that at line 55 the "{{ cephfs_data_pool }}" variable gets resolved in the ``with_items``, and then that *value* is resolved *again* at line 53 as {{ item.name }}; similarly for "{{ cephfs_metadata_pool }}" on line 56 (see the sketch below). Earlier versions of this file, which didn't have the combination of these two changes, did not have the issue reported here -- that is, TripleO has been setting these two variables to ``manila_data`` and ``manila_metadata``, and ceph-ansible has been respecting them in this task. Note also that the correct values *are* getting picked up correctly in the two preceding tasks, ``customize pool size`` and ``customize pool min_size``.
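For clarity, a sketch of the task as it stands on master, reconstructed from the line references above (not a verbatim copy of the file; ``ceph_cmd`` stands in for the real command prefix):

    - name: assign application to cephfs pools          # ~line 52
      command: "{{ ceph_cmd }} osd pool application enable {{ item.name }} cephfs"  # line 53
      with_items:
        - "{{ cephfs_data_pool }}"                      # line 55
        - "{{ cephfs_metadata_pool }}"                  # line 56

On this reading, {{ item.name }} on line 53 is evaluated against the re-resolved value, which is how the default ``cephfs_data``/``cephfs_metadata`` names leak in instead of ``manila_data``/``manila_metadata``.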
In fact, you're using a cephfs pool variable structure that is not compatible with v4.0.0rc9. In rc9 you need to set the ``cephfs_data`` and ``cephfs_metadata`` variables (the names of the cephfs pools) [1]. Because you're not doing this, it falls back to the default values and the task fails. So the application assignment task is working as expected in rc9 when given the right values [2].

Your cephfs_[meta]data_pool variables are using the ``application`` key in this dict, which isn't present in that release; it was added in v4.0.0rc10. So you need to either update your ceph-ansible version or modify your group_vars/all.yml to match the rc9 variable structure (see the sketch below).

[1] https://github.com/ceph/ceph-ansible/blob/v4.0.0rc9/roles/ceph-defaults/defaults/main.yml#L336-L337
[2] https://github.com/ceph/ceph-ansible/blob/v4.0.0rc9/roles/ceph-mds/tasks/create_mds_filesystems.yml#L52-L57
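For anyone hitting the same mismatch, a minimal sketch of the two variable structures, using the manila pool names from this bug (the rc10+ dict shape is illustrative; consult [1] for the authoritative rc9 defaults):

    # v4.0.0rc9: the pool names are plain string variables [1]
    cephfs_data: manila_data
    cephfs_metadata: manila_metadata

    # v4.0.0rc10 and later: the pool dicts accept an application key
    cephfs_data_pool:
      name: manila_data
      application: cephfs
    cephfs_metadata_pool:
      name: manila_metadata
      application: cephfs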