Description of problem:
When osd_scenario is not defined, ceph-ansible fails with an unhandled traceback instead of a clear message. The missing osd_scenario case must be handled and a user-understandable error must be displayed.
Version-Release number of selected component (if applicable):
ceph-ansible-3.2.0-0.1.rc4.el7cp.noarch
How reproducible:
Always
Steps to Reproduce:
1. Configure ceph-ansible to initialise a cluster, but do not set any value for osd_scenario (see the example after these steps).
2. Run the playbook.
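For reference, a configuration that passes this check defines osd_scenario for the OSD hosts. A minimal, illustrative example (in ceph-ansible 3.2 the accepted values are collocated, non-collocated and lvm; group_vars/osds.yml is just one common place to set it):

# group_vars/osds.yml (illustrative)
osd_scenario: collocated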
Actual results:
TASK [ceph-validate : validate provided configuration]
.........
Notario Failure: key did not match schema (required key in data is missing: <function validate_osd_scenarios at 0x7fa0f09bf500>)
The full traceback is:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 139, in run
    res = self._execute()
  File "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", line 584, in _execute
    result = self._handler.run(task_vars=variables)
  File "/usr/share/ceph-ansible/plugins/actions/validate.py", line 100, in run
    msg = "[{}] Validation failed for variable: {}".format(host, error.path[0])
IndexError: list index out of range
fatal: [node07]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
.......
[ERROR]: [magna032] Validation failed for variable: item[0]
[ERROR]: [magna032] Reason: -> item[0] key did not match 'osd_scenario' (required item in schema is missing: osd_scenario)
fatal: [magna032]: FAILED! => {
"changed": false,
"msg": "[node32] Validation failed for variable: item[0]\n[magna032] Reason: -> item[0] key did not match 'osd_scenario' (required item in schema is missing: osd_scenario)\n",
"stderr_lines": [
"[node32] Validation failed for variable: item[0]"
Expected results:
An appropriate, user-readable error message must be displayed when a required variable such as osd_scenario is not set.
Additional info:
(The error messages for node24 and node07 were the same.)
$ ssh ubuntu@node07 cat /etc/ansible/hosts
[mons]
node07
node24
node32
[osds]
node24 osd_auto_discovery='true' devices="['/dev/sdb']"
node07 dmcrypt='true' devices="['/dev/sdb']"
node32 devices="['/dev/sdb','/dev/sdc']"
[mgrs]
node07
$ cat /usr/share/ceph-ansible/group_vars/all.yml| egrep -v ^# | grep -v ^$
---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: distro
ceph_repository: rhcs
ceph_rhcs_version: 3
ceph_docker_image: "rhceph"
ceph_docker_image_tag: "ceph-3.2-rhel-7-containers-candidate-38068-20181129153049"
ceph_docker_registry: "brew-pulp-docker01........"
monitor_interface: eno1
public_network: 10.8.128.0/21
containerized_deployement: True
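Note that osd_scenario is not set in group_vars/all.yml above, nor as a host variable on any of the [osds] entries in the inventory, which is what triggers the validation failure.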
Comment 4 Giridhar Ramaraju 2019-08-05 13:10:32 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.
Regards,
Giri
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2019:4353