Bug 1546396

Summary: Upgrade from OSP11 to 12 with ceph deployed from director fails at resources.WorkflowTasks_Step2_Execution
Product: Red Hat OpenStack
Reporter: david.costakos
Component: ceph-ansible
Assignee: Sébastien Han <shan>
Status: CLOSED DUPLICATE
QA Contact: Yogev Rabl <yrabl>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 12.0 (Pike)
CC: gfidente, mbracho
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-02-19 09:13:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description david.costakos 2018-02-16 23:11:07 UTC
Description of problem:
Upgrading an OSP11 cloud to OSP12 where Ceph was deployed by director fails 100% of the time at resources.WorkflowTasks_Step2_Execution. The failing step is a mistral-driven playbook run:

ansible-playbook /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml --user tripleo-admin --become --become-user root --forks 8 --inventory-file /tmp/ansible-mistral-actioneqXCDD/inventory.yaml --private-key /tmp/ansible-mistral-actioneqXCDD/ssh_private_key --skip-tags package-install,with_pkg
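For reference, the same playbook can be re-run by hand from the undercloud. A sketch, not a verified procedure: the /tmp paths are the per-run temp files taken from the log and will differ on every run, and `-e ireallymeanit=yes` is the switch the playbook's own abort message (quoted in the logs below) asks for in non-interactive runs:

```shell
#!/bin/sh
# Rebuild the command mistral ran, for a manual re-run.
# RUNDIR is the per-run temp dir from this failure's log; substitute
# the one from your own executor.log.
PLAYBOOK=/usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
RUNDIR=/tmp/ansible-mistral-actioneqXCDD

CMD="ansible-playbook $PLAYBOOK \
 --user tripleo-admin --become --become-user root --forks 8 \
 --inventory-file $RUNDIR/inventory.yaml \
 --private-key $RUNDIR/ssh_private_key \
 --skip-tags package-install,with_pkg \
 -e ireallymeanit=yes"

echo "$CMD"
```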

Version-Release number of selected component (if applicable):

ceph-ansible-3.0.14-1.el7cp.noarch

How reproducible:
100% for me
Steps to Reproduce:
1. Deploy an OSP11 cloud with Ceph from director.
2. Attempt to upgrade following the online documentation.
3. The upgrade fails with the following stack error:
$ openstack stack failures list dcostako --long

dcostako.AllNodesDeploySteps.AllNodesPostUpgradeSteps.WorkflowTasks_Step2_Execution:
  resource_type: OS::Mistral::ExternalResource
  physical_resource_id: 8aa5b012-7226-48da-882f-e8ea45fb1c3c
  status: CREATE_FAILED
  status_reason: |
    resources.WorkflowTasks_Step2_Execution: ERROR

Actual results:
Failed upgrade.

Expected results:
Successful upgrade.

Full message from /var/log/mistral/executor.log:
Stdout: u'\nPLAY [confirm whether user really meant to switch from non-containerized to containerized ceph daemons] ***\n\nTASK [exit playbook, if user did not mean to switch from non-containerized to containerized daemons?] ***\nfatal: [localhost]: FAILED! => {"changed": false, "msg": "\\"Exiting switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook,\\n cluster did not switch from non-containerized to containerized ceph daemons.\\n To switch from non-containerized to containerized ceph daemons, either say \'yes\' on the prompt or\\n or use `-e ireallymeanit=yes` on the command line when\\n invoking the playbook\\"\\n"}\n\nPLAY RECAP *********************************************************************\nlocalhost                  : ok=0    changed=0    unreachable=0    failed=1   \n\n'
Stderr: u"[DEPRECATION WARNING]: The use of 'include' for tasks has been deprecated. Use \n'import_tasks' for static inclusions or 'include_tasks' for dynamic inclusions.\n This feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: include is kept for backwards compatibility but usage is\n discouraged. The module documentation details page may explain more about this\n rationale.. This feature will be removed in a future release. Deprecation \nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \nThis feature will be removed in a future release. Deprecation warnings can be \ndisabled by setting deprecation_warnings=False in ansible.cfg.\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \ndiscouraged. The module documentation details page may explain more about this \nrationale.. This feature will be removed in a future release. Deprecation \nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\n [WARNING]: Not prompting as we are not in interactive mode\n"
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor Traceback (most recent call last):
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/mistral/executors/default_executor.py", line 109, in run_action
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor     result = action.run(context.ctx())
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/tripleo_common/actions/ansible.py", line 456, in run
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor     log_errors=processutils.LogErrors.ALL)
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 419, in execute
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor     cmd=sanitized_cmd)
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor ProcessExecutionError: Unexpected error while running command.
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor Command: ansible-playbook /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml --user tripleo-admin --become --become-user root --forks 8 --inventory-file /tmp/ansible-mistral-actioneqXCDD/inventory.yaml --private-key /tmp/ansible-mistral-actioneqXCDD/ssh_private_key --skip-tags package-install,with_pkg
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor Exit code: 2
2018-02-16 17:53:49.606 32161 ERROR mistral.executors.default_executor Stdout: u'\nPLAY [confirm whether user really meant to switch from non-containerized to containerized ceph daemons] ***\n\nTASK [exit playbook, if user did not mean to switch from non-containerized to containerized daemons?] ***\nfatal: [localhost]: FAILED! => {"changed": false, "msg": "\\"Exiting switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook,\\n cluster did not switch from non-containerized to containerized ceph daemons.\\n To switch from non-containerized to containerized ceph daemons, either say \'yes\' on the


Additional info:

$ openstack workflow execution list | grep -v SUCCESS
+--------------------------------------+--------------------------------------+------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+---------+------------------------------+---------------------+---------------------+
| ID                                   | Workflow ID                          | Workflow name                                                          | Description                                                                                                                                                                                                                       | Task Execution ID                    | State   | State info                   | Created at          | Updated at          |
+--------------------------------------+--------------------------------------+------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+---------+------------------------------+---------------------+---------------------+
| ce980263-0675-4324-9aef-5c8dbfec2270 | d2783dda-73d0-4f06-8936-f16ab7d82604 | tripleo.baremetal.v1.register_or_update                                |                                                                                                                                                                                                                                   | <none>                               | ERROR   | Failure caused by error i... | 2018-02-16 17:46:35 | 2018-02-16 17:46:38 |
| 303f92bc-41cb-4c5e-84f6-2c799c704130 | c6df2808-fab2-4dab-87f8-c643193e23df | tripleo.validations.v1.run_groups                                      |                                                                                                                                                                                                                                   | <none>                               | ERROR   | None                         | 2018-02-16 21:43:10 | 2018-02-16 21:43:22 |
| 6c974307-19a7-48ba-b961-e4ed6ca95429 | 05df6024-f940-4651-baee-bc481f2aa408 | tripleo.validations.v1.run_validation                                  | sub-workflow execution                                                                                                                                                                                                            | e14f2a77-3036-48a3-b0dd-d4eeba51981b | ERROR   | None                         | 2018-02-16 21:43:11 | 2018-02-16 21:43:20 |
| be580f29-727f-44cc-8b65-f2a20345d898 | 05df6024-f940-4651-baee-bc481f2aa408 | tripleo.validations.v1.run_validation                                  | sub-workflow execution                                                                                                                                                                                                            | e14f2a77-3036-48a3-b0dd-d4eeba51981b | ERROR   | None                         | 2018-02-16 21:43:11 | 2018-02-16 21:43:20 |
| 8aa5b012-7226-48da-882f-e8ea45fb1c3c | 9e931c4f-fb52-4eef-8d3b-1cb5352280a1 | tripleo.dcostako.workflow_tasks.step2                                  | Heat managed                                                                                                                                                                                                                      | <none>                               | ERROR   | Failure caused by error i... | 2018-02-16 22:52:49 | 2018-02-16 22:53:52 |
| 8aa0931c-deb0-451d-86bf-20b5d1a31662 | 0ac2121a-c15b-43c7-b864-8876822287a8 | tripleo.storage.v1.ceph-install                                        | sub-workflow execution                                                                                                                                                                                                            | fd44d16a-d323-4ee8-8884-d534e40627dd | ERROR   | Failure caused by error i... | 2018-02-16 22:52:50 | 2018-02-16 22:53:50 |
+--------------------------------------+--------------------------------------+------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+---------+------------------------------+---------------------+---------------------+

$ openstack workflow execution output show 8aa0931c-deb0-451d-86bf-20b5d1a31662 | grep result
    "result": "Failure caused by error in tasks: ceph_install\n\n  ceph_install [task_ex_id=86b18dca-8c3a-4c32-8d54-e53df3bd25c4] -> Failed to run action [action_ex_id=a6c9aa82-75c3-4fd3-87a8-5e6d9464f35b, action_cls='<class 'mistral.actions.action_factory.AnsiblePlaybookAction'>', attributes='{}', params='{u'remote_user': u'tripleo-admin', u'become_user': u'root', u'inventory': {u'all': {u'vars': {u'monitor_secret': u'AQBhGodaAAAAABAABN1/ONXJpJb/xUKzbgxZzw==', u'ceph_conf_overrides': {u'global': {u'rgw_s3_auth_use_keystone': u'true', u'rgw_keystone_admin_password': u'MM23Bu2Qmr2b4yAXnnHaHgHqe', u'osd_pool_default_pgp_num': 128, u'rgw_keystone_url': u'http://172.16.1.11:5000', u'rgw_keystone_admin_project': u'service', u'rgw_keystone_accepted_roles': u'Member, _member_, admin', u'osd_pool_default_size': 3, u'osd_pool_default_pg_num': 128, u'rgw_keystone_api_version': 3, u'rgw_keystone_admin_user': u'swift', u'rgw_keystone_admin_domain': u'default'}}, u'osd_scenario': u'collocated', u'fetch_directory': u'/tmp/file-mistral-action3N70PC', u'user_config': True, u'ceph_docker_image_tag': u'latest', u'ceph_release': u'jewel', u'containerized_deployment': True, u'public_network': u'172.16.3.0/24', u'generate_fsid': False, u'monitor_address_block': u'172.16.3.0/24', u'admin_secret': u'AQBhGodaAAAAABAAypN99arfRCPXK1clqkPbjg==', u'keys': [{u'mon_cap': u'allow r', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAABL36T1KkSYi1NVGeGlHaeQ==', u'name': u'client.openstack'}, {u'mon_cap': u'allow r, allow command \\\\\\\\\\\\\"auth del\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth caps\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get-or-create\\\\\\\\\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mode': u'0644', u'key': 
u'AQBhGodaAAAAABAAox8AQrj84SbIzXETRyrSKA==', u'osd_cap': u'allow rw'}, {u'mon_cap': u'allow rw', u'osd_cap': u'allow rwx', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAAd+6VEpelsJGGyk6ib+jkGw==', u'name': u'client.radosgw'}], u'openstack_keys': [{u'mon_cap': u'allow r', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAABL36T1KkSYi1NVGeGlHaeQ==', u'name': u'client.openstack'}, {u'mon_cap': u'allow r, allow command \\\\\\\\\\\\\"auth del\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth caps\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get-or-create\\\\\\\\\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAAox8AQrj84SbIzXETRyrSKA==', u'osd_cap': u'allow rw'}, {u'mon_cap': u'allow rw', u'osd_cap': u'allow rwx', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAAd+6VEpelsJGGyk6ib+jkGw==', u'name': u'client.radosgw'}], u'osd_objectstore': u'filestore', u'pools': [], u'ntp_service_enabled': False, u'ceph_docker_image': u'rhceph/rhceph-2-rhel7', u'docker': True, u'fsid': u'2aae9458-1342-11e8-bd0d-fa163e85816b', u'journal_size': 512, u'openstack_config': True, u'ceph_docker_registry': u'registry.access.redhat.com', u'ceph_stable': True, u'devices': [u'/dev/vdb'], u'ceph_origin': u'distro', u'openstack_pools': [{u'rule_name': u'', u'pg_num': 128, u'name': u'images'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'metrics'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'backups'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'vms'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'volumes'}], u'ip_version': u'ipv4', u'ireallymeanit': u'yes', u'cluster_network': u'172.16.4.0/24'}}, u'clients': {u'hosts': {u'172.16.0.116': {}, u'172.16.0.105': {}}}, u'osds': {u'hosts': {u'172.16.0.106': {}, u'172.16.0.109': {}, 
u'172.16.0.101': {}}}, u'mons': {u'hosts': {u'172.16.0.104': {}, u'172.16.0.110': {}, u'172.16.0.103': {}}}, u'mdss': {u'hosts': {}}, u'rgws': {u'hosts': {}}}, u'verbosity': 0, u'extra_env_variables': {u'ANSIBLE_LIBRARY': u'/usr/share/ceph-ansible/library/', u'ANSIBLE_RETRY_FILES_ENABLED': u'False', u'ANSIBLE_CONFIG': u'/usr/share/ceph-ansible/ansible.cfg', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/ceph-install-workflow.log', u'ANSIBLE_ROLES_PATH': u'/usr/share/ceph-ansible/roles/', u'ANSIBLE_ACTION_PLUGINS': u'/usr/share/ceph-ansible/plugins/actions/', u'ANSIBLE_SSH_RETRIES': u'3', u'ANSIBLE_HOST_KEY_CHECKING': u'False'}, u'skip_tags': u'package-install,with_pkg', u'ssh_private_key': u'-----BEGIN RSA PRIVATE KEY-----\\nMIIEogIBAAKCAQEAsEJvYdglbsYMtL0ur8KBcRm/0r2saHEr0zb8G1xapIY9YkMU\\n4AF5mMydpwFGICpFtzvhtnmqezSOPi+/pUgT7wVkPUES52GSPDe4H8XcgO8BSndm\\ni4JtOFb04BsXuoUPa/g3L0FddIYlu+/j92+6N0uFhNrqC/oRtGzh4kgXv6eJQlj/\\nbRkpaA2NzmyCLA6Lv46TZHdLcXw1Q4zCcV+qwMDI+Tf/GoWfYY3kzFP8/xM0Kd3f\\nmB2baPPxrBKC97i+6w7I3SlmzH0Y8TrlCZ9WmDj2iNf92KlQSgnHY6AwTINqpX7K\\ngl5kAlXlyWa8PJ5PhS0O7yIdmq5i093C/MYQlwIDAQABAoIBAF3i9Wt78+x+iDQZ\\n9W1fwQ1atuftapGzfrGiP0XfutSaQMY/jzYG8xtmGq/jqNPnUH1a408MnbfE9ePA\\nEWhb7WpLR+qs6AHh4kA7OdOK1HrFVL2yvief0MfK4eMh61DKIb3UWKjOO5afAiiK\\njra1h85+Zt+usC6zBI1D1kpvNl86XobkDYa2Z+4rFdMi7XvnVmm9wz4RZt45wydB\\nmexNHSlXG/MOqnryCmEGeOazlHqwAdzHBAHOxFZ3fIbqqeGxc/3kSx1ZzD0oLUj5\\nIRhGnsaacUvPX5siaobctLrbBf9j2RCI+vtfKq2GIlzn02iio+JR+ARpDP7DNl4r\\ne6QHisECgYEA5iVTXK28IdcY+Nj3Lixvr56nvMz1Pily7C3QVLog+vIg9z8hXYEX\\neOkJO05+VkAwMnp2moTcn7wJD+W5Rl2FjzPtfETbCRPqKSZl/FC2LNxZKIRD4/Tu\\nOiooVHhAKZhUPpog1B4I1b6ocHjjoPhzHCOUY9OSWe9U0LCRVOIyyjcCgYEAxA9q\\nFb5YNPK5OcvZxpzbNCZxxqwf6tyCkauYS7vXtVrqqyyb3fLGqVV+xiCDnoBBn/Kk\\n0Bop3Cet2pPsTR8sVP7ew35COZ6F2cqJCadGsxX/4a8gGV9dFwoGi0MVdFO8tbIp\\n75WA1FgaMrKybwRiBWMBZL3KHONlrWDkeVAQPKECgYAGo/8SxoSOKWm0DHadY3TZ\\niWdnoDZXU9TYEb5YI4K+Guxulei9jPMDbx3wEyS8EmARpMz1Sm4fQcq1JbjB2gL3\\njdUFZ+s2CNgR1eTNcfq/sp/z9lULJ88T6JF/VnTrflS39bSKyk8Q885iaGqRA3o2\\
nzqQCeWFYrPoyh1W6MEis4wKBgFOdd1LaoOfD9Lbvd2s7DkmJc9CVK++QJ6dUlVkH\\nPZG8uoRSPA9GMO+a5Lw+taNtc49xflS6M8wOqBimKYsilleRcxPQzxGfx9oAhL03\\nN/G8mip386qefycKQYw3CflYlQywdS4WhqEJCfNBPtQV/G/rr3Z1crMrT/vHbOlH\\n+gTBAoGAekJ5kbYjcKU85EqrfmPEo8DeKzjtETNj+e2xAQWCBiKFZkaNzMC6kIVJ\\nmeE0nJHdVS1TPDDcMK0t6urXD4g3W5yfxsyTachU2vfpe65y7Zl6sFayg44pLvy9\\nod1k3JBJbh24HEzTfzayTR7dazub8Oe6Nr44i0+y7cH6ivuu71M=\\n-----END RSA PRIVATE KEY-----\\n', u'become': True, u'forks': 8, u'playbook': u'/usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml'}']\n Unexpected error while running command.\nCommand: ansible-playbook /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml --user tripleo-admin --become --become-user root --forks 8 --inventory-file /tmp/ansible-mistral-actioneqXCDD/inventory.yaml --private-key /tmp/ansible-mistral-actioneqXCDD/ssh_private_key --skip-tags package-install,with_pkg\nExit code: 2\nStdout: u'\\nPLAY [confirm whether user really meant to switch from non-containerized to containerized ceph daemons] ***\\n\\nTASK [exit playbook, if user did not mean to switch from non-containerized to containerized daemons?] ***\\nfatal: [localhost]: FAILED! => {\"changed\": false, \"msg\": \"\\\\\"Exiting switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook,\\\\n cluster did not switch from non-containerized to containerized ceph daemons.\\\\n To switch from non-containerized to containerized ceph daemons, either say \\'yes\\' on the prompt or\\\\n or use `-e ireallymeanit=yes` on the command line when\\\\n invoking the playbook\\\\\"\\\\n\"}\\n\\nPLAY RECAP *********************************************************************\\nlocalhost                  : ok=0    changed=0    unreachable=0    failed=1   \\n\\n'\nStderr: u\"[DEPRECATION WARNING]: The use of 'include' for tasks has been deprecated. 
Use \\n'import_tasks' for static inclusions or 'include_tasks' for dynamic inclusions.\\n This feature will be removed in a future release. Deprecation warnings can be \\ndisabled by setting deprecation_warnings=False in ansible.cfg.\\n[DEPRECATION WARNING]: include is kept for backwards compatibility but usage is\\n discouraged. The module documentation details page may explain more about this\\n rationale.. This feature will be removed in a future release. Deprecation \\nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\\n[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \\n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \\nThis feature will be removed in a future release. Deprecation warnings can be \\ndisabled by setting deprecation_warnings=False in ansible.cfg.\\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \\ndiscouraged. The module documentation details page may explain more about this \\nrationale.. This feature will be removed in a future release. 
Deprecation \\nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\\n [WARNING]: Not prompting as we are not in interactive mode\\n\"\n    [action_ex_id=a6c9aa82-75c3-4fd3-87a8-5e6d9464f35b, idx=0]: Failed to run action [action_ex_id=a6c9aa82-75c3-4fd3-87a8-5e6d9464f35b, action_cls='<class 'mistral.actions.action_factory.AnsiblePlaybookAction'>', attributes='{}', params='{u'remote_user': u'tripleo-admin', u'become_user': u'root', u'inventory': {u'all': {u'vars': {u'monitor_secret': u'AQBhGodaAAAAABAABN1/ONXJpJb/xUKzbgxZzw==', u'ceph_conf_overrides': {u'global': {u'rgw_s3_auth_use_keystone': u'true', u'rgw_keystone_admin_password': u'MM23Bu2Qmr2b4yAXnnHaHgHqe', u'osd_pool_default_pgp_num': 128, u'rgw_keystone_url': u'http://172.16.1.11:5000', u'rgw_keystone_admin_project': u'service', u'rgw_keystone_accepted_roles': u'Member, _member_, admin', u'osd_pool_default_size': 3, u'osd_pool_default_pg_num': 128, u'rgw_keystone_api_version': 3, u'rgw_keystone_admin_user': u'swift', u'rgw_keystone_admin_domain': u'default'}}, u'osd_scenario': u'collocated', u'fetch_directory': u'/tmp/file-mistral-action3N70PC', u'user_config': True, u'ceph_docker_image_tag': u'latest', u'ceph_release': u'jewel', u'containerized_deployment': True, u'public_network': u'172.16.3.0/24', u'generate_fsid': False, u'monitor_address_block': u'172.16.3.0/24', u'admin_secret': u'AQBhGodaAAAAABAAypN99arfRCPXK1clqkPbjg==', u'keys': [{u'mon_cap': u'allow r', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAABL36T1KkSYi1NVGeGlHaeQ==', u'name': u'client.openstack'}, {u'mon_cap': u'allow r, allow command \\\\\\\\\\\\\"auth del\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth caps\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get-or-create\\\\\\\\\\\\\"', 
u'mds_cap': u'allow *', u'name': u'client.manila', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAAox8AQrj84SbIzXETRyrSKA==', u'osd_cap': u'allow rw'}, {u'mon_cap': u'allow rw', u'osd_cap': u'allow rwx', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAAd+6VEpelsJGGyk6ib+jkGw==', u'name': u'client.radosgw'}], u'openstack_keys': [{u'mon_cap': u'allow r', u'osd_cap': u'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=backups, allow rwx pool=vms, allow rwx pool=images, allow rwx pool=metrics', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAABL36T1KkSYi1NVGeGlHaeQ==', u'name': u'client.openstack'}, {u'mon_cap': u'allow r, allow command \\\\\\\\\\\\\"auth del\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth caps\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get\\\\\\\\\\\\\", allow command \\\\\\\\\\\\\"auth get-or-create\\\\\\\\\\\\\"', u'mds_cap': u'allow *', u'name': u'client.manila', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAAox8AQrj84SbIzXETRyrSKA==', u'osd_cap': u'allow rw'}, {u'mon_cap': u'allow rw', u'osd_cap': u'allow rwx', u'mode': u'0644', u'key': u'AQBhGodaAAAAABAAd+6VEpelsJGGyk6ib+jkGw==', u'name': u'client.radosgw'}], u'osd_objectstore': u'filestore', u'pools': [], u'ntp_service_enabled': False, u'ceph_docker_image': u'rhceph/rhceph-2-rhel7', u'docker': True, u'fsid': u'2aae9458-1342-11e8-bd0d-fa163e85816b', u'journal_size': 512, u'openstack_config': True, u'ceph_docker_registry': u'registry.access.redhat.com', u'ceph_stable': True, u'devices': [u'/dev/vdb'], u'ceph_origin': u'distro', u'openstack_pools': [{u'rule_name': u'', u'pg_num': 128, u'name': u'images'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'metrics'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'backups'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'vms'}, {u'rule_name': u'', u'pg_num': 128, u'name': u'volumes'}], u'ip_version': u'ipv4', u'ireallymeanit': u'yes', u'cluster_network': u'172.16.4.0/24'}}, u'clients': {u'hosts': {u'172.16.0.116': {}, 
u'172.16.0.105': {}}}, u'osds': {u'hosts': {u'172.16.0.106': {}, u'172.16.0.109': {}, u'172.16.0.101': {}}}, u'mons': {u'hosts': {u'172.16.0.104': {}, u'172.16.0.110': {}, u'172.16.0.103': {}}}, u'mdss': {u'hosts': {}}, u'rgws': {u'hosts': {}}}, u'verbosity': 0, u'extra_env_variables': {u'ANSIBLE_LIBRARY': u'/usr/share/ceph-ansible/library/', u'ANSIBLE_RETRY_FILES_ENABLED': u'False', u'ANSIBLE_CONFIG': u'/usr/share/ceph-ansible/ansible.cfg', u'ANSIBLE_LOG_PATH': u'/var/log/mistral/ceph-install-workflow.log', u'ANSIBLE_ROLES_PATH': u'/usr/share/ceph-ansible/roles/', u'ANSIBLE_ACTION_PLUGINS': u'/usr/share/ceph-ansible/plugins/actions/', u'ANSIBLE_SSH_RETRIES': u'3', u'ANSIBLE_HOST_KEY_CHECKING': u'False'}, u'skip_tags': u'package-install,with_pkg', u'ssh_private_key': u'-----BEGIN RSA PRIVATE KEY-----\\nMIIEogIBAAKCAQEAsEJvYdglbsYMtL0ur8KBcRm/0r2saHEr0zb8G1xapIY9YkMU\\n4AF5mMydpwFGICpFtzvhtnmqezSOPi+/pUgT7wVkPUES52GSPDe4H8XcgO8BSndm\\ni4JtOFb04BsXuoUPa/g3L0FddIYlu+/j92+6N0uFhNrqC/oRtGzh4kgXv6eJQlj/\\nbRkpaA2NzmyCLA6Lv46TZHdLcXw1Q4zCcV+qwMDI+Tf/GoWfYY3kzFP8/xM0Kd3f\\nmB2baPPxrBKC97i+6w7I3SlmzH0Y8TrlCZ9WmDj2iNf92KlQSgnHY6AwTINqpX7K\\ngl5kAlXlyWa8PJ5PhS0O7yIdmq5i093C/MYQlwIDAQABAoIBAF3i9Wt78+x+iDQZ\\n9W1fwQ1atuftapGzfrGiP0XfutSaQMY/jzYG8xtmGq/jqNPnUH1a408MnbfE9ePA\\nEWhb7WpLR+qs6AHh4kA7OdOK1HrFVL2yvief0MfK4eMh61DKIb3UWKjOO5afAiiK\\njra1h85+Zt+usC6zBI1D1kpvNl86XobkDYa2Z+4rFdMi7XvnVmm9wz4RZt45wydB\\nmexNHSlXG/MOqnryCmEGeOazlHqwAdzHBAHOxFZ3fIbqqeGxc/3kSx1ZzD0oLUj5\\nIRhGnsaacUvPX5siaobctLrbBf9j2RCI+vtfKq2GIlzn02iio+JR+ARpDP7DNl4r\\ne6QHisECgYEA5iVTXK28IdcY+Nj3Lixvr56nvMz1Pily7C3QVLog+vIg9z8hXYEX\\neOkJO05+VkAwMnp2moTcn7wJD+W5Rl2FjzPtfETbCRPqKSZl/FC2LNxZKIRD4/Tu\\nOiooVHhAKZhUPpog1B4I1b6ocHjjoPhzHCOUY9OSWe9U0LCRVOIyyjcCgYEAxA9q\\nFb5YNPK5OcvZxpzbNCZxxqwf6tyCkauYS7vXtVrqqyyb3fLGqVV+xiCDnoBBn/Kk\\n0Bop3Cet2pPsTR8sVP7ew35COZ6F2cqJCadGsxX/4a8gGV9dFwoGi0MVdFO8tbIp\\n75WA1FgaMrKybwRiBWMBZL3KHONlrWDkeVAQPKECgYAGo/8SxoSOKWm0DHadY3TZ\\niWdnoDZXU9TYEb5YI4K+Guxulei9jPMDbx3wEyS8EmARpMz
1Sm4fQcq1JbjB2gL3\\njdUFZ+s2CNgR1eTNcfq/sp/z9lULJ88T6JF/VnTrflS39bSKyk8Q885iaGqRA3o2\\nzqQCeWFYrPoyh1W6MEis4wKBgFOdd1LaoOfD9Lbvd2s7DkmJc9CVK++QJ6dUlVkH\\nPZG8uoRSPA9GMO+a5Lw+taNtc49xflS6M8wOqBimKYsilleRcxPQzxGfx9oAhL03\\nN/G8mip386qefycKQYw3CflYlQywdS4WhqEJCfNBPtQV/G/rr3Z1crMrT/vHbOlH\\n+gTBAoGAekJ5kbYjcKU85EqrfmPEo8DeKzjtETNj+e2xAQWCBiKFZkaNzMC6kIVJ\\nmeE0nJHdVS1TPDDcMK0t6urXD4g3W5yfxsyTachU2vfpe65y7Zl6sFayg44pLvy9\\nod1k3JBJbh24HEzTfzayTR7dazub8Oe6Nr44i0+y7cH6ivuu71M=\\n-----END RSA PRIVATE KEY-----\\n', u'become': True, u'forks': 8, u'playbook': u'/usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml'}']\n Unexpected error while running command.\nCommand: ansible-playbook /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml --user tripleo-admin --become --become-user root --forks 8 --inventory-file /tmp/ansible-mistral-actioneqXCDD/inventory.yaml --private-key /tmp/ansible-mistral-actioneqXCDD/ssh_private_key --skip-tags package-install,with_pkg\nExit code: 2\nStdout: u'\\nPLAY [confirm whether user really meant to switch from non-containerized to containerized ceph daemons] ***\\n\\nTASK [exit playbook, if user did not mean to switch from non-containerized to containerized daemons?] ***\\nfatal: [localhost]: FAILED! 
=> {\"changed\": false, \"msg\": \"\\\\\"Exiting switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook,\\\\n cluster did not switch from non-containerized to containerized ceph daemons.\\\\n To switch from non-containerized to containerized ceph daemons, either say \\'yes\\' on the prompt or\\\\n or use `-e ireallymeanit=yes` on the command line when\\\\n invoking the playbook\\\\\"\\\\n\"}\\n\\nPLAY RECAP *********************************************************************\\nlocalhost                  : ok=0    changed=0    unreachable=0    failed=1   \\n\\n'\nStderr: u\"[DEPRECATION WARNING]: The use of 'include' for tasks has been deprecated. Use \\n'import_tasks' for static inclusions or 'include_tasks' for dynamic inclusions.\\n This feature will be removed in a future release. Deprecation warnings can be \\ndisabled by setting deprecation_warnings=False in ansible.cfg.\\n[DEPRECATION WARNING]: include is kept for backwards compatibility but usage is\\n discouraged. The module documentation details page may explain more about this\\n rationale.. This feature will be removed in a future release. Deprecation \\nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\\n[DEPRECATION WARNING]: The use of 'static' has been deprecated. Use \\n'import_tasks' for static inclusion, or 'include_tasks' for dynamic inclusion. \\nThis feature will be removed in a future release. Deprecation warnings can be \\ndisabled by setting deprecation_warnings=False in ansible.cfg.\\n[DEPRECATION WARNING]: docker is kept for backwards compatibility but usage is \\ndiscouraged. The module documentation details page may explain more about this \\nrationale.. This feature will be removed in a future release. Deprecation \\nwarnings can be disabled by setting deprecation_warnings=False in ansible.cfg.\\n [WARNING]: Not prompting as we are not in interactive mode\\n\"\n",

Comment 1 david.costakos 2018-02-16 23:11:38 UTC
NOTE: an ugly workaround is to change the playbook's confirmation-prompt default:

diff /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml /usr/share/ceph-ansible/infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml.orig
14c14
<       default: 'yes'
---
>       default: 'no'

Then rerun the upgrade.
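The diff above can be applied mechanically. A sketch using sed, run here against a stand-in file: the surrounding YAML (a vars_prompt block for `ireallymeanit`, inferred from the abort message in the logs) is an assumption, since the report only quotes the one changed line:

```shell
#!/bin/sh
# Stand-in for /usr/share/ceph-ansible/infrastructure-playbooks/
# switch-from-non-containerized-to-containerized-ceph-daemons.yml;
# the vars_prompt shape is assumed, not quoted in this report.
PLAYBOOK=$(mktemp)
cat > "$PLAYBOOK" <<'EOF'
  vars_prompt:
    - name: ireallymeanit
      prompt: Are you sure you want to switch the cluster?
      default: 'no'
      private: no
EOF

# Flip the confirmation default so a non-interactive (mistral) run
# does not abort at the prompt -- the same one-line change as the diff.
sed -i "s/default: 'no'/default: 'yes'/" "$PLAYBOOK"

grep 'default:' "$PLAYBOOK"
```

On a real system you would edit the installed playbook path directly (and revert the change after the upgrade, since it disables the safety prompt).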

Comment 2 Giulio Fidente 2018-02-19 09:13:58 UTC

*** This bug has been marked as a duplicate of bug 1538783 ***