The following error is seen during deployment:

DBError: (pymysql.err.InternalError) (1118, u'The size of BLOB/TEXT data inserted in one transaction is greater than 10% of redo log size. Increase the redo log size using innodb_log_file_size.')
[SQL: u'UPDATE action_executions_v2 SET updated_at=%(updated_at)s, state=%(state)s, accepted=%(accepted)s, output=%(output)s WHERE action_executions_v2.id = %(action_executions_v2_id)s']
[parameters: {'output': '{"result": {"returncode": 0, "stderr": "", "stdout": "Using /var/lib/mistral/43fa4af8-89c8-4946-a9ae-17ce3a2f0b83/ansible.cfg as config file\\n[DEPRE ... (14957164 characters truncated) ... te-1 : ok=123 changed=38 unreachable=0 failed=0 \\nundercloud : ok=21 changed=10 unreachable=0 failed=0 \\n\\n"}}', 'state': 'SUCCESS', 'accepted': 1, 'updated_at': datetime.datetime(2018, 5, 3, 14, 39, 18), 'action_executions_v2_id': u'72955202-c0e6-4e5f-addf-c57fda732f64'}]
(Background on this error at: http://sqlalche.me/e/2j85)

This causes the openstack overcloud deploy command to hang at the end of the deployment.

The underlying issue is that the Ansible log output is stored in the SQL database, and the roughly 15 MB payload exceeds the per-transaction BLOB/TEXT limit of 10% of the redo log size.

A workaround is to increase the redo log size in the MySQL/MariaDB configuration (see the config sketch below):

innodb_log_file_size=256M

Reference: https://stackoverflow.com/questions/25277452/how-to-configure-mysql-5-6-longblob-for-large-binary-data

I think the real solution should be to not store the Ansible log output in SQL at all.
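A minimal sketch of the workaround, assuming the database is MariaDB/MySQL configured through a my.cnf-style file (the exact config file path on the undercloud may differ from this example):

    [mysqld]
    # Larger redo log files so a single large BLOB/TEXT insert stays under the
    # 10%-of-redo-log per-transaction limit reported in the error above.
    # Changing this value requires a restart of the database service.
    innodb_log_file_size = 256M

The value currently in effect can be checked from a SQL client with:

    SHOW VARIABLES LIKE 'innodb_log_file_size';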
*** This bug has been marked as a duplicate of bug 1573496 ***