Description of problem:

Overcloud deployment failing in Step2 with:

2018-08-03 15:03:35.898 1544 ERROR oslo_messaging.rpc.server DBError: (pymysql.err.InternalError) (1118, u'The size of BLOB/TEXT data inserted in one transaction is greater than 10% of redo log size. Increase the redo log size using innodb_log_file_size.') [SQL: u'UPDATE action_executions_v2 SET updated_at=%(updated_at)s, state=%(state)s, accepted=%(accepted)s, output=%(output)s WHERE action_executions_v2.id = %(action_executions_v2_id)s'] [parameters: {'output': '{"result": {"log_path": "/tmp/ansible-mistral-actionIMP4Az/ansible.log", "stderr": "ansible-playbook 2.4.3.0\n config file = /usr/share/ceph-ansibl ... (11486221 characters truncated) ... statements should not include jinja2 templating delimiters\nsuch as {{ }} or {% %}. Found: {{ groups.get(mgr_group_name, []) | length > 0\n}}\n"}}', 'state': 'SUCCESS', 'accepted': 1, 'updated_at': datetime.datetime(2018, 8, 3, 19, 3, 35), 'action_executions_v2_id': u'f9170825-5b9c-4d41-b528-4b029a78796b'}] (Background on this error at: http://sqlalche.me/e/2j85)

Looking in /etc/my.cnf, there are no settings for this, so it is defaulting to 50M:

MariaDB [(none)]> show variables where Variable_name like "innodb_log_file_size";
+----------------------+----------+
| Variable_name        | Value    |
+----------------------+----------+
| innodb_log_file_size | 50331648 |
+----------------------+----------+
1 row in set (0.00 sec)

We added a setting to the [mysqld] section increasing the size to 1GB, and the deployment was able to pass that step. We should probably increase/set this as part of the Director deployment.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Deployment eventually times out in Step2. The error is located in the mistral log on the undercloud.

Expected results:
Deployment proceeds past Step2.

Additional info:
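The manual workaround described above, sketched as a my.cnf fragment (1G is the value the reporter used; tune it to your workload). Note that on older MariaDB versions, changing this setting requires a clean shutdown and removal of the existing ib_logfile* files before restart:

```ini
# /etc/my.cnf — increase the InnoDB redo log size
# (default is 50M; 1G is the value used in this report)
[mysqld]
innodb_log_file_size = 1G
```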
Sounds like we need to loop in the mistral folks. Storing that quantity of logs in a DB doesn't sound like a great approach.
We just hit this issue. I've set innodb_log_file_size=149946368:

MariaDB [(none)]> show variables like 'innodb_%size%';
+----------------------------------+-----------+
| Variable_name                    | Value     |
+----------------------------------+-----------+
| innodb_additional_mem_pool_size  |   8388608 |
| innodb_buffer_pool_size          | 134217728 |
| innodb_change_buffer_max_size    |        25 |
| innodb_ft_cache_size             |   8000000 |
| innodb_ft_max_token_size         |        84 |
| innodb_ft_min_token_size         |         3 |
| innodb_ft_total_cache_size       | 640000000 |
| innodb_log_block_size            |       512 |
| innodb_log_buffer_size           |  16777216 |
| innodb_log_file_size             | 149946368 |
| innodb_max_bitmap_file_size      | 104857600 |
| innodb_online_alter_log_max_size | 134217728 |
| innodb_page_size                 |     16384 |
| innodb_purge_batch_size          |       300 |
| innodb_sort_buffer_size          |   1048576 |
| innodb_sync_array_size           |         1 |
+----------------------------------+-----------+
16 rows in set (0.00 sec)

I'm retrying a deployment and will update here afterwards.
Looks like we can override this through custom hiera data in undercloud.conf with: tripleo::profile::base::database::mysql::innodb_log_file_size: '256M'
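As a sketch of how that override would be wired up (the file path below is an example, not mandated; the hieradata_override option in undercloud.conf points at a custom hieradata file):

```ini
# undercloud.conf — reference a custom hieradata file (path is an example)
[DEFAULT]
hieradata_override = /home/stack/undercloud-hiera.yaml
```

```yaml
# /home/stack/undercloud-hiera.yaml (example path)
# Override the undercloud MariaDB redo log size via the TripleO mysql profile
tripleo::profile::base::database::mysql::innodb_log_file_size: '256M'
```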
*** Bug 1680532 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1738