Everything has been backported upstream
Hello. Is there a target milestone for this backport to land in OSP 13?
It depends on QE availability, but besides that I don't see why not.
If this bug requires doc text for errata release, please set the 'Doc Type' and provide draft text according to the template in the 'Doc Text' field. The documentation team will review, edit, and approve the text. If this bug does not require doc text, please set the 'requires_doc_text' flag to -.
No regression issues found
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:3794
I just hit this again, deploying the z11 update at the converge step.
~~~
queue_post failed: Error response from Zaqar. Code: 400. Title: Invalid API request. Description: Message collection size is too large. Max size 1048576.
"result": "Failure caused by error in tasks: send_message

  send_message [task_ex_id=2df6a3f6-0aa0-4b0f-9546-f8efb9c36d0b] -> Workflow failed due to message status
    [wf_ex_id=5ca8156c-cc5c-4e74-b057-7971cc773f71, idx=0]: Workflow failed due to message status",
"deployment_status": "DEPLOY_FAILED",
"input": "{\"run_validations\": false, \"skip_deploy_identifier\": false, \"container\": \"overcloud\", \"queue_name\": \"tripleo\", \"timeout\": 480}",
"created_at": "2020-03-24 22:43:39",
"project_id": "6829dc9650ef411ca67cd1654d5774df",
"id": "6e069679-612c-4bab-a7c6-00a84651c322"
~~~
Director is at z11 and has:
openstack-tripleo-common-8.7.1-12.el7ost.noarch
python-tripleoclient-9.3.1-7.el7ost.noarch

For now I'm applying my previous workaround to increase the sizes even more and trying again.
Just to follow up, increasing the sizes further worked:
~~~
sudo crudini --set /etc/zaqar/zaqar.conf transport max_messages_post_size 2097152
sudo crudini --set /etc/zaqar/zaqar.conf oslo_messaging_kafka producer_batch_size 32768
sudo crudini --set /etc/mistral/mistral.conf engine execution_field_size_limit_kb 32768
sudo reboot
~~~
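For anyone applying the same workaround, the resulting config sections should end up looking roughly like this after the crudini calls (a sketch; only the values set above, any other options in those sections left at their defaults):
~~~
# /etc/zaqar/zaqar.conf -- raises the per-POST message collection size cap
[transport]
max_messages_post_size = 2097152

[oslo_messaging_kafka]
producer_batch_size = 32768

# /etc/mistral/mistral.conf -- raises the task execution field size limit
[engine]
execution_field_size_limit_kb = 32768
~~~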
Seems like the issue is not completely fixed (env: RHOSP 13 z12). We hit it again during a compute scale-out:
~~~
ZaqarAction.queue_post failed: Error response from Zaqar. Code: 400. Title: Invalid API request. Description: Message collection size is too large. Max size 1048576.
2020-07-29 15:19:26.303 2884 ERROR mistral.executors.default_executor Traceback (most recent call last):
2020-07-29 15:19:26.303 2884 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/mistral/executors/default_executor.py", line 114, in run_action
2020-07-29 15:19:26.303 2884 ERROR mistral.executors.default_executor     result = action.run(action_ctx)
2020-07-29 15:19:26.303 2884 ERROR mistral.executors.default_executor   File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 130, in run
2020-07-29 15:19:26.303 2884 ERROR mistral.executors.default_executor     (self.__class__.__name__, self.client_method_name, str(e))
2020-07-29 15:19:26.303 2884 ERROR mistral.executors.default_executor ActionException: ZaqarAction.queue_post failed: Error response from Zaqar. Code: 400. Title: Invalid API request. Description: Message collection size is too large. Max size 1048576.
~~~
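If this reappears after a minor update, one possibility (just an assumption on my part, not confirmed) is that the update rewrote the config and dropped the earlier limit bump. A quick way to check before re-applying the workaround, assuming the same files and options as above:
~~~
# Print the current values; crudini exits non-zero if the option is not set
sudo crudini --get /etc/zaqar/zaqar.conf transport max_messages_post_size
sudo crudini --get /etc/mistral/mistral.conf engine execution_field_size_limit_kb
~~~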