Description  Pierre-Andre MOREY
2020-10-02 08:56:00 UTC
Description of problem:
Hi,
The customer saw very slow responses from time to time on some Nova API requests. This seemed to coincide with a recent Kubernetes deployment on top of their OSP cloud.
After a lot of digging, they discovered that this was caused by very large data transfers on certain API requests: with 128 image properties, 128 metadata items, and 65 KB of user_data, the way the database join is done can lead to roughly 1 GB of data transferred for a single request.
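For scale, a back-of-the-envelope estimate (a sketch, assuming the join produces the cartesian product of the property and metadata rows and repeats the user_data blob on every joined row):
~~~
# Rough per-request transfer estimate, assuming the joined query returns the
# cartesian product of 128 image properties x 128 metadata items, with the
# full 65 KB user_data column repeated on every joined row.
properties = 128
metadata_items = 128
user_data_bytes = 65 * 1024

rows = properties * metadata_items        # 16,384 joined rows
transfer_bytes = rows * user_data_bytes   # user_data duplicated per row
print(f"{rows} rows -> ~{transfer_bytes / 1024**3:.2f} GiB per request")
# 16384 rows -> ~1.02 GiB per request
~~~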
More details on case 02763601.
Version-Release number of selected component (if applicable):
This is present on all current releases of OSP; it has been reproduced on RHOSP 13 in production, on 16.1, and on upstream releases.
How reproducible:
Always
Steps to Reproduce:
1. Add 128 properties to an image.
2. Add 128 metadata items to the server.
3. Max out the user_data (65535 bytes); a reproduction sketch follows this list.
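A hypothetical reproduction sketch using openstacksdk (the cloud name and the IMAGE/FLAVOR/NETWORK IDs are placeholders, and the 255-character values are only an assumption chosen to hit the default size limits):
~~~
import base64
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud from clouds.yaml

# 1. Add 128 properties to the image (the default glance property quota).
props = {f"prop_{i}": "x" * 255 for i in range(128)}
conn.image.update_image("IMAGE_ID", **props)

# 2 + 3. Boot a server from that image with 128 metadata items (the default
# nova metadata quota) and user_data maxed out at 65535 bytes (base64-encoded).
conn.compute.create_server(
    name="repro",
    image_id="IMAGE_ID",
    flavor_id="FLAVOR_ID",
    networks=[{"uuid": "NETWORK_ID"}],
    metadata={f"meta_{i}": "x" * 255 for i in range(128)},
    user_data=base64.b64encode(b"x" * 65535).decode(),
)
~~~
Then hit an API path that loads the instance with its metadata joined in (for example, listing servers with details) and watch the DB traffic.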
Actual results:
Up to 1 GB of data transferred for a single request, creating huge load and instability on the system (in this case, load balancers were stuck in PENDING_UPDATE; this was worked around by adding more workers and CPU to the controllers, but without trying the maximum settings described above).
Expected results:
Limit the combination of these inputs and add a cap on the amount of data transferred.
Additional info:
Very critical for the customer, and possibly for all public cloud providers based on OSP.
The customer has provided us with an update on testing this patch:
~~~
We tested patch https://review.opendev.org/758928 and it fixes the DB load when querying instance info. In our Kubernetes test case, DB bandwidth usage went down 10-fold.
This would be nice to have integrated into the 16.1 release at least.
~~~
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2021:0817