Bug 1884539

Summary: [OSP16.1] Potential DOS over metadata*properties*userdata
Product: Red Hat OpenStack
Reporter: Pierre-Andre MOREY <pmorey>
Component: openstack-nova
Assignee: melanie witt <mwitt>
Status: CLOSED ERRATA
QA Contact: James Parker <jparker>
Severity: urgent
Priority: urgent
Version: 13.0 (Queens)
CC: astupnik, dasmith, eglynn, ikke, jhakimra, jparker, kchamart, lyarwood, madgupta, mwitt, nweinber, sbauza, sgordon, slong, vromanso
Target Milestone: z4
Keywords: Security, Triaged, ZStream
Target Release: 16.1 (Train on RHEL 8.2)
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-nova-20.4.1-1.20200917173451.el8ost
Clones: 1893897 1893898 1893900 (view as bug list)
Last Closed: 2021-03-17 15:32:20 UTC
Type: Bug
Bug Depends On: 1893898
Bug Blocks: 1893900

Description Pierre-Andre MOREY 2020-10-02 08:56:00 UTC
Description of problem:
Hi,

The customer intermittently saw very slow responses on some Nova API requests, which appeared to coincide with a recent Kubernetes deployment on top of their OSP cloud.

After a lot of digging, they discovered that this was caused by API requests transferring very large amounts of data: with 128 image properties, 128 metadata items, and 65 KB of user_data, the way the database join is done can lead to up to 1 GB of data transferred for a single request.
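The amplification can be illustrated with back-of-the-envelope arithmetic, under the assumption (for illustration only) that the join yields one result row per (property, metadata) pair and that every row repeats the user_data blob:

```python
# Rough estimate of the result-set size when instance metadata, image
# properties, and user_data are joined naively.
properties = 128          # image properties on the instance's image
metadata_items = 128      # instance metadata entries
user_data_bytes = 65_535  # user_data maxed out (~65 KB)

rows = properties * metadata_items    # one row per (property, metadata) pair
transferred = rows * user_data_bytes  # bytes moved for a single request

print(f"{rows} rows, ~{transferred / 1024**3:.2f} GiB per request")
# → 16384 rows, ~1.00 GiB per request
```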

More details on case 02763601.

Version-Release number of selected component (if applicable):
This is present in all current OSP releases; it has been confirmed on RHOSP 13 in production, on 16.1, and on upstream releases.


How reproducible:
Always

Steps to Reproduce:
1. Add 128 properties to an image
2. Add 128 metadata items to the instance
3. Max out the user_data
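The steps above can be sketched as payload construction under assumed default limits (for illustration: glance's image_property_quota and Nova's metadata_items quota both defaulting to 128, and user_data capped at 65535 bytes):

```python
import base64

# Assumed default limits (illustrative constants, not fetched from the cloud):
IMAGE_PROPERTY_QUOTA = 128    # glance image_property_quota default
METADATA_ITEMS_QUOTA = 128    # nova metadata_items quota default
MAX_USER_DATA_BYTES = 65_535  # nova user_data size cap

# Step 1: 128 image properties (values padded to 255 chars for illustration).
image_properties = {f"prop_{i}": "x" * 255 for i in range(IMAGE_PROPERTY_QUOTA)}

# Step 2: 128 instance metadata items.
server_metadata = {f"meta_{i}": "x" * 255 for i in range(METADATA_ITEMS_QUOTA)}

# Step 3: user_data maxed out (the compute API expects it base64-encoded).
user_data = base64.b64encode(b"#" * MAX_USER_DATA_BYTES).decode()

print(len(image_properties), len(server_metadata), len(user_data))
# → 128 128 87380
```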

Actual results:
Up to 1 GB of data transfer is allowed, creating huge load and instability on the system (in this case load balancers were stuck in PENDING_UPDATE; the customer worked around it by adding more workers and CPU to the controllers, but that was without trying the maximum settings described above).

Expected results:
The combination should be limited, for example by adding a transfer cap.

Additional info:
Very critical for the customer, and potentially for all public cloud providers based on OSP.

Comment 1 Alex Stupnikov 2020-10-02 09:04:02 UTC
*** Bug 1884535 has been marked as a duplicate of this bug. ***

Comment 21 Summer Long 2020-10-20 03:59:05 UTC
:) Yeah, that's the first comment in BZ terms. Have made public so that others can read. cheers, s

Comment 25 Madhur Gupta 2020-10-30 11:10:45 UTC
The customer has provided us with an update on testing this patch:

~~~
We tested patch https://review.opendev.org/758928 and it fixes the DB load when querying instance info. In our Kubernetes test case, DB bandwidth usage went down 10-fold.
It would be nice to have this integrated into the 16.1 release at least.
~~~
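The shape of the problem can be demonstrated with a toy sqlite3 sketch (hypothetical table and column names, not Nova's real schema): a single JOIN across two independent child tables multiplies their rows and repeats the large user_data column in every result row, whereas fetching each child table separately does not.

```python
import sqlite3

# Toy schema: one instance with a large user_data blob plus two
# independent child tables (names are illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instance (id INTEGER PRIMARY KEY, user_data TEXT);
    CREATE TABLE metadata (instance_id INTEGER, key TEXT, value TEXT);
    CREATE TABLE image_property (instance_id INTEGER, key TEXT, value TEXT);
""")
conn.execute("INSERT INTO instance VALUES (1, ?)", ("x" * 65535,))
conn.executemany("INSERT INTO metadata VALUES (1, ?, 'v')",
                 [(f"m{i}",) for i in range(128)])
conn.executemany("INSERT INTO image_property VALUES (1, ?, 'v')",
                 [(f"p{i}",) for i in range(128)])

# One big JOIN: every metadata row pairs with every property row, and
# each of the 128*128 rows carries a copy of the 65 KB user_data column.
joined = conn.execute("""
    SELECT i.user_data, m.key, p.key
    FROM instance i
    JOIN metadata m ON m.instance_id = i.id
    JOIN image_property p ON p.instance_id = i.id
""").fetchall()
print(len(joined))  # → 16384

# Separate per-table queries (the shape of a subquery-style load) stay small.
meta = conn.execute(
    "SELECT key, value FROM metadata WHERE instance_id = 1").fetchall()
props = conn.execute(
    "SELECT key, value FROM image_property WHERE instance_id = 1").fetchall()
print(len(meta) + len(props))  # → 256
```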

Comment 29 Summer Long 2020-11-03 01:09:15 UTC
Upstream is handling this as a hardening task and won't be issuing a CVE; RH will do the same. Thanks for raising the additional trackers, Melanie.

Comment 46 errata-xmlrpc 2021-03-17 15:32:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.4 director bug fix advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0817