The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains every match of a percent encoding and is not deduplicated, so for a URL of length N its size may be up to O(N). The next step (normalizing existing percent-encoded bytes) also takes up to O(N) per entry, so the total time is O(N^2). If percent_encodings were deduplicated, _encode_invalid_chars would run in O(kN) time, where k is at most 484 ((10+6*2)^2 distinct two-hex-digit encodings, since each hex position allows 10 digits plus 6 letters in two cases). Reference and upstream commit: https://github.com/urllib3/urllib3/commit/4ab10abde715c7098e77686462b987586825d228
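The quadratic behavior can be illustrated with a simplified sketch (this is not the actual urllib3 code, only a model of the flawed loop and the deduplicated fix described above):

```python
import re

# Matches one percent-encoded byte, e.g. "%aa" or "%2F".
PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}")

def normalize_slow(component: str) -> str:
    # Models the flawed approach: collect every match without
    # deduplication, then perform one O(N) replacement pass per
    # match. A URL of N percent encodings yields N passes: O(N^2).
    percent_encodings = PERCENT_RE.findall(component)
    for enc in percent_encodings:
        component = component.replace(enc, enc.upper())
    return component

def normalize_fast(component: str) -> str:
    # Deduplicated variant: there are at most 484 distinct
    # percent encodings ((10 digits + 6 letters * 2 cases)^2),
    # so the loop runs at most k = 484 times: O(kN).
    percent_encodings = set(PERCENT_RE.findall(component))
    for enc in percent_encodings:
        component = component.replace(enc, enc.upper())
    return component

if __name__ == "__main__":
    url = "%aa" * 5000
    # Same result either way; only the running time differs.
    assert normalize_slow(url) == normalize_fast(url) == "%AA" * 5000
```

For a pathological input such as a long run of repeated percent encodings, the slow variant rescans the whole string once per match, while the deduplicated variant does at most one pass per distinct encoding.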
Created python-urllib3 tracking bugs for this issue: Affects: fedora-all [bug 1812102]
OpenShift Container Platform uses urllib3-1.21.1-1, which does not include the vulnerable _encode_invalid_chars function.
Note: the flawed code was introduced in urllib3 1.25.2. Pull request: https://github.com/urllib3/urllib3/pull/1586 Commit: https://github.com/urllib3/urllib3/commit/a74c9cfbaed9f811e7563cfc3dce894928e0221a
Statement: Red Hat Product Security does not consider this to be a vulnerability. The choice of an inefficient algorithm can cause somewhat more CPU time to be used than the alternative; however, in practice the difference is not sufficient to cause a meaningful or even noticeable impact on the application.
External References:
https://pypi.org/project/urllib3/1.25.8/
https://bugzilla.novell.com/show_bug.cgi?id=1166069