Description of problem:
Integer errors are raised when large numeric values (> 2147483647) are supplied for numeric id attributes, e.g. organization-id, user-id, environment-id, etc.

Version-Release number of selected component (if applicable):

How reproducible:
Every time.

# working example with the max integer value:
https://<fqdn>/katello/api/v2/gpg_keys?organization_id=19&search=2147483647
{"total":2,"subtotal":0,"page":1,"per_page":20,"search":"2147483647","sort":{"by":null,"order":null},"results":[]}

# non-working example with max_integer+1:
https://<fqdn>/katello/api/v2/gpg_keys?organization_id=19&search=2147483648
{"displayMessage":"PGError: ERROR: value \"2147483648\" is out of range for type integer\nLINE 1: ...83648%' OR \"katello_gpg_keys\".\"organization_id\" = '214748364...\n ^\n: SELECT COUNT(*) FROM \"katello_gpg_keys\" WHERE \"katello_gpg_keys\".\"id\" IN (SELECT DISTINCT \"katello_gpg_keys\".\"id\" FROM \"katello_gpg_keys\" WHERE \"katello_gpg_keys\".\"organization_id\" = 19) AND ((\"katello_gpg_keys\".\"name\" ILIKE '%2147483648%' OR \"katello_gpg_keys\".\"organization_id\" = '2147483648'))","errors":["PGError: ERROR: value \"2147483648\" is out of range for type integer\nLINE 1: ...83648%' OR \"katello_gpg_keys\".\"organization_id\" = '214748364...\n ^\n: SELECT COUNT(*) FROM \"katello_gpg_keys\" WHERE \"katello_gpg_keys\".\"id\" IN (SELECT DISTINCT \"katello_gpg_keys\".\"id\" FROM \"katello_gpg_keys\" WHERE \"katello_gpg_keys\".\"organization_id\" = 19) AND ((\"katello_gpg_keys\".\"name\" ILIKE '%2147483648%' OR \"katello_gpg_keys\".\"organization_id\" = '2147483648'))"]}

Steps to Reproduce:
1. Take any attribute referable via an integer.
2. Place a request to retrieve such an entity (see the example above).
3. Receive an HTTP 500 status with a DB error.

Actual results:
HTTP 500 received with a DB error.

Expected results:
Input validation should handle the posted values correctly.

Note that the issue is very visible in the UI (search field).
In some cases an error page is shown, but in most cases the search is executed as an AJAX call, so it fails silently.

Related bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1192356

Additional info:
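The boundary in the report is PostgreSQL's 4-byte `integer` range (-2147483648 to 2147483647). A minimal sketch of the kind of pre-query check the report asks for, in Ruby (the helper name and regex are illustrative, not actual Foreman/Katello code):

```ruby
# Bounds of PostgreSQL's int4 column type.
PG_INT4_MIN = -2**31        # -2147483648
PG_INT4_MAX =  2**31 - 1    #  2147483647

# Hypothetical helper: can this search term safely be compared
# against a PostgreSQL "integer" column?
def fits_in_pg_integer?(term)
  # Only purely numeric terms are candidates for id comparisons.
  return false unless term.to_s.match?(/\A-?\d+\z/)
  (PG_INT4_MIN..PG_INT4_MAX).cover?(term.to_i)
end

fits_in_pg_integer?("2147483647")  # => true  (max int4, query succeeds)
fits_in_pg_integer?("2147483648")  # => false (max int4 + 1, raises PGError today)
```

With a check like this in place, the out-of-range term would simply never be compared against an integer column, instead of reaching the database and failing there.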
Of course, as a side effect, this issue can also be reproduced from the WebUI, and it can be fairly critical since it impacts many different entities. It fails badly, as shown on the attached screenshot.
Created attachment 1061139 [details] Screenshot with issue on UI
(In reply to Oleksandr Shtaier from comment #1)
> Of course, as a side effect, that issue can be reproduced from WebUI and can
> be pretty critical as we have impact on a lot of different entities. And, it
> fails badly like on attached screenshot

A workaround for the UI issue is to use the attribute name. The following works for me:
name = 12345678901234567890123456789012345678901234567890123456789012345678901234567890
Exact failure scenario in the UI:
1) Use a numeric value >= 2147483648
2) From the UI, search for this value without prefixing it with an attribute name (`name =`)
What is also wrong here is that the failed SQL is returned in the error message: it might reveal details which should be kept hidden. I do not have any proof here - this is IMO just a generic rule. It should only be logged.
@Jan_Hutar - Comment #6: I don't think 'hiding the message' is the answer. The error leaves the UI in an error state - e.g. an infinite `loading spinner`, etc. I believe the true fix would involve validating the input before it reaches the DB query (something that would simply leave out the integer-based id comparisons when the value exceeds integer boundaries).
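The fix proposed above can be sketched as follows. This is not the actual Foreman/Katello query builder - the method, column names, and regex are assumptions for illustration - but it shows the idea of dropping the integer-column comparison when the term exceeds int4 bounds, while keeping the ILIKE match on text columns:

```ruby
# Range of PostgreSQL's int4 column type.
PG_INT4_RANGE = (-2**31)..(2**31 - 1)

# Hypothetical condition builder: always match text columns,
# but only add the id comparison for in-range numeric terms.
def search_conditions(term)
  conditions = ["name ILIKE ?"]
  values = ["%#{term}%"]
  if term.match?(/\A-?\d+\z/) && PG_INT4_RANGE.cover?(term.to_i)
    conditions << "organization_id = ?"
    values << term.to_i
  end
  [conditions.join(" OR "), *values]
end

search_conditions("19")
# => ["name ILIKE ? OR organization_id = ?", "%19%", 19]
search_conditions("2147483648")
# => ["name ILIKE ?", "%2147483648%"]   (id clause omitted, no PGError)
```

An out-of-range value then still searches the text columns but never hits an integer column, so the query succeeds and simply returns no id match.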
(In reply to Roman Plevka from comment #7)
> @Jan_Hutar - Comment #6:
> I don't think 'hiding the message' is the answer.
Yep, that was not meant as a fix, just as something which should happen anyway.
Upstream bug component is Discovery Plugin
Moving to POST since upstream bug http://projects.theforeman.org/issues/9088 has been closed
FAILED_QA

sat6.3 snap3

# curl -ku admin:changeme "https://$(hostname)/api/locations?search=999999999999999999999999999999"
{ "error": {"message":"PG::Error: ERROR: value \"999999999999999999999999999999\" is out of range for type integer\nLINE 1: ...999999999999999999999999%' OR \"taxonomies\".\"id\" = '999999999...\n ^\n: SELECT \"taxonomies\".* FROM \"taxonomies\" WHERE \"taxonomies\".\"type\" IN ('Location') AND ((\"taxonomies\".\"title\" ILIKE '%999999999999999999999999999999%' OR \"taxonomies\".\"name\" ILIKE '%999999999999999999999999999999%' OR \"taxonomies\".\"id\" = '999999999999999999999999999999' OR \"taxonomies\".\"label\" ILIKE '%999999999999999999999999999999%')) ORDER BY \"taxonomies\".\"title\" ASC NULLS FIRST LIMIT 20 OFFSET 0"} }

Please note that the purpose of this bugzilla was to point out faulty (or missing) input validation for large numbers *in general*, not just on the gpg_keys endpoint (that one was merely picked as an example).
This bug has an upstream issue. When this issue is resolved, it will be included in the next Satellite release. We will no longer be tracking this downstream. If you feel this should not have been closed, please feel free to re-open with additional details.