Bug 1111574
| Summary: | content host API - json returned by index takes too long to process | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Tom McKay <tomckay> |
| Component: | API | Assignee: | Justin Sherrill <jsherril> |
| Status: | CLOSED ERRATA | QA Contact: | Roman Plevka <rplevka> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.0.3 | CC: | bbuckingham, bkearney, erinn.looneytriggs, jsherril, mmccune, rplevka, xdmoon |
| Target Milestone: | Unspecified | Keywords: | ReleaseNotes, Triaged |
| Target Release: | Unused | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| URL: | http://projects.theforeman.org/issues/6307 | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Last Closed: | 2016-07-27 08:42:02 UTC | Type: | --- |
| Bug Blocks: | 1190823 | | |
Description
Tom McKay
2014-06-20 12:08:05 UTC
Created from redmine issue http://projects.theforeman.org/issues/6307. Upstream bug component is API.

Note that the /systems API is being removed in favor of the /hosts API. The /hosts API does not pull information from any backend service and should be much faster. That said, 300-350 is a fairly arbitrary number, and the whole point of pagination is to be able to fetch records in chunks. This BZ argues that 300 should be fast, but should 3000? 300000? What is the limit? I'll move this to ON_QA to test that 300 can be fetched fairly quickly.

VERIFIED

I ran a script to measure host listing time as hosts were created (sampling every 10th). The time for ~300 hosts is fair enough, ~15 seconds, but the time seems to grow exponentially (see attached screenshot). In any case, the hammer command does not time out even after 70 seconds. As Justin has pointed out, the pagination functionality gives the user control over how many records are fetched per request.

[Attached gnuplot output: "times.log" using 1:2 — listing time in seconds (0-80) plotted against number of hosts created (0-1000); listing time rises steadily with host count.]

Created attachment 1153492 [details]
listing time scatter plot
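The pagination mentioned above keeps each request small instead of listing every content host at once. Below is a minimal client-side page-loop sketch; the `fetch_page` callable, the stub host data, and the exact response shape (`total`/`results`, as returned by Foreman-style index endpoints) are illustrative assumptions, not code taken from this bug.

```python
def fetch_all(fetch_page, per_page=100):
    """Collect every record by walking pages of at most `per_page` items.

    `fetch_page(page, per_page)` is assumed to return a dict shaped like a
    Foreman-style index response: {"total": N, "results": [...]}.
    """
    records = []
    page = 1
    while True:
        body = fetch_page(page, per_page)
        records.extend(body["results"])
        # Stop once we have covered `total` records or the server runs dry.
        if page * per_page >= body["total"] or not body["results"]:
            break
        page += 1
    return records


# Stub standing in for an HTTP call such as
# GET https://satellite.example.com/api/v2/hosts?page=N&per_page=M
HOSTS = [{"id": i, "name": "host-%d" % i} for i in range(1, 251)]

def fake_fetch(page, per_page):
    start = (page - 1) * per_page
    return {"total": len(HOSTS), "results": HOSTS[start:start + per_page]}

all_hosts = fetch_all(fake_fetch, per_page=100)
print(len(all_hosts))  # 250 records gathered across 3 requests
```

Against a real server, `fetch_page` would issue the paginated index request (for Foreman-style APIs, typically with `page` and `per_page` query parameters), so each individual response stays small even as the total host count grows.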
Verified with Sat 6.2.0 BETA

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1500