Description of problem:

[root@sat6 ~]# time hammer content-host list --organization <orgname>
Error: Request Timeout

real    2m2.340s
user    0m1.179s
sys     0m0.220s

Version-Release number of selected component (if applicable):
Satellite 6.1.4

How reproducible:
unknown (in this environment, with >500 content hosts, the issue always occurs)

Steps to Reproduce:
1. register more than 500 content hosts
2. hammer content-host list --organization <orgname>

Actual results:
Error: Request Timeout

Expected results:
<result should be displayed>

Additional info:
- Using ":request_timeout: -1", the result appears (see the config sketch below).
- Enforcing paging also makes the result appear, i.e. "hammer content-host list --organization <orgname> --per-page 300". Using "--per-page 500", the timeout is hit again.
- The execution times suggest that the code contains a bug. From what I understand, this should be a simple database query; instead it takes a very long time:
--------
[root@sat6 ~]# time hammer content-host list --organization orgname
Error: Request Timeout

real    2m2.340s
user    0m1.179s
sys     0m0.220s

[root@sat6 ~]# time hammer content-host list --organization orgname --per-page 300
...
real    0m49.191s
user    0m5.763s
sys     0m0.240s

[root@sat6 ~]# time hammer content-host list --organization orgname --per-page 400
...
real    1m8.274s
user    0m9.259s
sys     0m0.250s

[root@sat6 ~]# time hammer content-host list --organization orgname --per-page 450
Error: Request Timeout

real    2m1.703s
user    0m1.100s
sys     0m0.105s

[root@sat6 ~]# time hammer content-host list --organization orgname --per-page 500
Error: Request Timeout

real    2m1.365s
user    0m1.040s
sys     0m0.087s

[root@sat6 ~]# time hammer content-host list --organization orgname --search "be*"
(this should yield 470 machines)
Error: Request Timeout

real    2m1.184s
user    0m1.021s
sys     0m0.108s

[root@sat6 ~]# time hammer content-host list --organization orgname --search "a*"
(this should yield about 25 machines)
...
real    0m4.573s
user    0m1.168s
sys     0m0.137s
--------
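For reference, the ":request_timeout:" setting mentioned above lives in the hammer CLI configuration. A minimal sketch, assuming the default Satellite file layout (the exact file and path can vary between versions):

--------
# /etc/hammer/cli.modules.d/foreman.yml (or ~/.hammer/cli_config.yml)
:foreman:
  :request_timeout: -1   # -1 disables the client-side request timeout
--------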
I suspect issues in the code; a quick fix would be for hammer to at least use paging by itself, e.g. paging to 400 results by default.
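In the meantime, paging can be driven manually from the client side. A sketch assuming hammer's standard --page/--per-page list options (the page count here is made up for illustration and depends on the total number of hosts):

--------
# fetch the content hosts in pages of 300 to stay below the request timeout
for page in 1 2 3; do
    hammer content-host list --organization orgname --per-page 300 --page $page
done
--------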
Moving 6.2 bugs out to sat-backlog.
Created Redmine issue http://projects.theforeman.org/issues/16010 from this bug
Upstream bug component is WebUI
How is this WebUI when it deals with Hammer/API?
It seems like rendering the view takes the majority of the time:

Completed 200 OK in 68556ms (Views: 52481.4ms | ActiveRecord: 14506.9ms)
A performance improvement can be gained by eager-loading the proper tables at the beginning of the request (avoiding N+1 queries), rather than requesting more data from the DB in the view layer:

Before:
2016-08-09T09:28:22 ef4eff65 [app] [I] Completed 200 OK in 30798ms (Views: 28758.6ms | ActiveRecord: 1144.1ms)
2016-08-09T09:30:29 ef4eff65 [app] [I] Completed 200 OK in 29078ms (Views: 27504.6ms | ActiveRecord: 1114.1ms)
2016-08-09T09:31:04 ef4eff65 [app] [I] Completed 200 OK in 29546ms (Views: 27802.8ms | ActiveRecord: 1237.9ms)
Average: 29807.3ms

After:
2016-08-09T09:24:13 ef4eff65 [app] [I] Completed 200 OK in 24099ms (Views: 22732.9ms | ActiveRecord: 705.7ms)
2016-08-09T09:25:48 ef4eff65 [app] [I] Completed 200 OK in 22791ms (Views: 21248.9ms | ActiveRecord: 621.4ms)
2016-08-09T09:26:14 ef4eff65 [app] [I] Completed 200 OK in 21944ms (Views: 20642.8ms | ActiveRecord: 683.7ms)
Average: 22944.6ms

That's 23% faster.
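For illustration, a minimal sketch of the kind of eager-loading change involved. This is not the actual upstream patch; the controller, model, and association names are hypothetical:

--------
# Hypothetical Rails controller action, before: associations are loaded
# lazily, so rendering N hosts issues extra queries per association
# touched in the view -- the classic N+1 pattern.
def index
  @hosts = ContentHost.where(organization_id: params[:organization_id])
end

# After: eager-load the associations the view renders, so ActiveRecord
# fetches them up front in a fixed number of queries.
def index
  @hosts = ContentHost.where(organization_id: params[:organization_id])
                      .includes(:environment, :content_view)
end
--------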
Andrew, the upstream bug is set to WebUI, so that was copied to this bug.
Moving this bug to POST for triage into Satellite 6 since the upstream issue http://projects.theforeman.org/issues/16010 has been resolved.
Bryan, it looks like the release for the Foreman issue (http://projects.theforeman.org/issues/16646) is set to 1.14.
Setting as triaged since there's already a fix upstream.
Please add verification steps for this bug to help QE verify it.
Verification could be to look for N+1 warnings in production.log while requesting the hosts page before applying the fix. The warnings should not show up after the fix is applied, and you should be able to notice a speed improvement in the response on a Satellite with hundreds of hosts.
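For example (the log path is the default Satellite location; the grep pattern is an assumption about how the N+1 warnings are worded in the log):

--------
# time the request before and after applying the fix
time hammer content-host list --organization orgname

# watch for N+1 warnings while loading the hosts page
grep -i 'N+1' /var/log/foreman/production.log

# compare the rendered response times before and after
grep 'Completed 200 OK' /var/log/foreman/production.log | tail
--------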
Cherry-picks provided in https://bugzilla.redhat.com/show_bug.cgi?id=1419667

*** This bug has been marked as a duplicate of bug 1419667 ***