Description of problem:

When using the bulk systems fetch:

/candlepin/consumers/?uuid=cec277b7-bf62-4f81-806f-91ccddbf1383&uuid=cb6b7ebb-7696-4bf3-97fa-aa697f35887c&uuid=5758cb8f-8404-410c-890f-405bf9312f26&uuid=e3bdea6b-c630-4380-a695-7b7c93cb9165&uuid=c762ef86-1f8e-493c-baf7-957831d13507&uuid=1da8d9b3-7509-48af-b5f4-5ce006974a71&uuid=685c8846-1d05-4a22-b004-84affa483e1a&uuid=fd5dc468-df1f-4cd9-8cfe-85181169daf2&uuid=fd74b2f1-8782-4923-9560-63b35720cac6

Candlepin takes around 3.5 seconds on my system regardless of whether I specify 1, 25, 100, or 180 IDs. Fetching a single system by itself:

/candlepin/consumers/cec277b7-bf62-4f81-806f-91ccddbf1383

only takes 0.8 seconds in my testing. So for about 25 systems, it's actually one second faster to fetch them all individually than in bulk.

Version-Release number of selected component (if applicable):
0.8.13-1.el6.noarch

How reproducible:
Always

Steps to Reproduce:
1. Bulk fetch ~25 systems

Actual results:
Slower than fetching them all individually

Expected results:
Should be faster than fetching all individually
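For anyone reproducing this, a minimal timing harness can be sketched as below. The base URL and the `fetch` callable are assumptions (any HTTP GET wrapper will do); only the repeated `uuid=` query-parameter shape comes from the report above.

```python
import time
from urllib.parse import urlencode

# Assumed base URL; adjust for your deployment.
BASE = "https://localhost:8443/candlepin"

def bulk_url(uuids):
    # The bulk endpoint takes one repeated uuid= parameter per consumer.
    return BASE + "/consumers/?" + urlencode([("uuid", u) for u in uuids])

def single_url(uuid):
    return BASE + "/consumers/" + uuid

def timed(fetch, url):
    # Time a single call to a caller-supplied fetch function.
    start = time.perf_counter()
    fetch(url)
    return time.perf_counter() - start

def compare(fetch, uuids):
    # One bulk call versus N sequential individual calls.
    bulk = timed(fetch, bulk_url(uuids))
    individual = sum(timed(fetch, single_url(u)) for u in uuids)
    return bulk, individual
```

Passing a real client (e.g. a `requests.get` wrapper with the right auth/SSL settings) as `fetch` reproduces the comparison described in the report.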
On my test system I had about 230 consumers.
Running the above scenarios against Candlepin using the rspec infrastructure.

Same build as above (0.8.13-1.el6.noarch):
25 consumers:  all at once 0.437865 sec, one at a time 1.495444 sec
50 consumers:  all at once 1.239333 sec, one at a time 3.026335 sec
100 consumers: all at once 2.816281 sec, one at a time 6.005445 sec

Against current master branch:
25 consumers:  all at once 0.397801 sec, one at a time 1.561900 sec
50 consumers:  all at once 0.731055 sec, one at a time 2.960606 sec
100 consumers: all at once 1.405288 sec, one at a time 6.026481 sec

Two conclusions:
1. This is not happening on the Candlepin side.
2. The recent work has led to better response times for batch sets of data.
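The measurements above reduce to simple speedup ratios; a short sketch using only the reported numbers (all names are illustrative):

```python
# Timings reported above, in seconds: batch size -> (all_at_once, one_at_a_time).
timings_0813 = {25: (0.437865, 1.495444),
                50: (1.239333, 3.026335),
                100: (2.816281, 6.005445)}
timings_master = {25: (0.397801, 1.561900),
                  50: (0.731055, 2.960606),
                  100: (1.405288, 6.026481)}

def speedups(timings):
    # How many times faster the bulk fetch was than sequential individual fetches.
    return {n: one_at_a_time / bulk
            for n, (bulk, one_at_a_time) in timings.items()}
```

On master, the bulk fetch of 100 consumers comes out roughly 4.3x faster than fetching them one at a time, versus roughly 2.1x on 0.8.13, which supports the second conclusion above.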
Closing out old bugs. Please re-open if this is still an issue.