Description of problem:
Server can't process an inventory report with 5000 top-level resources (generated by perftest). It appears the server is trying to process the huge report in a single batch.

Server and agent RAM: 16GB
Agent JVM: -Xms64M -Xmx2048M
Server JVM: -Xms1024M -Xmx1024M -XX:PermSize=256M -XX:MaxPermSize=256M

Version-Release number of selected component (if applicable):
3.1.2

How reproducible:
100%

Steps to Reproduce:
0. Increase various PostgreSQL settings (timeout, shared memory, etc.)
1. Install the perftest plugin and start the agent with -Drhq.perftest.scenario=configurable-1 -Drhq.perftest.server-a-count=5000 -Drhq.perftest.service-a-count=0
Created attachment 736327 [details] log files
Fixed in https://bugzilla.redhat.com/show_bug.cgi?id=906866
This was fixed as part of Jay's work to break the data into smaller chunks.
For clarity, this issue was actually addressed in bug 905654. The fix was to break large inventory reports into smaller chunks when performing the transaction. This prevents the timeout from occurring when dealing with large inventory reports.
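To illustrate the idea behind the fix, here is a minimal sketch of chunked processing. The class name, chunk size, and `process` method are hypothetical and do not reflect the actual RHQ code; the point is only that splitting a 5000-resource report into fixed-size chunks keeps each transaction small enough to avoid the timeout.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedInventorySync {
    // Hypothetical batch size; the real fix chose its own value.
    static final int CHUNK_SIZE = 200;

    // Processes the report chunk by chunk and returns the number of
    // chunks (i.e., the number of short transactions that would run).
    static int process(List<Integer> resources) {
        int chunks = 0;
        for (int start = 0; start < resources.size(); start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, resources.size());
            List<Integer> chunk = resources.subList(start, end);
            // Each chunk would be merged in its own short transaction,
            // so no single transaction holds all 5000 resources.
            chunks++;
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> report = new ArrayList<>();
        for (int i = 0; i < 5000; i++) {
            report.add(i);
        }
        // 5000 resources / 200 per chunk = 25 transactions
        System.out.println(process(report));
    }
}
```

With a 5000-item report this yields 25 small transactions instead of one huge one, which is why the PostgreSQL timeout no longer trips.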