+++ This bug was initially created as a clone of Bug #1564452 +++

Description of problem:

While trying to report hypervisor data to Satellite using virt-who, the reporting failed with an error. The following data was being reported: 300 hosts and 19K VMs running on VMware ESXi.

virt-who reported the following in debug mode:

2018-04-06 06:11:17,790 [rhsm.connection DEBUG] MainProcess(1860):Thread-4 @connection.py:_request:516 - Making request: POST /rhsm/hypervisors/Default_Organization?reporter_id=gprfc023.sbu.lab.eng.bos.redhat.com-e542c5cdcd5a42b9afddad8b4529e928&cloaked=False&env=Library
2018-04-06 06:11:29,395 [rhsm.connection INFO] MainProcess(1860):Thread-4 @connection.py:_request:552 - Response: status=200, request="POST /rhsm/hypervisors/Default_Organization?reporter_id=gprfc023.sbu.lab.eng.bos.redhat.com-e542c5cdcd5a42b9afddad8b4529e928&cloaked=False&env=Library"
2018-04-06 06:11:29,421 [virtwho.destination_8759358284855132892 DEBUG] MainProcess(1860):Thread-4 @subscriptionmanager.py:_connect:128 - Authenticating with RHSM username admin
2018-04-06 06:11:29,424 [rhsm.connection INFO] MainProcess(1860):Thread-4 @connection.py:__init__:822 - Connection built: host=gprfc018.sbu.lab.eng.bos.redhat.com port=443 handler=/rhsm auth=basic username=admin
2018-04-06 06:11:29,426 [rhsm.connection DEBUG] MainProcess(1860):Thread-4 @connection.py:_request:516 - Making request: GET /rhsm/status/
2018-04-06 06:11:29,473 [rhsm.connection INFO] MainProcess(1860):Thread-4 @connection.py:_request:552 - Response: status=200, request="GET /rhsm/status/"
2018-04-06 06:11:29,474 [virtwho.destination_8759358284855132892 DEBUG] MainProcess(1860):Thread-4 @subscriptionmanager.py:check_report_state:263 - Checking status of job hypervisor_update_e722e400-a203-495b-a9c5-9c423e8b0ed1
2018-04-06 06:11:29,474 [rhsm.connection DEBUG] MainProcess(1860):Thread-4 @connection.py:_request:516 - Making request: GET /rhsm/jobs/hypervisor_update_e722e400-a203-495b-a9c5-9c423e8b0ed1?result_data=True
2018-04-06 06:11:29,533 [rhsm.connection INFO] MainProcess(1860):Thread-4 @connection.py:_request:552 - Response: status=200, request="GET /rhsm/jobs/hypervisor_update_e722e400-a203-495b-a9c5-9c423e8b0ed1?result_data=True"
2018-04-06 06:11:29,533 [virtwho.destination_8759358284855132892 WARNING] MainProcess(1860):Thread-4 @subscriptionmanager.py:check_report_state:282 - Job status report without resultData: {u'finishTime': None, u'targetType': u'owner', u'updated': u'2018-04-06T10:11:29+0000', u'group': u'async group', u'statusPath': u'/jobs/hypervisor_update_e722e400-a203-495b-a9c5-9c423e8b0ed1', u'resultData': None, u'created': u'2018-04-06T10:11:29+0000', u'targetId': u'Default_Organization', u'principalName': u'foreman_admin', u'state': u'FAILED', u'done': True, u'result': u'javax.persistence.RollbackException: Error while committing the transaction', u'startTime': u'2018-04-06T10:11:29+0000', u'ownerId': u'Default_Organization', u'id': u'hypervisor_update_e722e400-a203-495b-a9c5-9c423e8b0ed1', u'correlationId': u'feb97bf4'}
2018-04-06 06:11:29,545 [virtwho.destination_8759358284855132892 DEBUG] MainProcess(1860):Thread-4 @virt.py:run:394 - Thread 'destination_8759358284855132892' terminated
2018-04-06 06:11:29,803 [virtwho.main DEBUG] MainProcess(1860):MainThread @__main__.py:main:23 - virt-who terminated
2018-04-06 06:11:29,803 [virtwho.main DEBUG] MainProcess(1860):MainThread @executor.py:terminate:226 - virt-who is shutting down

On checking the task status, the following data was obtained from the API endpoint:

{
  "id": "hypervisor_update_e722e400-a203-495b-a9c5-9c423e8b0ed1",
  "state": "FAILED",
  "startTime": "2018-04-06T10:11:29+0000",
  "finishTime": "2018-04-06T10:11:29+0000",
  "result": "javax.persistence.RollbackException: Error while committing the transaction",
  "principalName": "foreman_admin",
  "targetType": "owner",
  "targetId": "Default_Organization",
  "ownerId": "Default_Organization",
  "correlationId": "feb97bf4",
  "resultData": "[cloaked]",
  "statusPath": "/jobs/hypervisor_update_e722e400-a203-495b-a9c5-9c423e8b0ed1",
  "done": true,
  "group": "async group",
  "created": "2018-04-06T10:11:29+0000",
  "updated": "2018-04-06T10:11:32+0000"
}

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
virt-who reporting ends with an error in the reporting task

Expected results:
virt-who reporting ends successfully

Additional info:

--- Additional comment from on 2018-04-06 06:34 EDT ---

--- Additional comment from Barnaby Court on 2018-04-06 11:52:49 EDT ---

Not sure that this is actually scale related so much as sample data related. I'm cloning upstream given the strange exception in the candlepin log file.
candlepin exception that sticks out:

2018-04-06 06:07:13,270 [thread=QuartzScheduler_Worker-11] [job=hypervisor_update_6d7e2de0-1f51-4ddf-9653-1691272308a8, org=, csid=] ERROR org.candlepin.pinsetter.tasks.HypervisorUpdateJob - HypervisorUpdateJob encountered a problem.
org.hibernate.exception.JDBCConnectionException: could not extract ResultSet
        at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:115) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:111) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:97) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:79) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.getResultSet(Loader.java:2122) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1905) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1881) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.doQuery(Loader.java:925) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:342) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.doList(Loader.java:2622) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.doList(Loader.java:2605) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2434) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.Loader.list(Loader.java:2429) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.loader.custom.CustomLoader.list(CustomLoader.java:336) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.internal.SessionImpl.listCustomQuery(SessionImpl.java:1980) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.internal.AbstractSessionImpl.list(AbstractSessionImpl.java:322) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.hibernate.internal.SQLQueryImpl.list(SQLQueryImpl.java:125) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        at org.candlepin.model.ConsumerCurator.getGuestConsumersMap(ConsumerCurator.java:221) ~[ConsumerCurator.class:na]
        at com.google.inject.persist.jpa.JpaLocalTxnInterceptor.invoke(JpaLocalTxnInterceptor.java:58) ~[guice-persist-3.0.jar:na]
        at org.candlepin.pinsetter.tasks.HypervisorUpdateJob.toExecute(HypervisorUpdateJob.java:227) ~[HypervisorUpdateJob.class:na]
        at com.google.inject.persist.jpa.JpaLocalTxnInterceptor.invoke(JpaLocalTxnInterceptor.java:66) ~[guice-persist-3.0.jar:na]
        at org.candlepin.pinsetter.tasks.KingpinJob.execute(KingpinJob.java:91) [KingpinJob.class:na]
        at org.quartz.core.JobRunShell.run(JobRunShell.java:202) [quartz-2.2.1.jar:na]
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573) [quartz-2.2.1.jar:na]
Caused by: org.postgresql.util.PSQLException: An I/O error occured while sending to the backend.
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:283) ~[postgresql-9.0-801.jdbc4.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500) ~[postgresql-9.0-801.jdbc4.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388) ~[postgresql-9.0-801.jdbc4.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:273) ~[postgresql-9.0-801.jdbc4.jar:na]
        at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:353) ~[c3p0-0.9.5.2.jar:0.9.5.2]
        at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:70) ~[hibernate-core-5.1.1.Final.jar:5.1.1.Final]
        ... 20 common frames omitted
Caused by: java.io.IOException: Tried to send an out-of-range integer as a 2-byte value: 32768
        at org.postgresql.core.PGStream.SendInteger2(PGStream.java:201) ~[postgresql-9.0-801.jdbc4.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.sendParse(QueryExecutorImpl.java:1236) ~[postgresql-9.0-801.jdbc4.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.sendOneQuery(QueryExecutorImpl.java:1508) ~[postgresql-9.0-801.jdbc4.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.sendQuery(QueryExecutorImpl.java:1097) ~[postgresql-9.0-801.jdbc4.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) ~[postgresql-9.0-801.jdbc4.jar:na]
        ... 25 common frames omitted
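The last "Caused by" is the telling part: the PostgreSQL frontend/backend protocol encodes the number of bind parameters of a prepared statement as a signed 2-byte integer, so a single statement can carry at most 32767 parameters, and the JDBC driver raises exactly this IOException when asked to send 32768. The IN-list that ConsumerCurator.getGuestConsumersMap builds from the reported guest IDs evidently crossed that limit at this scale. A common mitigation is to partition the ID list and issue the lookup in chunks; the sketch below is only illustrative (the class and method names, the cp_consumer table, and the batch size are assumptions, not the fix that was ultimately applied for bug 1694809):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

/*
 * Minimal sketch: resolve consumers by UUID in batches small enough to stay
 * under the PostgreSQL limit of 32767 bind parameters per statement.
 */
public class BatchedGuestLookup {

    // Well below the 32767-parameter protocol limit; the exact value is arbitrary.
    private static final int BATCH_SIZE = 1000;

    public static List<String> findConsumerIds(Connection conn, List<String> guestUuids)
        throws SQLException {

        List<String> results = new ArrayList<>();

        for (int start = 0; start < guestUuids.size(); start += BATCH_SIZE) {
            List<String> chunk = guestUuids.subList(
                start, Math.min(start + BATCH_SIZE, guestUuids.size()));

            // Build "SELECT id FROM cp_consumer WHERE uuid IN (?, ?, ...)" with one
            // placeholder per UUID in this chunk only (table name is illustrative).
            StringBuilder sql = new StringBuilder("SELECT id FROM cp_consumer WHERE uuid IN (");
            for (int i = 0; i < chunk.size(); i++) {
                sql.append(i == 0 ? "?" : ", ?");
            }
            sql.append(")");

            try (PreparedStatement stmt = conn.prepareStatement(sql.toString())) {
                for (int i = 0; i < chunk.size(); i++) {
                    stmt.setString(i + 1, chunk.get(i));
                }
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        results.add(rs.getString(1));
                    }
                }
            }
        }
        return results;
    }
}

Chunking keeps each Parse message under the driver's 2-byte parameter-count field no matter how many guests a single hypervisor report contains.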
Which version of virt-who was used?
(In reply to Kevin Howell from comment #2)
> Which version of virt-who was used?

The package version for virt-who is virt-who-0.20.4-1.el7sat.noarch, running on RHEL 7.4.
*** This bug has been marked as a duplicate of bug 1694809 ***