Bug 616002 - NPE in server when uninventorying a platform for which the agent is down
Summary: NPE in server when uninventorying a platform for which the agent is down
Status: CLOSED CURRENTRELEASE
Alias: None
Product: RHQ Project
Classification: Other
Component: Core Server
Version: 3.0.0
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Charles Crouch
QA Contact: Mike Foley
URL:
Whiteboard:
Keywords:
Depends On:
Blocks:
 
Reported: 2010-07-19 13:15 UTC by Heiko W. Rupp
Modified: 2015-02-01 23:26 UTC
CC List: 2 users

Clone Of:
Last Closed: 2014-04-04 17:23:13 UTC




External Trackers
Tracker            Tracker ID  Priority  Status  Summary  Last Updated
Red Hat Bugzilla   616149      None      None    None     Never

Description Heiko W. Rupp 2010-07-19 13:15:32 UTC
See the NPE message at the end of the following log entry:


2010-07-19 09:14:07,282 WARN  [org.rhq.enterprise.server.core.comm.ServerCommunicationsService] {Failed to truncate/delete spool for deleted agent [Agent[id=10101,name=spawn_29152,address=10.16.90.3,port=29152,remote-endpoint=socket://10.16.90.3:29152/?rhq.communications.connector.rhqtype=agent&numAcceptThreads=1&maxPoolSize=303&clientMaxPoolSize=304&socketTimeout=60000&enableTcpNoDelay=true&backlog=200,last-availability-report=1279540508539]] please manually remove the file: null}!!! missing resource message key=[Failed to truncate/delete spool for deleted agent [Agent[id=10101,name=spawn_29152,address=10.16.90.3,port=29152,remote-endpoint=socket://10.16.90.3:29152/?rhq.communications.connector.rhqtype=agent&numAcceptThreads=1&maxPoolSize=303&clientMaxPoolSize=304&socketTimeout=60000&enableTcpNoDelay=true&backlog=200,last-availability-report=1279540508539]] please manually remove the file: null] args=[java.lang.NullPointerException]
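For context, the "please manually remove the file: null" text suggests the spool-file reference itself was null when the server tried to clean up after the deleted agent, because the agent was down and no spool file existed. Below is a minimal, hypothetical sketch of the kind of null guard that avoids this NPE; it is not the actual RHQ server code, and AgentSpoolCleanup, getSpoolFile, and the data/ path are invented for illustration:

    import java.io.File;

    public class AgentSpoolCleanup {

        // Hypothetical lookup: returns the command spool file for the given
        // agent, or null when no spool file was ever created (e.g. the agent
        // is down). The real lookup lives in the server communications layer.
        static File getSpoolFile(String agentName) {
            File spool = new File("data/" + agentName + ".spool"); // assumed location
            return spool.exists() ? spool : null;
        }

        // Null-guarded cleanup: skips the delete when no spool file exists,
        // instead of dereferencing a null file and tripping an NPE.
        static void purgeSpoolForDeletedAgent(String agentName) {
            File spool = getSpoolFile(agentName);
            if (spool == null) {
                return; // nothing to truncate/delete for this agent
            }
            if (!spool.delete()) {
                System.err.println("please manually remove the file: "
                        + spool.getAbsolutePath());
            }
        }

        public static void main(String[] args) {
            purgeSpoolForDeletedAgent("spawn_29152"); // agent name from the log above
        }
    }

With a guard like this, the warning would report that no spool file existed rather than logging "null" and raising the NPE inside the message formatting.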

Comment 1 Charles Crouch 2010-07-19 18:48:32 UTC
Heiko, this is cosmetic, presumably? The platform still gets deleted?

Comment 2 Heiko W. Rupp 2010-07-20 07:50:42 UTC
Yes.
I still want to understand what is going on here in RHQ 4.

Comment 6 Jay Shaughnessy 2014-04-04 17:23:13 UTC
A lot of work has been done in this area, and I'm not aware of this happening recently. Closing.

