+++ This bug was initially created as a clone of Bug #1017961 +++

Description of problem:

After restarting a monitored JVM (Apache Flume in this case), the availability of some (but not all) MBeans appears DOWN. The availability checks do not recover on their own, and the error persists until the agent is restarted. Oddly enough, metrics do still appear in this state.

2013-10-10 20:03:12,686 DEBUG [InventoryManager.availability-1] (rhq.core.pc.inventory.AvailabilityExecutor)- Failed to collect availability on Resource[id=167303, uuid=c8c5347f-1b98-4cf9-8099-419eca2f5414, type={JMX}Threading, key=java.lang:type=Threading, name=Threading, parent=JMX Server (8013)]
java.lang.RuntimeException: Availability check failed
    at org.rhq.core.pc.inventory.AvailabilityProxy.getAvailability(AvailabilityProxy.java:123)
    at org.rhq.core.pc.inventory.AvailabilityExecutor.checkInventory(AvailabilityExecutor.java:268)
    at org.rhq.core.pc.inventory.AvailabilityExecutor.checkInventory(AvailabilityExecutor.java:319)
    at org.rhq.core.pc.inventory.AvailabilityExecutor.checkInventory(AvailabilityExecutor.java:319)
    at org.rhq.core.pc.inventory.AvailabilityExecutor.call(AvailabilityExecutor.java:131)
    at org.rhq.core.pc.inventory.AvailabilityExecutor.run(AvailabilityExecutor.java:80)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.reflect.UndeclaredThrowableException
    at $Proxy51.isRegistered(Unknown Source)
    at org.mc4j.ems.impl.jmx.connection.bean.DMBean.isRegistered(DMBean.java:188)
    at org.rhq.plugins.jmx.MBeanResourceComponent.isMBeanAvailable(MBeanResourceComponent.java:240)
    at org.rhq.plugins.jmx.MBeanResourceComponent.getAvailability(MBeanResourceComponent.java:227)
    at org.rhq.core.pc.inventory.AvailabilityProxy.call(AvailabilityProxy.java:72)
    at org.rhq.core.pc.inventory.AvailabilityProxy.call(AvailabilityProxy.java:22)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    ... 3 more
Caused by: java.rmi.ConnectException: Connection refused to host: 17.172.9.135; nested exception is:
    java.net.ConnectException: Connection refused
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
    at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
    at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
    at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110)
    at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source)
    at javax.management.remote.rmi.RMIConnectionImpl_Stub.isRegistered(Unknown Source)
    at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.isRegistered(RMIConnector.java:839)
    at sun.reflect.GeneratedMethodAccessor9555.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.mc4j.ems.impl.jmx.connection.support.providers.proxy.JMXRemotingMBeanServerProxy.invoke(JMXRemotingMBeanServerProxy.java:59)
    ... 11 more
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:525)
    at java.net.Socket.connect(Socket.java:475)
    at java.net.Socket.<init>(Socket.java:372)
    at java.net.Socket.<init>(Socket.java:186)
    at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
    at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
    at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
    ... 21 more

Version-Release number of selected component (if applicable): 4.5.1

How reproducible: Sometimes

Steps to Reproduce:
1. Monitor a plain JVM with a large number of MBeans
2. Restart the JVM
3. Observe that the state of the resources is DOWN

Analysis: The exception appears to be phony, as the host is correct (the port is not logged, but it should be correct as well). This could be a JVM bug.

--- Additional comment from Elias Ross on 2013-11-04 00:56:17 CET ---

--- Additional comment from Elias Ross on 2013-11-04 01:02:43 CET ---

A patch is forthcoming, assuming it passes my testing. The fix is simply to request the bean from the MBeanServer every time availability is checked, rather than trying to cache it. There is another exception I've seen; it happens with pretty much any JMX server.
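The proposed fix (re-requesting the bean from the MBeanServer on every availability check instead of caching it) can be sketched roughly as follows. This is a minimal illustration, not the actual RHQ patch: the `isMBeanAvailable` helper is hypothetical, and only the idea of looking up registration by `ObjectName` each time, rather than holding a proxy across checks, reflects the fix described above.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class Main {

    // Hypothetical sketch of the described fix: query the MBeanServer by
    // ObjectName on every availability check, so a restarted JVM's fresh
    // MBeanServer is consulted instead of a stale cached bean proxy.
    static boolean isMBeanAvailable(MBeanServerConnection conn, ObjectName name) {
        try {
            // Re-check registration each time; never reuse a cached reference.
            return conn.isRegistered(name);
        } catch (Exception e) {
            // A connection or proxy failure (e.g. the ConnectException above)
            // is reported as DOWN rather than left in a phantom state.
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // Demonstrate against the local platform MBeanServer; the remote
        // case would use an MBeanServerConnection from a JMXConnector.
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        ObjectName threading = new ObjectName("java.lang:type=Threading");
        System.out.println(isMBeanAvailable(conn, threading)); // prints: true
    }
}
```

Note that `MBeanServerConnection.isRegistered` can throw `IOException` over a remote connection, which is why the check treats any exception as unavailability rather than propagating it.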
Issue reproduced with a Tomcat 7 server running on a Linux/OpenJDK6 machine. The JVM version seems to be key (it works well with Java 7).
Issue reproduced with a Tomcat 7 server running on a Linux/OpenJDK7 machine. So the Java version is not important.
Merged in master; see https://bugzilla.redhat.com/show_bug.cgi?id=1017961#c10
Moving to ON_QA, as this is available to test with the Brew build of DR01: https://brewweb.devel.redhat.com//buildinfo?buildID=373993
Created attachment 928525 [details] JON3.2GA log
Created attachment 928526 [details] JON3.3DR1 log
I was not able to reproduce the problem with just a vanilla JMX/MBean app. Successfully reproduced and verified using comment #5: https://bugzilla.redhat.com/show_bug.cgi?id=1017961#c5
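For reference, a "vanilla JMX/MBean app" of the kind tried above might look like the sketch below: a JVM that registers a trivial standard MBean on its platform MBeanServer. The `Counter` bean and the `example` domain are illustrative names, not anything from the bug; the actual reproduction required a real server (Tomcat 7, per comment #5) rather than an app like this.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class Main {

    // Standard MBean pattern: the management interface must be public and
    // named <ImplementationClass>MBean.
    public interface CounterMBean {
        int getCount();
    }

    public static class Counter implements CounterMBean {
        private final int count;
        Counter(int count) { this.count = count; }
        public int getCount() { return count; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("example:type=Counter");
        server.registerMBean(new Counter(7), name);
        // The bean is now visible to any JMX client attached to this JVM
        // (e.g. via -Dcom.sun.management.jmxremote.port=...).
        System.out.println(server.isRegistered(name)); // prints: true
    }
}
```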