Bug 1270491 - Failed to connect to JMX console from the OpenShift web console (QE V3 cluster only)
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Management Console
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: stlewis@redhat.com
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-10 10:40 UTC by Xia Zhao
Modified: 2016-05-23 15:08 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-23 15:08:22 UTC
Target Upstream Version:
Embargoed:



Description Xia Zhao 2015-10-10 10:40:30 UTC
Description of problem:
Get error "The connection to jolokia has failed for an unknown reason, check the javascript console for more details." when attempt to connect to JMX console from browse pod page on openshift webconsole.

Version-Release number of selected component (if applicable):
oc v1.0.6-328-gdf1f19e
kubernetes v1.1.0-alpha.1-653-g86b4e77

Test Environment:
API: api.qe.openshift.com
Console: console.qe.openshift.com

How reproducible:
always

Steps to Reproduce:
1. Deploy the Java app by running the command below and wait until the pods are running (a quick way to watch for this is shown right after the steps):
oc create -f http://repo1.maven.org/maven2/io/fabric8/jube/images/fabric8/amq-broker/2.1.1/amq-broker-2.1.1-kubernetes.json
2. Log in to the web console
3. List the pods under "Browse" / "Pods", choose the Java app pod, and click the 'Connect' button
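
For step 1, a quick way to watch for the pods to come up (just a convenience; it assumes the amq-broker pod lands in the currently selected project):

oc get pods -w

and wait until the amq-broker pod shows STATUS "Running" and READY "1/1" before moving on to step 2.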

Actual results:
Unable to connect to the JMX console; the browser shows the error "The connection to jolokia has failed for an unknown reason, check the javascript console for more details."

Expected results:
Should be able to connect to the JMX console and view the Java components there


Additional info:
Found these errors in the pod logs:

INFO: class: org.apache.deltaspike.core.impl.exclude.GlobalAlternative activated=true
Oct 10, 2015 8:41:38 AM org.apache.deltaspike.core.util.ProjectStageProducer initProjectStage
INFO: Computed the following DeltaSpike ProjectStage: Production
PListStore:[/activemq-data/localhost/tmp_storage] started
java.lang.RuntimeException: java.io.IOException: Failed to create directory 'activemq-data/localhost/KahaDB'
    at org.apache.activemq.store.kahadb.KahaDBStore.size(KahaDBStore.java:1090)
    at org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter.size(KahaDBPersistenceAdapter.java:214)
    at org.apache.activemq.usage.StoreUsage.retrieveUsage(StoreUsage.java:51)
    at org.apache.activemq.usage.Usage.caclPercentUsage(Usage.java:279)
    at org.apache.activemq.usage.Usage.onLimitChange(Usage.java:184)
    at org.apache.activemq.usage.Usage.setLimit(Usage.java:168)
    at org.apache.activemq.broker.BrokerService.getSystemUsage(BrokerService.java:1104)
    at io.fabric8.amq.AMQBroker.doStart(AMQBroker.java:243)
    at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
    at io.fabric8.amq.Main.main(Main.java:34)
Caused by: java.io.IOException: Failed to create directory 'activemq-data/localhost/KahaDB'
    at org.apache.activemq.util.IOHelper.mkdirs(IOHelper.java:316)
    at org.apache.activemq.store.kahadb.MessageDatabase.createPageFile(MessageDatabase.java:2467)
    at org.apache.activemq.store.kahadb.MessageDatabase.getPageFile(MessageDatabase.java:2594)
    at org.apache.activemq.store.kahadb.KahaDBStore.size(KahaDBStore.java:1088)
    ... 9 more
Failed to Start AMQ_Broker
java.lang.RuntimeException: java.io.IOException: Failed to create directory 'activemq-data/localhost/KahaDB'
    at org.apache.activemq.store.kahadb.KahaDBStore.size(KahaDBStore.java:1090)[activemq-kahadb-store-5.11.1.jar:5.11.1]
    at org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter.size(KahaDBPersistenceAdapter.java:214)[activemq-kahadb-store-5.11.1.jar:5.11.1]
    at org.apache.activemq.usage.StoreUsage.retrieveUsage(StoreUsage.java:51)[activemq-broker-5.11.1.jar:5.11.1]
    at org.apache.activemq.usage.Usage.caclPercentUsage(Usage.java:279)[activemq-client-5.11.1.jar:5.11.1]
    at org.apache.activemq.usage.Usage.onLimitChange(Usage.java:184)[activemq-client-5.11.1.jar:5.11.1]
    at org.apache.activemq.usage.Usage.setLimit(Usage.java:168)[activemq-client-5.11.1.jar:5.11.1]
    at org.apache.activemq.broker.BrokerService.getSystemUsage(BrokerService.java:1104)[activemq-broker-5.11.1.jar:5.11.1]
    at io.fabric8.amq.AMQBroker.doStart(AMQBroker.java:243)[amq-broker-2.1.1.jar:]
    at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)[activemq-client-5.11.1.jar:5.11.1]
    at io.fabric8.amq.Main.main(Main.java:34)[amq-broker-2.1.1.jar:]
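
One thing that might help narrow down the KahaDB failure (just a sketch; <amq-broker-pod> is a placeholder for the actual pod name, and it assumes the container stays up long enough to exec into): the path in the error is relative ('activemq-data/...'), so it resolves against the container's working directory, while the PListStore line above uses the absolute /activemq-data. Checking the container user and the permissions on that directory from inside the pod would show whether this is a writable-volume problem:

oc exec <amq-broker-pod> -- id
oc exec <amq-broker-pod> -- ls -ld /activemq-data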

Comment 2 Jessica Forrester 2015-10-11 20:05:47 UTC
Stan, can you take a look?

Comment 3 chunchen 2015-10-12 07:15:21 UTC
Hi Stan,
  We ran into a similar issue against an OSE env before; the root cause was that the master was not a node. Please refer to the following old bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1230483

Comment 4 stlewis@redhat.com 2015-10-12 13:36:38 UTC
@chunchen, I think this failure to connect has a different cause; failing to create its database is a fatal error for the ActiveMQ broker, so I would expect that it actually exited.

@Xia Zhao, is there any additional information in the javascript console? Typically, if the proxy fails to connect to the backend service, you'd see a 503, which does get reported in the dialog.

Also, where did you find this particular image for the broker? It's from last May! I doubt it will work, to be honest; try this one, which was released a couple of days ago:

http://search.maven.org/remotecontent?filepath=io/fabric8/ipaas/apps/amqbroker/2.2.46/amqbroker-2.2.46-kubernetes.json
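
Deploying it is the same oc create flow as in your original step 1 (quote the URL so the shell doesn't trip over the '?'):

oc create -f 'http://search.maven.org/remotecontent?filepath=io/fabric8/ipaas/apps/amqbroker/2.2.46/amqbroker-2.2.46-kubernetes.json'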

This is the current source repo for these apps, maintained by the fabric8 team; there are a few old artifacts in Maven Central now since there's been a lot of reorganization lately:

https://github.com/fabric8io/fabric8-ipaas

Comment 5 Xia Zhao 2015-10-13 03:12:36 UTC
Hi Stan,

Thank you for pointing out that the previous AMQ broker JSON file I used is out of date; I'll pay attention to this next time.

On the QE V3 cluster I replaced the test with this file:

http://search.maven.org/remotecontent?filepath=io/fabric8/ipaas/apps/amqbroker/2.2.46/amqbroker-2.2.46-kubernetes.json

and found that the issue still reproduces. I didn't see any additional info in the on-screen error message window (I disabled the proxy for my browser during the test); the javascript console showed:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://api.qe.openshift.com/api/v1/namespaces/p9/pods/amqbroker-ttme1:8778/proxy/jolokia/?maxDepth=7&maxCollectionSize=500&ignoreErrors=true&canonicalNaming=false. (Reason: missing token 'x-authorization' in CORS header 'Access-Control-Allow-Headers' from CORS preflight channel). <unknown>
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://api.qe.openshift.com/api/v1/namespaces/p9/pods/amqbroker-ttme1:8778/proxy/jolokia/?maxDepth=7&maxCollectionSize=500&ignoreErrors=true&canonicalNaming=false. (Reason: CORS request failed). <unknown>
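
A way to take the browser's CORS preflight out of the picture (a sketch; it assumes `oc whoami -t` returns a valid token for api.qe.openshift.com) is to hit the same proxy URL directly with curl, since curl sends no preflight. If the proxy and the jolokia endpoint are healthy this should return a JSON response from jolokia; a 503 here would point at the backend rather than CORS:

curl -k -H "Authorization: Bearer $(oc whoami -t)" "https://api.qe.openshift.com/api/v1/namespaces/p9/pods/amqbroker-ttme1:8778/proxy/jolokia/"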

The pod log still shows the same error about "Failed to create directory 'activemq-data/localhost/KahaDB'".

I also tested the same scenario on an AWS instance (devenv-fedora_2444) and found that the pod deployed by amqbroker-2.2.46-kubernetes.json cannot be accessed; it had previously worked with amq-broker-2.1.1-kubernetes.json.

More info about the failure on devenv-fedora_2444 with amqbroker-2.2.46-kubernetes.json:
1. Got an on-screen error with more info this time: "Error: 'dial tcp 172.17.0.3:8778: connection refused'
Trying to reach: 'http://172.17.0.3:8778/jolokia/?maxDepth=7&maxCollectionSize=500&ignoreErrors=true&canonicalNaming=false'"
2. The error in the js console shows error code 503:
POST XHR https://ec2-52-23-153-64.compute-1.amazonaws.com:8443/api/v1/namespaces/p1/pods/amqbroker-d7y4p:8778/proxy/jolokia/ [HTTP/1.1 503 Service Unavailable 423ms]
3. The pod shows as running on the web console, but the status is incorrect and I can't get any logs via the oc command line:
oc get po
NAME              READY     STATUS             RESTARTS   AGE
amqbroker-d7y4p   0/1       CrashLoopBackOff   1          15m
oc logs amqbroker-d7y4p
Pod "amqbroker-d7y4p" in namespace "p1": container "amqbroker" is in waiting state.


Please let me know if any other information is needed from my side.

Thanks,
Xia

Comment 6 stlewis@redhat.com 2015-10-13 12:51:39 UTC
(In reply to Xia Zhao from comment #5)
> Hi Stan,
> 
> Thank you for pointing out that the previous amq broker json file I used is
> out-of-date, I'll pay attention to this next time. 
> 
> On QE V3 cluster I replaced the test with this file:
> 
> http://search.maven.org/remotecontent?filepath=io/fabric8/ipaas/apps/
> amqbroker/2.2.46/amqbroker-2.2.46-kubernetes.json
> 
> and found the issue still repro. I didn't see any addtional info from the
> on-screen error message window and I disabled proxy for my browser during
> the test, the javascript console told:
> 
> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the
> remote resource at
> https://api.qe.openshift.com/api/v1/namespaces/p9/pods/amqbroker-ttme1:8778/
> proxy/jolokia/
> ?maxDepth=7&maxCollectionSize=500&ignoreErrors=true&canonicalNaming=false.
> (Reason: missing token 'x-authorization' in CORS header
> 'Access-Control-Allow-Headers' from CORS preflight channel). <unknown>
> Cross-Origin Request Blocked: The Same Origin Policy disallows reading the
> remote resource at
> https://api.qe.openshift.com/api/v1/namespaces/p9/pods/amqbroker-ttme1:8778/
> proxy/jolokia/
> ?maxDepth=7&maxCollectionSize=500&ignoreErrors=true&canonicalNaming=false.
> (Reason: CORS request failed). <unknown>
> 

This shouldn't be present if that instance is running a build of master; maybe ensure you've cleared your browser's cache? I just double-checked the origin repo and it shouldn't be sending that header. That being said, I'm planning on pushing an update to the Java console today.

> The pod log still show the same error about "Failed to create directory
> 'activemq-data/localhost/KahaDB' ".
> 
> I also tested the same scenario on AWS instance(devenv-fedora_2444), and
> find the pod deployed by amqbroker-2.2.46-kubernetes.json can not be
> accessed, it had worked previously with amq-broker-2.1.1-kubernetes.json. 
> 
> More info about the failure on devenv-fedora_2444 with
> amqbroker-2.2.46-kubernetes.json:
> 1. Get on screen error with more info this time: "Error: 'dial tcp
> 172.17.0.3:8778: connection refused'
> Trying to reach:
> 'http://172.17.0.3:8778/jolokia/
> ?maxDepth=7&maxCollectionSize=500&ignoreErrors=true&canonicalNaming=false'"
> 2. Error in js console shows error code 503:
> POST XHR
> https://ec2-52-23-153-64.compute-1.amazonaws.com:8443/api/v1/namespaces/p1/
> pods/amqbroker-d7y4p:8778/proxy/jolokia/ [HTTP/1.1 503 Service Unavailable
> 423ms]
> 3. The pod is running on web console, but the status is incorrect and can't
> get any logs by oc command line:
> oc get po
> NAME              READY     STATUS             RESTARTS   AGE
> amqbroker-d7y4p   0/1       CrashLoopBackOff   1          15m
> oc logs amqbroker-d7y4p
> Pod "amqbroker-d7y4p" in namespace "p1": container "amqbroker" is in waiting
> state.
 
Yeah, unfortunately it's quite possible for a pod to be "Running" but for the JVM not to actually be running. From the console, AFAIK, I only really have the pod state to go on.
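
For reference, the field that does distinguish the two cases is the per-container ready flag rather than the pod phase; it can be inspected with something like (a sketch, reusing the pod name from your comment):

oc get pod amqbroker-d7y4p -o yaml

and looking at status.containerStatuses[].ready, which stays false while the container is crash-looping.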

> 
> Please let me know if any other information is needed from my side.
> 
> Thanks,
> Xia

Comment 7 stlewis@redhat.com 2016-03-18 20:05:20 UTC
Added a fix to hide the Java link if the container isn't ready; we were just checking the pod state previously. PR is here -> https://github.com/openshift/origin/pull/8148

Comment 8 stlewis@redhat.com 2016-03-22 13:23:47 UTC
Whoops, totally forgot to set this to the right state...

Comment 9 Xia Zhao 2016-03-23 02:10:38 UTC
Verified on the dev_preview int online environment; the JMX and Threads consoles behaved OK. Closing as fixed. Thanks!

