Bug 1243317 - [STG] 503 error when connecting to the JavaScript console for a Java pod that has the Jolokia agent running, if the master is out of the SDN
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Management Console
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Wesley Hearn
QA Contact: Wei Sun
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-07-15 08:06 UTC by chunchen
Modified: 2016-09-30 02:16 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-08 20:14:50 UTC
Target Upstream Version:
Embargoed:



Description chunchen 2015-07-15 08:06:31 UTC
Description of problem:
When creating a Java pod from the image fabric8/fabric8/quickstart-java-simple-mainclass:2.2.7 and waiting until the pod and container are running, connecting to the Java pod from the web console does not show the JVM details.

Version-Release number of selected component (if applicable):
V3 STG (3.0 RC Build)

How reproducible:
always

Steps to Reproduce:
1. Create a java container
$ oc process -f http://repo1.maven.org/maven2/io/fabric8/jube/images/fabric8/quickstart-java-simple-mainclass/2.2.7/quickstart-java-simple-mainclass-2.2.7-kubernetes.json|oc create -f -

2. Wait until the pod and container are in Running status, log in to the web console, select the right project, and connect to the Java pod.

Actual results:
Nothing is shown on the JavaScript console.

Expected results:
The JVM tree and other Java plugin details should be shown against V3 STG.

Additional info:
The same bug against OSE:
https://bugzilla.redhat.com/show_bug.cgi?id=1230483

Comment 1 Ben Parees 2015-07-22 20:02:08 UTC

*** This bug has been marked as a duplicate of bug 1230483 ***

Comment 2 Xiaoli Tian 2015-07-23 03:25:19 UTC
This is a bug for STG V3. To work around this issue, the Ops team needs to configure the master as a node, but leave it unschedulable. Ops can then add this step to the release scripts (or a similar place) to make sure the issue does not happen again on the dedicated nodes in PROD later.
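The workaround above can be sketched as follows for OpenShift 3.0. This is a hedged sketch, not the exact commands Ops ran: the node name `master.example.com` is hypothetical, and the step of registering the master host as a node depends on the install method, so the live commands are left commented.

```shell
# Hypothetical node name; substitute your master's actual hostname.
MASTER_NODE=master.example.com

# Once the master host is registered as a node (so it joins the SDN and
# can reach pod IPs), mark it unschedulable so no user pods land on it.
# On a live 3.0 cluster, run (uncommented):
# oadm manage-node "${MASTER_NODE}" --schedulable=false
# oc get nodes   # the master should now show SchedulingDisabled
echo "oadm manage-node ${MASTER_NODE} --schedulable=false"
```

With the master on the SDN, the API server can proxy console requests to pod IPs; marking it unschedulable keeps its role as a master otherwise unchanged.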

Comment 3 Ben Parees 2015-07-23 03:29:43 UTC
Ok, assigning to Wesley to handle from the ops side.

Comment 4 Wesley Hearn 2015-07-24 20:11:34 UTC
Ok, the master has been set up as a node in stage. It is currently a manual step until we can get it into the Ansible playbooks.

Comment 5 chunchen 2015-07-27 05:47:18 UTC
Still getting a 503; it shows the errors below:

Error: 'dial tcp 10.1.3.239:8778: i/o timeout'
Trying to reach: 'http://10.1.3.239:8778/jolokia/?main-tab=openshiftConsole%2Fbrowse&sub-tab=openshiftConsole%2Fbrowse-pods/&maxDepth=7&maxCollectionSize=500&ignoreErrors=true&canonicalNaming=false'
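The 'dial tcp ... i/o timeout' above means the master could not open a connection to the pod's SDN IP on the Jolokia port when proxying for the console. A quick hedged check, using the pod IP and port taken from this error message (adjust for your own pod), is to dial the Jolokia agent directly from the master host:

```shell
# Pod IP and Jolokia port are taken from the error message above;
# substitute the values for your own pod.
POD_IP=10.1.3.239
JOLOKIA_PORT=8778

# The console proxies requests to this base URL. If the master has no
# route onto the SDN, dialing it times out exactly as in the error above.
JOLOKIA_URL="http://${POD_IP}:${JOLOKIA_PORT}/jolokia/"
echo "${JOLOKIA_URL}"

# On a live cluster, run this from the master host (uncomment):
# curl --max-time 5 "${JOLOKIA_URL}version"
```

If the curl from the master times out while the same curl from a node succeeds, the master is not on the SDN, which matches the workaround in comment 2.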

Comment 6 chunchen 2015-07-27 06:43:55 UTC
After re-testing, the bug is fixed, so I am marking it as verified; please refer to the messages below:

<whearn> chunchen: WRT: bz#1243317, I think I have it fixed now, can you do one more quick test?
<chunchen> whearn, ok, try it now
<chunchen> whearn, i works now, i will verify the bug, thanks.
<chunchen> .time
<li-bot> chunchen: Error: "time" is not a valid command.
<whearn> 02:35 monday july 27th
<chunchen> whearn, thanks very much! ^^

Comment 7 Wesley Hearn 2015-07-27 06:50:29 UTC
Per comment #6, fixed.

Comment 8 chunchen 2015-07-27 06:52:23 UTC
According to comment #6, mark it as VERIFIED.

