Bug 969321 - threaddump for jbosseap/jbossas does not generate any log.
Status: CLOSED CURRENTRELEASE
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Version: 2.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Dan Mace
QA Contact: libra bugs
Keywords: Reopened
Reported: 2013-05-31 04:00 EDT by Johnny Liu
Modified: 2015-05-14 19:20 EDT

Doc Type: Bug Fix
Last Closed: 2013-06-11 00:14:41 EDT
Type: Bug


Description Johnny Liu 2013-05-31 04:00:42 EDT
Description of problem:
threaddump for jbosseap/jbossas does not generate any log.

Version-Release number of selected component (if applicable):
devenv_3296

How reproducible:
Always

Steps to Reproduce:
1. Create a jbosseap app.
2. Open a new terminal, log into this app, and monitor server.log with the tail command (see the sketch after this list).
3. Run the threaddump command against this app:
# rhc threaddump jbosseaptest
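
A rough sketch of steps 2 and 3 (the server.log glob mirrors the path pattern reported by rhc later in this bug; adjust for your own app name):

# terminal 1: SSH into the gear and follow JBoss's internal log
$ rhc ssh jbosseaptest
$ tail -f ~/*/logs/server.log

# terminal 2: request the thread dump from the client machine
$ rhc threaddump jbosseaptest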

Actual results:
In the terminal opened in step 2, no new log output appears.

Expected results:
New thread dump output should be generated in the log.

Additional info:
This issue occurs with both jbosseap and jbossas.
Comment 1 Dan Mace 2013-05-31 07:25:27 EDT
Thread dumps generated with a SIGQUIT are logged to:

(v1) ~/jbosseap-6.0/jbosseap-6.0/tmp/jbosseap-6.0.log
(v1) ~/jbossas-7/jbossas-7/tmp/jbossas-7.log
(v2) ~/tmp/jbossas.log
(v2) ~/tmp/jbosseap.log

This is reported to the user by the rhc output.

The server.log file is JBoss's internal log; the logs I mention are used to capture JVM stdout, and it's the JVM's signal handlers which produce these dumps, not JBoss.
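
For illustration, the same dump can be triggered by hand from inside the gear; this is only a sketch, the pgrep pattern is an assumption about the JVM process name, and the log path is the v2 one listed above:

# send SIGQUIT to the JVM; its signal handler writes the thread dump to its stdout log
$ kill -QUIT "$(pgrep -f jboss | head -n 1)"
# the dump appears here, not in server.log
$ tail -n 250 ~/tmp/jbosseap.log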

Behavior has not changed since v1; this is neither a regression nor a bug.

Thanks.
Comment 2 Johnny Liu 2013-05-31 08:44:04 EDT
(In reply to Dan Mace from comment #1)
> Thread dumps generated with a SIGQUIT are logged to:
> 
> (v1) ~/jbosseap-6.0/jbosseap-6.0/tmp/jbosseap-6.0.log
> (v1) ~/jbossas-7/jbossas-7/tmp/jbossas-7.log
> (v2) ~/tmp/jbossas.log
> (v2) ~/tmp/jbosseap.log
> 
> This is reported to the user by the rhc output.
> 
> The server.log file is JBoss's internal log; the logs I mention are used to
> capture JVM stdout, and it's the JVM's signal handlers which produce these
> dumps, not JBoss.
> 
> Behavior has not changed since v1; this is neither a regression nor a bug.
> 
> Thanks.

It looks like the broker is telling the user to look for the thread dump in the server.log file.

$ rhc threaddump jbosseaptest
Success
The thread dump file will be available via: rhc tail jbosseaptest -f */logs/server.log -o '-n 250'

In the /var/lib/openshift/.cartridge_repository/redhat-jbosseap/0.0.1/bin/control script, you will see the following lines:

function threaddump() {
        <--snip-->
        client_result "Success"
        client_result ""
        client_result "The thread dump file will be available via: rhc tail $OPENSHIFT_APP_NAME -f */logs/server.log -o '-n 250'"
        <--snip-->
}

It is pointing the user at an incorrect file path; the thread dump is not written to server.log.
Comment 3 Dan Mace 2013-05-31 08:57:37 EDT
Johnny,

You're right; good catch. Fixed by https://github.com/openshift/origin-server/pull/2705.
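
For context, the corrected message should point at the JVM stdout log instead of server.log, roughly along these lines (a sketch based on the verification output below, not necessarily the exact patched line):

        client_result "The thread dump file will be available via: rhc tail $OPENSHIFT_APP_NAME -f /tmp/jbosseap.log -o '-n 250'"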
Comment 4 openshift-github-bot 2013-05-31 13:53:30 EDT
Commit pushed to master at https://github.com/openshift/origin-server

https://github.com/openshift/origin-server/commit/15f0855ac3669507860b9afdbdb1181a5ce6486a
Bug 969321: Fix jboss thread dump log file path message
Comment 5 Johnny Liu 2013-06-03 05:06:21 EDT
Verified this bug with devenv_3307; PASS.

[jialiu@jialiu-pc1 ~]$ rhc threaddump jbosseaptest
Success
The thread dump file will be available via: rhc tail jbosseaptest -f /tmp/jbosseap.log -o '-n 250'
[jialiu@jialiu-pc1 ~]$ rhc tail jbosseaptest -f /tmp/jbosseap.log -o '-n 250'
	at sun.misc.Unsafe.park(Native Method)
	- parking to wait for  <0xc2811440> (a java.util.concurrent.SynchronousQueue$TransferStack)
	at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
	at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
	at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
	at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
	at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:722)
<--snip-->
Heap
 def new generation   total 17792K, used 10407K [0xbc040000, 0xbd380000, 0xc1590000)
  eden space 15872K,  53% used [0xbc040000, 0xbc889ca8, 0xbcfc0000)
  from space 1920K, 100% used [0xbd1a0000, 0xbd380000, 0xbd380000)
  to   space 1920K,   0% used [0xbcfc0000, 0xbcfc0000, 0xbd1a0000)
 tenured generation   total 38784K, used 22662K [0xc1590000, 0xc3b70000, 0xcc040000)
   the space 38784K,  58% used [0xc1590000, 0xc2bb1a20, 0xc2bb1c00, 0xc3b70000)
 compacting perm gen  total 32768K, used 32555K [0xcc040000, 0xce040000, 0xd2640000)
   the space 32768K,  99% used [0xcc040000, 0xce00af48, 0xce00b000, 0xce040000)
    ro space 10240K,  60% used [0xd2640000, 0xd2c52900, 0xd2c52a00, 0xd3040000)
    rw space 12288K,  63% used [0xd3040000, 0xd37e7910, 0xd37e7a00, 0xd3c40000)
