Bug 1083563 - Mod_cluster draining pending requests could fail since deployments are missing a dependency on the mod_cluster service (seen on JDK8)
Summary: Mod_cluster draining pending requests could fail since deployments are missing...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: JBoss Enterprise Application Platform 6
Classification: JBoss
Component: mod_cluster
Version: 6.3.0
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: DR5
Target Release: EAP 6.4.0
Assignee: Radoslav Husar
QA Contact: Michal Karm Babacek
URL:
Whiteboard:
Depends On:
Blocks: java8 1161079 1243874
 
Reported: 2014-04-02 13:14 UTC by Michal Karm Babacek
Modified: 2019-08-19 12:42 UTC (History)
CC: 6 users

Fixed In Version:
Clone Of:
Clones: 1161079
Environment:
Last Closed:
Type: Bug
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker MODCLUSTER-399 0 Major Closed Draining pending requests fails with Oracle JDK8 2020-10-22 04:33:53 UTC
Red Hat Issue Tracker WFLY-3942 0 Major Closed Race condition with clean shutdown and mod_cluster session draining 2020-10-22 04:33:40 UTC

Description Michal Karm Babacek 2014-04-02 13:14:58 UTC
Please see https://issues.jboss.org/browse/MODCLUSTER-399

Comment 1 Michal Karm Babacek 2014-09-03 17:12:22 UTC
Update: This issue is still valid with the latest Oracle JDK 1.8.0_20 on RHEL, Solaris, and Windows systems. Please see the logs and descriptions at https://issues.jboss.org/browse/MODCLUSTER-399.

Comment 3 Radoslav Husar 2014-09-03 22:56:27 UTC
I assume mod_cluster is the right BZ component here, as JDK8 would not affect anything else.

Still under investigation.

Comment 4 Radoslav Husar 2014-09-05 00:25:30 UTC
I cannot reproduce the issue. The sessions drained on JDK8 as expected in an isolated scenario. Can you isolate the issue into a simpler test?
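
A minimal standalone check along these lines could serve as the simpler test: hold one slow request open while the server is shut down cleanly, and see whether it completes (drained) or is cut off. The context path, endpoint, and port below are assumptions and would need to match the actual test deployment.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class DrainCheck {
    public static void main(String[] args) throws Exception {
        Thread request = new Thread(() -> {
            try {
                // Hypothetical long-running endpoint; adjust to the real test deployment.
                HttpURLConnection conn = (HttpURLConnection)
                        new URL("http://localhost:8080/clusterbench/slow").openConnection();
                conn.setReadTimeout(30_000);
                int status = conn.getResponseCode(); // blocks until the slow response arrives
                System.out.println("In-flight request finished with HTTP " + status);
            } catch (IOException e) {
                // If draining is broken, the connection is cut during shutdown instead.
                System.out.println("In-flight request failed: " + e);
            }
        });
        request.start();
        Thread.sleep(1_000); // give the request time to reach the servlet
        System.out.println("Now issue ':shutdown' via the CLI and watch for MODCLUSTER000021");
        request.join();
    }
}

On a good run the in-flight request finishes and the server log shows the drain messages before the context is unregistered; on a bad run the client sees the connection dropped.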

Comment 6 Radim Hatlapatka 2014-10-01 13:13:16 UTC
I have prepared a Beaker machine with the environment set up. I am able to hit the issue almost every time. For details, please see the comments at https://issues.jboss.org/browse/MODCLUSTER-399. I have also sent Rado an email with information on how to connect to that machine.

Comment 9 Michal Karm Babacek 2014-10-14 09:58:44 UTC
The bug is fixed in DR5.

EAP 6.3.0 shutdown log [WRONG]

[org.jboss.web] (ServerService Thread Pool -- 65) JBAS018224: Unregister web context: /clusterbench
[org.jboss.web] (ServerService Thread Pool -- 67) JBAS018224: Unregister web context: /simplecontext
[org.apache.catalina.core] (MSC service thread 1-2) JBWEB001079: Container org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/] has not been started
[org.jboss.modcluster] (ServerService Thread Pool -- 66) MODCLUSTER000021: All pending requests drained from default-host:/clusterbench in 0.1 seconds
[org.jboss.modcluster] (ServerService Thread Pool -- 66) MODCLUSTER000002: Initiating mod_cluster shutdown
[org.apache.coyote.ajp] (MSC service thread 1-3) JBWEB003048: Pausing Coyote AJP/1.3 on ajp-/192.168.122.172:8009
[org.apache.coyote.ajp] (MSC service thread 1-3) JBWEB003051: Stopping Coyote AJP/1.3 on ajp-/192.168.122.172:8009


EAP 6.4.0.DR5 shutdown log [GOOD]

[org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-1) JBWEB003075: Coyote HTTP/1.1 pausing on: http-/192.168.122.172:8080
[org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-1) JBWEB003077: Coyote HTTP/1.1 stopping on : http-/192.168.122.172:8080
[org.jboss.modcluster] (ServerService Thread Pool -- 63) MODCLUSTER000021: All pending requests drained from default-host:/clusterbench in 0.0 seconds
[org.jboss.modcluster] (ServerService Thread Pool -- 61) MODCLUSTER000021: All pending requests drained from default-host:/clusterbench in 0.0 seconds
[org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 77) ISPN000029: Passivating all entries to disk
[org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 77) ISPN000030: Passivated 0 entries in 2 milliseconds
[org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 77) JBAS010282: Stopped default-host/clusterbench cache from web container
[org.jboss.as.server.deployment] (MSC service thread 1-2) JBAS015877: Stopped deployment clusterbench.war (runtime-name: clusterbench.war) in 365ms
[org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 79) ISPN000029: Passivating all entries to disk
[org.infinispan.eviction.PassivationManagerImpl] (ServerService Thread Pool -- 79) ISPN000030: Passivated 0 entries in 0 milliseconds
[org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 79) JBAS010282: Stopped repl cache from web container
[org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000080: Disconnecting and closing JGroups Channel
[org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-1) ISPN000082: Stopping the RpcDispatcher
[org.jboss.modcluster] (ServerService Thread Pool -- 65) MODCLUSTER000025: Failed to drain 966 remaining active sessions from default-host:/simplecontext within 10.0 seconds
[org.jboss.modcluster] (ServerService Thread Pool -- 65) MODCLUSTER000021: All pending requests drained from default-host:/simplecontext in 10.0 seconds
[org.jboss.modcluster] (ServerService Thread Pool -- 61) MODCLUSTER000024: All active sessions drained from default-host:/simplecontext in 10.0 seconds
[org.jboss.as.server.deployment] (MSC service thread 1-3) JBAS015877: Stopped deployment simplecontext.war (runtime-name: simplecontext.war) in 10174ms
[org.jboss.modcluster] (ServerService Thread Pool -- 61) MODCLUSTER000021: All pending requests drained from default-host:/simplecontext in 10.0 seconds
[org.jboss.modcluster] (ServerService Thread Pool -- 61) MODCLUSTER000002: Initiating mod_cluster shutdown
[org.apache.coyote.ajp] (MSC service thread 1-3) JBWEB003048: Pausing Coyote AJP/1.3 on ajp-/192.168.122.172:8009
[org.apache.coyote.ajp] (MSC service thread 1-3) JBWEB003051: Stopping Coyote AJP/1.3 on ajp-/192.168.122.172:8009
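
For context, the "missing dependency on the mod_cluster service" from the summary is a service-ordering constraint in JBoss MSC: when a deployment's web context service depends on the mod_cluster service, MSC stops the context (the point at which draining happens) before mod_cluster itself goes away, which matches the ordering visible in the DR5 log above. The sketch below only illustrates that kind of dependency declaration; the service names and the logging service are made up and are not the actual EAP service names.

import org.jboss.msc.service.Service;
import org.jboss.msc.service.ServiceContainer;
import org.jboss.msc.service.ServiceName;
import org.jboss.msc.service.StartContext;
import org.jboss.msc.service.StopContext;

public class DependencyOrderingSketch {

    // Trivial service that only logs its lifecycle so the start/stop order is visible.
    static final class LoggingService implements Service<Void> {
        private final String name;
        LoggingService(String name) { this.name = name; }
        @Override public void start(StartContext context) { System.out.println("start " + name); }
        @Override public void stop(StopContext context)   { System.out.println("stop  " + name); }
        @Override public Void getValue() { return null; }
    }

    public static void main(String[] args) throws InterruptedException {
        ServiceContainer container = ServiceContainer.Factory.create();

        // Made-up service names, purely for illustration.
        ServiceName modCluster = ServiceName.of("example", "modcluster");
        ServiceName webContext = ServiceName.of("example", "web", "clusterbench");

        container.addService(modCluster, new LoggingService("mod_cluster")).install();

        // The web context declares an explicit dependency on mod_cluster, so MSC starts
        // mod_cluster first and, on shutdown, stops the context first. That stop window
        // is where pending requests and active sessions get drained.
        container.addService(webContext, new LoggingService("web context"))
                 .addDependency(modCluster)
                 .install();

        container.shutdown();
        container.awaitTermination();
    }
}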



Thank you for the effort put into fixing this ugly issue :-)

Comment 10 Radoslav Husar 2014-11-05 14:05:37 UTC
Updated the summary to be less confusing.

