Affects: Release Notes
project_key: JBPAPP6

It seems that HornetQ servers are not able to create a cluster when they're bound to IPv6 link-local addresses.

Test scenario:

1. Start server A on fe80::210:18ff:fe2b:91c1 with cluster configuration:
{code}
...
<connectors>
    <netty-connector name="netty" socket-binding="messaging"/>
    <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
    </netty-connector>
    <in-vm-connector name="in-vm" server-id="0"/>
</connectors>
<acceptors>
    <netty-acceptor name="netty" socket-binding="messaging"/>
    <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
        <param key="direct-deliver" value="false"/>
    </netty-acceptor>
    <in-vm-acceptor name="in-vm" server-id="0"/>
</acceptors>
<broadcast-groups>
    <broadcast-group name="bg-group1">
        <local-bind-address>fe80::210:18ff:fe2b:91c1</local-bind-address>
        <local-bind-port>56312</local-bind-port>
        <group-address>FF02:0:0:0:0:0:0:11</group-address>
        <group-port>9875</group-port>
        <broadcast-period>2000</broadcast-period>
        <connector-ref>netty</connector-ref>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <local-bind-address>fe80::210:18ff:fe2b:91c1</local-bind-address>
        <group-address>FF02:0:0:0:0:0:0:11</group-address>
        <group-port>9875</group-port>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>
<cluster-connections>
    <cluster-connection name="my-cluster">
        <address>jms</address>
        <connector-ref>netty</connector-ref>
        <retry-interval>1000</retry-interval>
        <use-duplicate-detection>true</use-duplicate-detection>
        <forward-when-no-consumers>false</forward-when-no-consumers>
        <max-hops>1</max-hops>
        <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
</cluster-connections>
...
<interfaces>
    <interface name="management">
        <inet-address value="fe80::210:18ff:fe2b:91c1"/>
    </interface>
    <interface name="public">
        <inet-address value="fe80::210:18ff:fe2b:91c1"/>
    </interface>
    <interface name="unsecure">
        <inet-address value="fe80::210:18ff:fe2b:91c1"/>
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    ...
    <socket-binding name="messaging" port="5445"/>
    <socket-binding name="messaging-throughput" port="5455"/>
    ...
</socket-binding-group>
{code}
2. Start server B on fe80::210:18ff:fe2b:91c4 with:
{code}
<connectors>
    <netty-connector name="netty" socket-binding="messaging"/>
    <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
    </netty-connector>
    <in-vm-connector name="in-vm" server-id="0"/>
</connectors>
<acceptors>
    <netty-acceptor name="netty" socket-binding="messaging"/>
    <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
        <param key="batch-delay" value="50"/>
        <param key="direct-deliver" value="false"/>
    </netty-acceptor>
    <in-vm-acceptor name="in-vm" server-id="0"/>
</acceptors>
<broadcast-groups>
    <broadcast-group name="bg-group1">
        <local-bind-address>fe80::210:18ff:fe2b:91c4</local-bind-address>
        <local-bind-port>56312</local-bind-port>
        <group-address>FF02:0:0:0:0:0:0:11</group-address>
        <group-port>9875</group-port>
        <broadcast-period>2000</broadcast-period>
        <connector-ref>netty</connector-ref>
    </broadcast-group>
</broadcast-groups>
<discovery-groups>
    <discovery-group name="dg-group1">
        <local-bind-address>fe80::210:18ff:fe2b:91c4</local-bind-address>
        <group-address>FF02:0:0:0:0:0:0:11</group-address>
        <group-port>9875</group-port>
        <refresh-timeout>10000</refresh-timeout>
    </discovery-group>
</discovery-groups>
<cluster-connections>
    <cluster-connection name="my-cluster">
        <address>jms</address>
        <connector-ref>netty</connector-ref>
        <retry-interval>1000</retry-interval>
        <use-duplicate-detection>true</use-duplicate-detection>
        <forward-when-no-consumers>false</forward-when-no-consumers>
        <max-hops>1</max-hops>
        <discovery-group-ref discovery-group-name="dg-group1"/>
    </cluster-connection>
</cluster-connections>
<interfaces>
    <interface name="management">
        <inet-address value="fe80::210:18ff:fe2b:91c4"/>
    </interface>
    <interface name="public">
        <inet-address value="fe80::210:18ff:fe2b:91c4"/>
    </interface>
    <interface name="unsecure">
        <inet-address value="fe80::210:18ff:fe2b:91c4"/>
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
    ...
    <socket-binding name="messaging" port="5445"/>
    <socket-binding name="messaging-throughput" port="5455"/>
    ...
</socket-binding-group>
{code}

The configuration uses <group-address>FF02:0:0:0:0:0:0:11</group-address>, which seems to be the correct multicast address for IPv6 link-local addresses. Output from tcpdump on server B shows that multicast packets from server A are arriving:
{code}
[root@station4 bin]# tcpdump -t -n -i eth0 -s 512 -vv ip6 or proto ipv6
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 512 bytes
IP6 (hlim 1, next-header Fragment (44) payload length: 1456) fe80::210:18ff:fe2b:91c4 > ff02::11: frag (0x5cf703b7:0|1448) 56312 > sapv1: UDP, length 4096
IP6 (hlim 1, next-header Fragment (44) payload length: 1456) fe80::210:18ff:fe2b:91c4 > ff02::11: frag (0x5cf703b7:1448|1448)
IP6 (hlim 1, next-header Fragment (44) payload length: 1216) fe80::210:18ff:fe2b:91c4 > ff02::11: frag (0x5cf703b7:2896|1208)
IP6 (hlim 1, next-header Fragment (44) payload length: 1456) fe80::210:18ff:fe2b:91c1 > ff02::11: frag (0x51b69359:0|1448) 56312 > sapv1: UDP, length 4096
IP6 (hlim 1, next-header Fragment (44) payload length: 1456) fe80::210:18ff:fe2b:91c1 > ff02::11: frag (0x51b69359:1448|1448)
...
{code}

When the servers are started, both of them respond to IPv6 multicast ping:
{code}
[jbossqa@station1 internal]$ ping6 ff02::11%eth0
PING ff02::11%eth0(ff02::11) 56 data bytes
64 bytes from fe80::210:18ff:fe2b:91c1: icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from fe80::210:18ff:fe2b:91c4: icmp_seq=1 ttl=64 time=0.203 ms
...
{code}

So it seems that both servers are registered to the multicast group and are broadcasting 'something', but it is not properly received on the other side. I'm not sure whether this is a configuration issue, so I'm setting normal priority; otherwise this is a blocker for IPv6 certification. I'm attaching the configuration from both servers.

Note: command for setting an IPv6 link-local address on the eth0 interface:
{code}
ip -6 address add fe80::210:18ff:fe2b:91c1/64 dev eth0
{code}

Output from netstat on server A:
{code}
netstat -npTl | grep java
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp   0   0 fe80::210:18ff:fe2b:91c1:5455    :::*   LISTEN   22604/java
tcp   0   0 fe80::210:18ff:fe2b:91c1:9999    :::*   LISTEN   22604/java
tcp   0   0 fe80::210:18ff:fe2b:91c1:8080    :::*   LISTEN   22604/java
tcp   0   0 fe80::210:18ff:fe2b:91c1:4447    :::*   LISTEN   22604/java
tcp   0   0 fe80::210:18ff:fe2b:91c1:5445    :::*   LISTEN   22604/java
tcp   0   0 fe80::210:18ff:fe2b:91c1:9990    :::*   LISTEN   22604/java
tcp   0   0 :::3528                          :::*   LISTEN   22604/java
udp   0   0 :::9875                          :::*            22604/java
udp   0   0 ::ffff:224.0.1.105:23364         :::*            22604/java
udp   0   0 fe80::210:18ff:fe2b:91c1:56312   :::*            22604/java
{code}
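As a side note on the address choice (an illustrative check, not part of the original report): in IPv6 multicast addresses the second hex digit is the scope field, and ff02::/16 denotes link-local scope, which is why FF02::11 looks like a suitable group address for the fe80::/10 bind addresses above. A minimal sketch with Python's standard ipaddress module:

```python
import ipaddress

# The broadcast/discovery group address from the configuration above.
group = ipaddress.IPv6Address("ff02::11")
assert group.is_multicast  # ff00::/8 is IPv6 multicast

# The second nibble of a multicast address is its scope: 2 = link-local.
scope_nibble = (int(group) >> 112) & 0xF
print(scope_nibble)  # 2

# The server bind addresses are link-local unicast (fe80::/10).
server_a = ipaddress.IPv6Address("fe80::210:18ff:fe2b:91c1")
print(server_a.is_link_local)  # True
```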
Attachment: Added: standalone-full-ha-A.xml Attachment: Added: standalone-full-ha-B.xml
Security: Removed: Public Added: JBoss Internal
Link: Added: This issue relates to JBPAPP-7470
This wasn't tested with global unicast IPv6 addresses, but it would almost certainly give the same result.
Labels: Added: eap6_prd_req
Link: Added: This issue is related to AS7-4140
Using link-local addresses for initial testing is asking for trouble, as they have special problems. Try using either loopback interface addresses or global addresses. See AS7-4140 for some example configurations.
I've tested this issue against global IPv6 addresses on Fedora 14 and all seems OK. What I did was this:

1. Make available two IPv6 global addresses on interface eth0:
{noformat}
# ip addr add 3ffe:ffff:100:f101:0:0:0:1 dev eth0
# ip addr add 3ffe:ffff:100:f101:0:0:0:2 dev eth0
{noformat}
2. Flush ip6tables to allow multicast messages through:
{noformat}
# ip6tables -F
{noformat}
3. Make two copies of AS 7.1.1.Final and edit the standalone-full-ha.xml files as follows:

3.1 Use the same global multicast addresses where required in both copies:
{noformat}
jgroups-udp -> ff0e::1
jgroups-mping -> ff0e::2
jgroups-diagnostics -> ff0e::3
modcluster -> ff0e::4
messaging broadcast group -> ff0e::5
messaging discovery group -> ff0e::5
{noformat}
Andy Taylor confirmed that the broadcast group and the discovery group should use the same multicast channel (therefore the same address and port combination).

3.2 Use the created unicast bind addresses where required, one per config, for example:
{noformat}
public interface bind address -> 3ffe:ffff:100:f101:0:0:0:1
management interface bind address -> 3ffe:ffff:100:f101:0:0:0:1
unsecure interface bind address -> 3ffe:ffff:100:f101:0:0:0:1
wsdl-host bind address -> 3ffe:ffff:100:f101:0:0:0:1
{noformat}
3.3 Disable security:
{noformat}
<hornetq-server>
    <clustered>true</clustered>
    <security-enabled>false</security-enabled>
    <persistence-enabled>true</persistence-enabled>
    ...
</hornetq-server>
{noformat}
4. Edit standalone.conf to set JAVA_OPTS to use -Djava.net.preferIPv4Stack=false and -Djava.net.preferIPv6Addresses=true

5.
Start the servers using the new configs, once in each of the two build directories {noformat} ./standalone.sh --server-config standalone-full-ha.xml {noformat} The output looked like this: {noformat} Server using 3ffe:ffff:100:f101:0:0:0:1: 12:44:40,046 ERROR [org.jboss.as] (Controller Boot Thread) JBAS015875: JBoss AS 7.1.1.Final-SNAPSHOT "Thunder" started (with errors) in 3620ms - Started 166 of 299 services (4 services failed or missing dependencies, 128 services are passive or on-demand) 12:44:43,574 INFO [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-5 (HornetQ-server-HornetQServerImpl::serverUUID=57fe4b44-6cbc-11e1-8b82-d2d33e5320c1-1717214301)) Bridge ClusterConnectionBridge@246fee3a [name=sf.my-cluster.2d08ddad-6cbd-11e1-bb37-d2d33e5320c1, queue=QueueImpl[name=sf.my-cluster.2d08ddad-6cbd-11e1-bb37-d2d33e5320c1, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=57fe4b44-6cbc-11e1-8b82-d2d33e5320c1]]@30813486 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@246fee3a [name=sf.my-cluster.2d08ddad-6cbd-11e1-bb37-d2d33e5320c1, queue=QueueImpl[name=sf.my-cluster.2d08ddad-6cbd-11e1-bb37-d2d33e5320c1, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=57fe4b44-6cbc-11e1-8b82-d2d33e5320c1]]@30813486 targetConnector=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=3ffe:ffff:100:f101:0:0:0:2%2], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@613040470[nodeUUID=57fe4b44-6cbc-11e1-8b82-d2d33e5320c1, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=3ffe:ffff:100:f101:0:0:0:1%2, address=jms, server=HornetQServerImpl::serverUUID=57fe4b44-6cbc-11e1-8b82-d2d33e5320c1])) [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=3ffe:ffff:100:f101:0:0:0:2%2], discoveryGroupConfiguration=null]] is connected Server using 3ffe:ffff:100:f101:0:0:0:2: 
12:43:59,330 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.1.Final-SNAPSHOT "Thunder" started in 3479ms - Started 173 of 302 services (128 services are passive or on-demand) 12:44:43,610 INFO [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-15 (HornetQ-server-HornetQServerImpl::serverUUID=2d08ddad-6cbd-11e1-bb37-d2d33e5320c1-715747282)) Bridge ClusterConnectionBridge@6808680 [name=sf.my-cluster.57fe4b44-6cbc-11e1-8b82-d2d33e5320c1, queue=QueueImpl[name=sf.my-cluster.57fe4b44-6cbc-11e1-8b82-d2d33e5320c1, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=2d08ddad-6cbd-11e1-bb37-d2d33e5320c1]]@5ffe40d5 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6808680 [name=sf.my-cluster.57fe4b44-6cbc-11e1-8b82-d2d33e5320c1, queue=QueueImpl[name=sf.my-cluster.57fe4b44-6cbc-11e1-8b82-d2d33e5320c1, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=2d08ddad-6cbd-11e1-bb37-d2d33e5320c1]]@5ffe40d5 targetConnector=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=3ffe:ffff:100:f101:0:0:0:1%2], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@980942884[nodeUUID=2d08ddad-6cbd-11e1-bb37-d2d33e5320c1, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=3ffe:ffff:100:f101:0:0:0:2%2, address=jms, server=HornetQServerImpl::serverUUID=2d08ddad-6cbd-11e1-bb37-d2d33e5320c1])) [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=3ffe:ffff:100:f101:0:0:0:1%2], discoveryGroupConfiguration=null]] is connected {noformat} These seem to cluster fine. There is a problem with jacorb however complaining about address already in use, but that is another issue.
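For reference (an illustrative check, not part of the test above), the addresses used in this run can be classified with Python's standard ipaddress module: the 3ffe:ffff:... bind addresses are not link-local, and the ff0e::/16 group addresses are global-scope multicast (scope nibble 0xE), unlike the link-local ff02::/16 group in the original report:

```python
import ipaddress

# Unicast bind address used in this test run.
bind = ipaddress.IPv6Address("3ffe:ffff:100:f101::1")
print(bind.is_link_local)  # False -- not in fe80::/10

# Multicast group used in this test run; second nibble is the scope.
group = ipaddress.IPv6Address("ff0e::5")
assert group.is_multicast
print(hex((int(group) >> 112) & 0xF))  # 0xe -> global scope
```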
I think the problem with the original issue is that you are not specifying zone ids with the addresses you are using to indicate the interface. If you are using two link-local addresses, specify the zone ids. If you are using one link-local address, specify the zone id and use port offsets to avoid port conflicts on the single address.
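To illustrate the zone-id point (a hypothetical sketch, assuming Python 3.9+ where the ipaddress module supports scoped addresses): a link-local address only identifies an interface when a zone id is appended after `%`:

```python
import ipaddress

# Without a zone id, a fe80::/10 address doesn't say which interface to use.
plain = ipaddress.IPv6Address("fe80::210:18ff:fe2b:91c1")
print(plain.scope_id)  # None

# With a zone id (interface name or index), the address is fully qualified.
scoped = ipaddress.IPv6Address("fe80::210:18ff:fe2b:91c1%eth0")
print(scoped.scope_id)  # 'eth0'
```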
I tried with a single link-local address and zone id with port offsets and things blew up: The server starts OK: {noformat} [nrla@lenovo bin]$ ./standalone.sh --server-config standalone-full-ha.xml ========================================================================= JBoss Bootstrap Environment JBOSS_HOME: /tmp/as71 JAVA: /opt/jdk-1.6.0_26/bin/java JAVA_OPTS: -server -XX:+UseCompressedOops -XX:+TieredCompilation -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=false -Djava.net.preferIPv6Addresses=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Djboss.server.default.config=standalone.xml ========================================================================= 13:02:26,744 INFO [org.jboss.modules] JBoss Modules version 1.1.1.GA 13:02:26,904 INFO [org.jboss.msc] JBoss MSC version 1.0.2.GA 13:02:26,948 INFO [org.jboss.as] JBAS015899: JBoss AS 7.1.1.Final-SNAPSHOT "Thunder" starting 13:02:28,062 INFO [org.xnio] XNIO Version 3.0.3.GA 13:02:28,067 INFO [org.jboss.as.server] JBAS015888: Creating http management service using socket-binding (management-http) 13:02:28,084 INFO [org.xnio.nio] XNIO NIO Implementation Version 3.0.3.GA 13:02:28,095 INFO [org.jboss.remoting] JBoss Remoting version 3.2.3.GA 13:02:28,143 INFO [org.jboss.as.logging] JBAS011502: Removing bootstrap log handlers 13:02:28,179 INFO [org.jboss.as.configadmin] (ServerService Thread Pool -- 34) JBAS016200: Activating ConfigAdmin Subsystem 13:02:28,195 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 39) JBAS010280: Activating Infinispan subsystem. 13:02:28,231 INFO [org.jboss.as.jacorb] (ServerService Thread Pool -- 40) JBAS016300: Activating JacORB Subsystem 13:02:28,265 INFO [org.jboss.as.clustering.jgroups] (ServerService Thread Pool -- 45) JBAS010260: Activating JGroups subsystem. 
13:02:28,266 INFO [org.jboss.as.connector] (MSC service thread 1-4) JBAS010408: Starting JCA Subsystem (JBoss IronJacamar 1.0.9.Final) 13:02:28,303 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 35) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3) 13:02:28,358 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 52) JBAS011800: Activating Naming Subsystem 13:02:28,362 INFO [org.jboss.as.osgi] (ServerService Thread Pool -- 53) JBAS011940: Activating OSGi Subsystem 13:02:28,364 INFO [org.jboss.as.naming] (MSC service thread 1-7) JBAS011802: Starting Naming Service 13:02:28,377 INFO [org.jboss.as.security] (ServerService Thread Pool -- 58) JBAS013101: Activating Security Subsystem 13:02:28,381 INFO [org.jboss.as.mail.extension] (MSC service thread 1-2) JBAS015400: Bound mail session [java:jboss/mail/Default] 13:02:28,405 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 62) JBAS015537: Activating WebServices Extension 13:02:28,410 INFO [org.jboss.as.security] (MSC service thread 1-5) JBAS013100: Current PicketBox version=4.0.7.Final 13:02:28,416 INFO [org.jboss.jaxr] (MSC service thread 1-7) JBAS014000: Started JAXR subsystem, binding JAXR connection factory into JNDI as: java:jboss/jaxr/ConnectionFactory 13:02:28,640 INFO [org.jboss.as.modcluster] (MSC service thread 1-4) JBAS011704: Mod_cluster uses default load balancer provider 13:02:28,738 INFO [org.apache.coyote.ajp.AjpProtocol] (MSC service thread 1-2) Starting Coyote AJP/1.3 on ajp--fe80%3A0%3A0%3A0%3Af2de%3Af1ff%3Afe40%3A75b8%252-8109 13:02:28,743 INFO [org.jboss.ws.common.management.AbstractServerConfig] (MSC service thread 1-3) JBoss Web Services - Stack CXF Server 4.0.2.GA 13:02:28,751 INFO [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-8) Starting Coyote HTTP/1.1 on http--fe80%3A0%3A0%3A0%3Af2de%3Af1ff%3Afe40%3A75b8%252-8180 13:02:28,788 INFO [org.jboss.modcluster.ModClusterService] (MSC service thread 1-4) 
Initializing mod_cluster 1.2.0.Final 13:02:28,788 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 39) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be pasivated. 13:02:28,814 INFO [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 39) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be pasivated. 13:02:28,864 INFO [org.jboss.modcluster.advertise.impl.AdvertiseListenerImpl] (MSC service thread 1-4) Listening to proxy advertisements on 224.0.1.105:23364 13:02:29,017 INFO [org.jboss.as.jacorb] (MSC service thread 1-1) JBAS016330: CORBA ORB Service started 13:02:29,071 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-4) live server is starting with configuration HornetQ Configuration (clustered=true,backup=false,sharedStore=true,journalDirectory=/tmp/as71/standalone/data/messagingjournal,bindingsDirectory=/tmp/as71/standalone/data/messagingbindings,largeMessagesDirectory=/tmp/as71/standalone/data/messaginglargemessages,pagingDirectory=/tmp/as71/standalone/data/messagingpaging) 13:02:29,074 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-4) Waiting to obtain live lock 13:02:29,101 WARN [jacorb.iiop.address] (MSC service thread 1-1) init_host, fe80:0:0:0:f2de:f1ff:fe40:75b8%2 is local-link address 13:02:29,149 INFO [org.hornetq.core.persistence.impl.journal.JournalStorageManager] (MSC service thread 1-4) Using AIO Journal 13:02:29,161 WARN [jacorb.iiop.address] (MSC service thread 1-1) init_host, fe80:0:0:0:f2de:f1ff:fe40:75b8%2 is local-link address 13:02:29,168 INFO [org.jboss.as.remoting] (MSC service thread 1-7) JBAS017100: Listening on /fe80:0:0:0:f2de:f1ff:fe40:75b8%2:10099 13:02:29,168 INFO [org.jboss.as.remoting] (MSC service thread 1-6) JBAS017100: Listening on 
fe80:0:0:0:f2de:f1ff:fe40:75b8%2/fe80:0:0:0:f2de:f1ff:fe40:75b8%2:4547 13:02:29,174 WARN [jacorb.iiop.address] (MSC service thread 1-1) init_host, fe80:0:0:0:f2de:f1ff:fe40:75b8%2 is local-link address 13:02:29,175 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-8) JBAS015012: Started FileSystemDeploymentService for directory /tmp/as71/standalone/deployments 13:02:29,184 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-8) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS] 13:02:29,190 WARN [jacorb.iiop.address] (MSC service thread 1-1) init_host, fe80:0:0:0:f2de:f1ff:fe40:75b8%2 is local-link address 13:02:29,227 INFO [org.jboss.as.jacorb] (MSC service thread 1-1) JBAS016328: CORBA Naming Service started 13:02:29,368 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager] (MSC service thread 1-4) Waiting to obtain live lock 13:02:29,369 INFO [org.hornetq.core.server.impl.AIOFileLockNodeManager] (MSC service thread 1-4) Live Server Obtained live lock 13:02:29,738 WARN [org.hornetq.core.server.cluster.impl.BroadcastGroupImpl] (MSC service thread 1-4) local-bind-address specified for broadcast group but no local-bind-port specified so socket will NOT be bound to a local address/port 13:02:29,787 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-4) Started Netty Acceptor version 3.2.5.Final-a96d88c fe80:0:0:0:f2de:f1ff:fe40:75b8%2:5555 for CORE protocol 13:02:29,790 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-4) Started Netty Acceptor version 3.2.5.Final-a96d88c fe80:0:0:0:f2de:f1ff:fe40:75b8%2:5545 for CORE protocol 13:02:29,791 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-4) Server is now live 13:02:29,792 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-4) HornetQ Server version 2.2.13.Final (HQ_2_2_13_FINAL_AS7, 122) [57fe4b44-6cbc-11e1-8b82-d2d33e5320c1]) started 13:02:29,809 INFO 
[org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-4) trying to deploy queue jms.queue.testQueue 13:02:29,824 INFO [org.jboss.as.messaging] (MSC service thread 1-4) JBAS011601: Bound messaging object to jndi name java:/queue/test 13:02:29,838 INFO [org.jboss.as.messaging] (MSC service thread 1-4) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/queue/test 13:02:29,850 INFO [org.jboss.as.messaging] (MSC service thread 1-1) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory 13:02:29,851 INFO [org.jboss.as.messaging] (MSC service thread 1-1) JBAS011601: Bound messaging object to jndi name java:/RemoteConnectionFactory 13:02:29,853 INFO [org.jboss.as.messaging] (MSC service thread 1-5) JBAS011601: Bound messaging object to jndi name java:/ConnectionFactory 13:02:29,854 INFO [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-7) trying to deploy queue jms.topic.testTopic 13:02:29,874 INFO [org.jboss.as.deployment.connector] (MSC service thread 1-6) JBAS010406: Registered connection factory java:/JmsXA 13:02:29,884 INFO [org.hornetq.ra.HornetQResourceAdapter] (MSC service thread 1-6) HornetQ resource adaptor started 13:02:29,885 INFO [org.jboss.as.connector.services.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-6) IJ020002: Deployed: file://RaActivatorhornetq-ra 13:02:29,887 INFO [org.jboss.as.deployment.connector] (MSC service thread 1-6) JBAS010401: Bound JCA ConnectionFactory [java:/JmsXA] 13:02:29,963 INFO [org.jboss.as.messaging] (MSC service thread 1-7) JBAS011601: Bound messaging object to jndi name java:/topic/test 13:02:29,964 INFO [org.jboss.as.messaging] (MSC service thread 1-7) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/topic/test 13:02:30,062 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://[fe80:0:0:0:f2de:f1ff:fe40:75b8%2]:10090 13:02:30,062 INFO 
[org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.1.Final-SNAPSHOT "Thunder" started in 3528ms - Started 173 of 302 services (128 services are passive or on-demand) {noformat} but then goes wild when I start the second server and the Bridge tries to connect: {noformat} 13:06:10,285 ERROR [org.hornetq.core.remoting.impl.netty.NettyConnector] (Thread-3 (HornetQ-server-HornetQServerImpl::serverUUID=57fe4b44-6cbc-11e1-8b82-d2d33e5320c1-1468014175)) Failed to create netty connection: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) [rt.jar:1.6.0_26] at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) [rt.jar:1.6.0_26] at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) [rt.jar:1.6.0_26] at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) [rt.jar:1.6.0_26] at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) [rt.jar:1.6.0_26] at java.net.Socket.connect(Socket.java:529) [rt.jar:1.6.0_26] at org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.connect(OioClientSocketPipelineSink.java:114) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.eventSunk(OioClientSocketPipelineSink.java:74) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.Channels.connect(Channels.java:541) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:210) [netty-3.2.6.Final.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:227) [netty-3.2.6.Final.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:188) [netty-3.2.6.Final.jar:] at org.hornetq.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:473) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:1143) [hornetq-core-2.2.13.Final.jar:] at 
org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:993) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:224) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:747) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:588) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl$3.run(ServerLocatorImpl.java:549) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100) [hornetq-core-2.2.13.Final.jar:] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_26] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_26] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_26] 13:06:10,289 ERROR [org.hornetq.core.remoting.impl.netty.NettyConnector] (Thread-29 (HornetQ-server-HornetQServerImpl::serverUUID=57fe4b44-6cbc-11e1-8b82-d2d33e5320c1-1468014175)) Failed to create netty connection: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) [rt.jar:1.6.0_26] at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) [rt.jar:1.6.0_26] at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) [rt.jar:1.6.0_26] at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) [rt.jar:1.6.0_26] at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) [rt.jar:1.6.0_26] at java.net.Socket.connect(Socket.java:529) [rt.jar:1.6.0_26] at org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.connect(OioClientSocketPipelineSink.java:114) [netty-3.2.6.Final.jar:] at 
org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.eventSunk(OioClientSocketPipelineSink.java:74) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.Channels.connect(Channels.java:541) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:210) [netty-3.2.6.Final.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:227) [netty-3.2.6.Final.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:188) [netty-3.2.6.Final.jar:] at org.hornetq.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:473) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:1143) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:993) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:224) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:663) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:619) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.server.cluster.impl.ClusterConnectionBridge.createSessionFactory(ClusterConnectionBridge.java:152) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.server.cluster.impl.BridgeImpl.connect(BridgeImpl.java:729) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.core.server.cluster.impl.BridgeImpl$ConnectRunnable.run(BridgeImpl.java:1005) [hornetq-core-2.2.13.Final.jar:] at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100) [hornetq-core-2.2.13.Final.jar:] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_26] at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_26] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_26] {noformat}
So, further investigation is required. It could be a Netty issue, but Netty is also used in Infinispan (though I don't think they have done any IPv6 testing there). I'll log the JacORB issue.
Logged. https://issues.jboss.org/browse/AS7-4156
Hi Richard, sorry for the late reply. I'll try to get it working again according to your comment. I've realized that there is also "ip6tables -F" and not only "iptables -F". This could be the main problem in my case.

Mirek
Assigning back to Mirek for further investigation.
I'll check it in ER4 test cycle.
Hi Richard, I moved to the QA lab and here I finally got the same results as you.

Testing with IPv6 global addresses, -Djava.net.preferIPv4Stack=false and -Djava.net.preferIPv6Addresses=true - OK (cluster was created).

Testing with IPv6 link-local addresses, -Djava.net.preferIPv4Stack=false and -Djava.net.preferIPv6Addresses=true - I can see log messages that the bridge was created, but I also see the same exception as you on the second server.

IPv6 link-local addresses:
server A: inet6 addr: fe80::21d:9ff:fe01:cc36/64 Scope:Link
server B: inet6 addr: fe80::216:36ff:fe34:881d/64 Scope:Link

Log from the second server:
{code}
...
06:44:52,784 DEBUG [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-13 (HornetQ-server-HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d-2142988662)) Connecting ClusterConnectionBridge@2d52912f [name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, queue=QueueImpl[name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d]]@7b9bbe8 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@2d52912f [name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, queue=QueueImpl[name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d]]@7b9bbe8 targetConnector=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:21d:9ff:fe01:cc36%2], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1783559459[nodeUUID=55ce53dc-7e3f-11e1-9723-00163634881d, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:216:36ff:fe34:881d%2, address=jms, server=HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d]))
[initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:21d:9ff:fe01:cc36%2], discoveryGroupConfiguration=null]] to its destination [55ce53dc-7e3f-11e1-9723-00163634881d], csf=null ... 06:44:52,928 INFO [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-13 (HornetQ-server-HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d-2142988662)) Bridge ClusterConnectionBridge@2d52912f [name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, queue=QueueImpl[name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d]]@7b9bbe8 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@2d52912f [name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, queue=QueueImpl[name=sf.my-cluster.4f217eef-7e3f-11e1-86ce-001d0901cc36, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d]]@7b9bbe8 targetConnector=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:21d:9ff:fe01:cc36%2], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1783559459[nodeUUID=55ce53dc-7e3f-11e1-9723-00163634881d, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:216:36ff:fe34:881d%2, address=jms, server=HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d])) [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:21d:9ff:fe01:cc36%2], discoveryGroupConfiguration=null]] is connected ... 
06:23:16,824 ERROR [org.hornetq.core.remoting.impl.netty.NettyConnector] (Thread-7 (HornetQ-server-HornetQServerImpl::serverUUID=55ce53dc-7e3f-11e1-9723-00163634881d-509841124)) Failed to create netty connection: java.net.SocketTimeoutException: connect timed out at java.net.PlainSocketImpl.socketConnect(Native Method) [rt.jar:1.6.0_30] at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351) [rt.jar:1.6.0_30] at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213) [rt.jar:1.6.0_30] at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200) [rt.jar:1.6.0_30] at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366) [rt.jar:1.6.0_30] at java.net.Socket.connect(Socket.java:529) [rt.jar:1.6.0_30] at org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.connect(OioClientSocketPipelineSink.java:114) [netty-3.2.6.Final-redhat-1.jar:] at org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.eventSunk(OioClientSocketPipelineSink.java:74) [netty-3.2.6.Final-redhat-1.jar:] at org.jboss.netty.channel.Channels.connect(Channels.java:541) [netty-3.2.6.Final-redhat-1.jar:] at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:210) [netty-3.2.6.Final-redhat-1.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:227) [netty-3.2.6.Final-redhat-1.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:188) [netty-3.2.6.Final-redhat-1.jar:] at org.hornetq.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:473) [hornetq-core-2.2.13.Final-redhat-1.jar:] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:1143) [hornetq-core-2.2.13.Final-redhat-1.jar:] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:993) [hornetq-core-2.2.13.Final-redhat-1.jar:] at 
org.hornetq.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:224) [hornetq-core-2.2.13.Final-redhat-1.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:747) [hornetq-core-2.2.13.Final-redhat-1.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:588) [hornetq-core-2.2.13.Final-redhat-1.jar:] at org.hornetq.core.client.impl.ServerLocatorImpl$4.run(ServerLocatorImpl.java:1510) [hornetq-core-2.2.13.Final-redhat-1.jar:] at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100) [hornetq-core-2.2.13.Final-redhat-1.jar:] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_30] {code} Another problem seems to be that it takes quite long time to cleanly shutdown servers when link-local addresses are used.
This could be an issue with zone ids. In theory, the following could happen: 1. Server A sends its IPv6 link-local address, including its zone id, in the connector to server B, in the form ipv6-server-A-address%zone-idA. 2. Server B takes this whole string (ipv6-server-A-address%zone-idA) and uses it to connect to server A. 3. But server B should use zone-idB to connect to server A (ipv6-server-A-address%zone-idB).
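The three steps above can be sketched in a few lines. This is only an illustration of the theory, assuming a hypothetical `rewriteZoneId` helper (it is not part of HornetQ or Netty): the receiving server would have to replace the advertised zone id, which is only meaningful on the sender's host, with one valid locally.

```java
public class ZoneIdRewrite {
    // Hypothetical helper: replace the zone id received from the remote
    // server (valid only on *its* host) with a zone id valid on this host.
    public static String rewriteZoneId(String advertisedHost, String localZoneId) {
        int percent = advertisedHost.indexOf('%');
        String bare = (percent == -1) ? advertisedHost : advertisedHost.substring(0, percent);
        return bare + "%" + localZoneId;
    }

    public static void main(String[] args) {
        // Server A advertised its address with zone id "2" (meaningful on host A only).
        String advertised = "fe80:0:0:0:21d:9ff:fe01:cc36%2";
        // Host B must substitute the zone id of its own outgoing interface, e.g. "3".
        System.out.println(rewriteZoneId(advertised, "3"));
    }
}
```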
Link: Added: This issue is a dependency of JBPAPP-8663
This problem does not occur on Sun JDK 7 when using link-local addresses where the scope is specified. Aside from the problem with JacORB (https://issues.jboss.org/browse/AS7-4156) the two HornetQ instances start and the Bridge connects correctly on Fedora 16: {noformat} 12:22:21,866 INFO [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-11 (HornetQ-server-HornetQServerImpl::serverUUID=ea4016b5-83ea-11e1-9ebc-f0def14075b8-812973087)) Bridge ClusterConnectionBridge@f028fec [name=sf.my-cluster.d84a4fed-83ea-11e1-a48a-f0def14075b8, queue=QueueImpl[name=sf.my-cluster.d84a4fed-83ea-11e1-a48a-f0def14075b8, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=ea4016b5-83ea-11e1-9ebc-f0def14075b8]]@7136d6a6 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@f028fec [name=sf.my-cluster.d84a4fed-83ea-11e1-a48a-f0def14075b8, queue=QueueImpl[name=sf.my-cluster.d84a4fed-83ea-11e1-a48a-f0def14075b8, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=ea4016b5-83ea-11e1-9ebc-f0def14075b8]]@7136d6a6 targetConnector=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:f2de:f1ff:fe40:75b8%2], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@732165759[nodeUUID=ea4016b5-83ea-11e1-9ebc-f0def14075b8, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:f2de:f1ff:fe40:75b9%2, address=jms, server=HornetQServerImpl::serverUUID=ea4016b5-83ea-11e1-9ebc-f0def14075b8])) [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:f2de:f1ff:fe40:75b8%2], discoveryGroupConfiguration=null]] is connected {noformat} I believe this issue on Sun JDK 6 is the same one we ran into with the JioEndpoint (https://issues.jboss.org/browse/AS7-3834) and which Jean-Frederic fixed by replacing {noformat} s = new Socket(address, port) ; {noformat} with {noformat} s = new 
Socket(addressWithoutZoneId, port, addressWithoutZoneId, 0) ; {noformat} However, the socket creation in this case is performed by Netty, and so any changes would have to take place there.
Labels: Removed: eap6_prd_req Added: eap6_need_triage
I have the impression that we are trying to solve an environmental issue with code changes. Even if this is a required change (which I'm not convinced of yet), we would need someone with Netty knowledge to make it. We could either wait for that person to start or have someone learn the codebase... which will take time either way. Based on that, I suggest we postpone this JIRA and make it a known issue... IMO we can suggest JDK 1.7 for people using link-local addresses and IPv6.
I'm opening this JIRA to the public. I need to ask for some external help, and it couldn't be viewed externally otherwise. I don't see any customer issues related here, nor security issues... hence I'm making it public.
Security: Removed: JBoss Internal Added: Public
I have asked Norman Mauerer, who is a Netty specialist, for help.
I don't see the connectors setting host and port; doesn't that make this a configuration issue? <param key="host" value="${jboss.bind.address:localhost}"/> <param key="port" value="${hornetq.remoting.netty.port:5445}"/>
Labels: Removed: eap6_need_triage Added: eap6_ipv6 eap6_need_triage
It should be: s = new Socket(addressWithoutZoneId, port, addressWithZoneId, 0) ; Otherwise it might pick the wrong interface.
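The distinction matters because a scoped and an unscoped `Inet6Address` are different objects in Java. A small sketch (an illustration of the suggested constructor call, not the actual Netty patch) showing how the scope id is carried and dropped:

```java
import java.net.Inet6Address;
import java.net.InetAddress;

public class ScopedAddressDemo {
    public static void main(String[] args) throws Exception {
        // Parsing a link-local literal with a numeric zone id keeps the scope.
        Inet6Address withZone = (Inet6Address) InetAddress.getByName("fe80::1%1");
        // Re-creating the address from its raw bytes drops the scope (scope id 0).
        Inet6Address withoutZone = (Inet6Address) InetAddress.getByAddress(withZone.getAddress());
        System.out.println(withZone.getScopeId());
        System.out.println(withoutZone.getScopeId());
        // Suggested fix: connect to the scope-less remote address while binding
        // the local side with the scoped address, so the kernel picks the
        // right interface (port 0 = any ephemeral local port):
        // Socket s = new Socket(withoutZone, 5445, withZone, 0);
    }
}
```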
BTW this isn't an environmental issue. It's a JDK bug. If you are using link-local addresses, zone ids are used to distinguish which interface (so it's possible both interfaces could use the same address).
Netty with ipv6 workaround for NIO and OIO
Attachment: Added: netty-3.4.1.Final-with-ipv6-fix-SNAPSHOT.jar
I have uploaded a patched Netty version which should work around this problem. Could you please test whether it fixes it? It should be a drop-in replacement for your currently used Netty jar. I patched it the way Jason Greene suggested (using the address with the zone id for the local address and the one without it for the remote address). Please let me know if it works out. If so, I will push a new Netty release with the fix included ASAP.
Testing on Fedora 16...
Norman, the version of Netty I see in AS7 master and EAP is 3.2.6.Final. Is messaging pulling in another version that it depends on? Your jar is marked as 3.4.1.Final. Also, the packaging type for Netty is marked bundle. Do I need to take any special precautions when replacing the jar with the maven install-file plugin, other than setting packaging to bundle?
No, that should be enough for testing. The groupId changed from org.jboss.netty to io.netty, but that should not be relevant during testing.
I'm not sure how you want me to test. I have a 3.4.1.Final jar and I need to replace a 3.2.6.Final jar in AS7. What do you propose I do, specifically? Change the Netty version in AS7 across the board? Or just pretend the 3.4.1 jar is a 3.2.6 jar? I would have thought the changed package names are going to break the build.
Just replace the 3.2.6.Final jar with the 3.4.1.Final one. So something like this: # cp netty-3.4.1.Final.jar netty-3.2.6.Final.jar After that, fire up the cluster.
Package names are the same in all 3.x releases.
This is an issue with Netty and we need to fix it in Netty... We are opening a new JIRA where we will apply a workaround in HornetQ, and keep this one for Netty.
Link: Added: This issue is related to JBPAPP-8723
I'm working on a one-off, but I'll take a look too.
Just let me know if the netty jar fixes it... so we can have the fix in the next release.
Here is what I did: 1. find all instances of the netty jar in the AS distribution {noformat} [nrla@lenovo serverB]$ find . -name netty-3.2.6.Final.jar -print ./modules/org/jboss/netty/main/netty-3.2.6.Final.jar {noformat} 2. rename the original to old and copy in the new version {noformat} [nrla@lenovo serverA]$ ls modules/org/jboss/netty/main/ module.xml netty-3.2.6.Final.jar netty-3.2.6.Final.jar.index netty-3.2.6.Final.jar.old {noformat} 3. start the servers {noformat} ./standalone.sh --server-config standalone-full-ha.xml -Djboss.bind.address=fe80::f2de:f1ff:fe40:75b8%2 -Djboss.bind.address.management=fe80::f2de:f1ff:fe40:75b8%2 -Djboss.bind.address.unsecure=fe80::f2de:f1ff:fe40:75b8%2 -Djboss.bind.address.messaging=fe80::f2de:f1ff:fe40:75b8%2 ./standalone.sh --server-config standalone-full-ha.xml -Djboss.bind.address=fe80::f2de:f1ff:fe40:75b9%2 -Djboss.bind.address.management=fe80::f2de:f1ff:fe40:75b9%2 -Djboss.bind.address.unsecure=fe80::f2de:f1ff:fe40:75b9%2 -Djboss.bind.address.messaging=fe80::f2de:f1ff:fe40:75b9 {noformat} - the servers start but don't cluster. Please patch the correct version of Netty so that I can rebuild the AS and test correctly. AS7 is a Maven build, and the jar needs to be installed in Maven and AS7 rebuilt to have any kind of decent testing of this patch.
OK, let me patch netty 3.2.6.Final, but it should make no difference.
@Norman Could you compile it with JDK 1.6, please? We don't have JDK 1.7 in the QA lab.
I can do that, but we usually only use JDK 7 features if that version is detected at runtime. So it must also work with Java 6 (in fact, 3.x is even Java 5 compatible).
Hmm... I tried it with Oracle JDK 1.7.2 and see some strange errors in the console logs, like an NPE and "Failed to accept a connection.: java.lang.NoClassDefFoundError: Could not initialize class org.jboss.netty.util.internal.LinkedTransferQueue". I tried it with jboss-eap-6.0.0.ER4.1.zip because I have a configuration prepared for it. Console log from server1: http://pastebin.test.redhat.com/85280 Console log from server2: http://pastebin.test.redhat.com/85281
Probably the same problem as Richard has :-)
ipv6 fix backported
Attachment: Added: netty-3.2.6.Final-ipv6-fix.jar
Link: Removed: This issue is a dependency of JBPAPP-8663
Try the netty-3.2.6.Final-ipv6-fix.jar. It's just 3.2.6.Final with the patch on top of it (compiled with Java 6).
@Miroslav I have no access to the servers, so I can only guess, but do you use the LinkedTransferQueue class in your code base?
The last jar looks promising. I don't see any error messages, and it seems from the console log that the HornetQ cluster was really created. I'll try sending some messages to the first server and reading from the second. server1: http://pastebin.test.redhat.com/85296 server2: http://pastebin.test.redhat.com/85297 I see a problem, though: I cannot cleanly shut down the servers. Nothing happens when I type Ctrl-C. I can't even get a thread dump with Ctrl-\ or "kill -3 ...".
Cool :-), the HornetQ cluster was really created. All messages sent to server 1 were transferred to server 2 and consumed from server 2.
All the testing was done with EAP 6 ER4.1, which has HornetQ 2.2.13 and Netty 3.2.6. Unfortunately, the shutdown issue is still valid; I experienced it also with the unpatched Netty 3.2.6. Is there any other way to get a thread dump?
It took 10 minutes until the servers were shut down.
We just need to validate the Netty change and ask Norman to do a release if it fixes the issues.
@Miro what shutdown issue? That wouldn't be related to IPv6, would it?
I experienced the slow shutdown only with IPv6 link-local addresses; with IPv4 or IPv6 global addresses it's OK. Richard said that with AS7 master he still has problems creating the cluster. I'm just building AS7 master and I'd like to try the patched netty jar there. Unfortunately, I have to configure it all over again.
Just tried with AS7 master and the patched Netty 3.2.6 jar, and everything is OK. The good news is that there is no shutdown issue as with EAP 6 ER4.1 (probably already fixed). This JIRA can be set as resolved after netty.jar is updated. Server 1 log: http://pastebin.test.redhat.com/85312 Server 2 log: http://pastebin.test.redhat.com/85313 Producer connected to server 2 log: http://pastebin.test.redhat.com/85314 Consumer connected to server 1 log: http://pastebin.test.redhat.com/85315
OK, so the Netty fix did work out :) I will push it to Netty upstream... The question now is how to get a new release of Netty that works with HornetQ. From one comment it seems like 3.4.x will not work, because you use an internal class (LinkedTransferQueue) that we moved... The 3.2.x series was not expected to get another release, so we moved ahead. I think we could do two things: * Patch HornetQ (and other products) to work with netty-3.4.x * Release a patched Netty version out of the 3.2.x series; this would be called 3.2.8.Final I would prefer the first, but I can understand that this may not be easy. Anyway, I would try to assist you in both cases.
I'm still not sure why there are NoClassDefFoundErrors with patched netty-3.4.x. I'm uploading the configuration files which I used for IPv6 testing with AS7 master (as7-conf.zip). I'm not the one who should decide, but I'd prefer to apply the safer way (the second option) now and start working out what the problem is with the first. The reason is that I could start to develop tests for HornetQ with IPv6 this week and get to other potential issues.
Attaching configuration files for AS7 master (2012-04-16)
Attachment: Added: as7-conf.zip
Ok fix is commited to netty upstream. See https://github.com/netty/netty/commit/3558eb70420e389b498b8d9b5d35efe24273c2b3
@Miroslav: I just double-checked and 3.4.x should work too. org.jboss.netty.util.internal.LinkedTransferQueue still exists. The only thing we changed was to move the old LinkedTransferQueue to LegacyLinkedTransferQueue and add a new LinkedTransferQueue which depends on Java 6. Do you by any chance run the tests with Java 5?
The AS7 server cannot be started with JDK 1.5, only with 1.6 and 1.7, so unfortunately no. I'm not sure whether I understand correctly that the problem with LinkedTransferQueue is probably fixed. :-) If you upload a new 3.4.x jar then I'll try it.
Attachment: Added: netty-3.4.1.Final-SNAPSHOT.jar
OK, please check out netty-3.4.1.Final-SNAPSHOT.jar. If it doesn't work, please attach the log here, as I'm not able to access any Red Hat servers (yet ;))
Unfortunately, still the same problem. Tried with Oracle JDK 1.6 and 1.7. I've just started both of the servers; the exceptions start to occur when HornetQ starts to create the cluster. There is no application deployed on the servers. Server 1 log: http://pastebin.test.redhat.com/85368 Server 2 log: http://pastebin.test.redhat.com/85369
Could you please attach the logs here?
Attachment: Added: server1.log Attachment: Added: server2.log
Logs attached.
OK, I see... It seems like the ClassLoader somehow disallows Netty from loading sun.misc.Unsafe, which is needed by the new LinkedTransferQueue. Let me upload another Netty 3.4.x jar which should work around this.
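The class-loading failure can be illustrated with a small probe. This is a sketch of the detection pattern only, assuming (as later comments suggest) that the dependency is `sun.misc.Unsafe`; it is not Netty's actual QueueFactory code:

```java
public class UnsafeProbe {
    // If sun.misc.Unsafe cannot be reached (e.g. under a restrictive
    // ClassLoader), a queue implementation depending on it fails to
    // initialize and the caller must fall back to a pure-Java variant.
    public static boolean unsafeAvailable() {
        try {
            Class<?> unsafe = Class.forName("sun.misc.Unsafe");
            unsafe.getDeclaredField("theUnsafe");
            return true;
        } catch (Throwable t) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(unsafeAvailable()
                ? "using LinkedTransferQueue"
                : "falling back to LegacyLinkedTransferQueue");
    }
}
```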
Attachment: Added: netty-3.4.1.Final-SNAPSHOT-v2.jar
Please try again with netty-3.4.1.Final-SNAPSHOT-v2.jar
Still no success; there is no change in the server logs. Maybe the module.xml for the netty module should be modified, but I don't really have insight into this.
Attachment: Added: netty-3.4.1.Final-SNAPSHOT-v3.jar
My fault... I uploaded the wrong jar. Please try again with v3, this should work!
Attachment: Added: server1_trace.log Attachment: Added: server2_trace.log
Still no change. I've set up trace logging for org.jboss.netty and attached the new logs server*_trace.log. Hopefully it will help a little bit.
Are you sure it's still the same? It now says: "Unable to instance LinkedTransferQueue, fallback to LegacyLinkedTransferQueue: java.lang.NoClassDefFoundError: Could not initialize class org.jboss.netty.util.internal.LinkedTransferQueue at org.jboss.netty.util.internal.QueueFactory.createQueue(QueueFactory.java:47) [netty-3.2.6.Final.jar:]" So it just falls back to LegacyLinkedTransferQueue and that's it. The logging is done at debug level in Netty and is not an "error". From the logs it looks OK.
You're right. I didn't check the logs properly. I'll try to send some messages.
Attachment: Added: server1-v3.log Attachment: Added: server2-v3.log
I think that cluster was not created. I'm missing a log message: {code} 15:43:57,371 INFO [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-3 (HornetQ-server-HornetQServerImpl::serverUUID=7c2f0485-87fc- ... NettyConnectorFactory?port=5445&host=fe80:0:0:0:20b:dbff:fe92:c9dd%2], discoveryGroupConfiguration=null]] is connected {code} But there are lots of messages trying to create cluster: {code} 07:26:37,567 DEBUG [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-9 (HornetQ-server-HornetQServerImpl::serverUUID=19224aa3-887e-11e1-8673-000bdb92c9dd-22615283)) Scheduling retry for bridge sf.my-cluster.132f505a-887e-11e1-ae1e-001d0901cc36 in 500 milliseconds 07:26:38,073 DEBUG [org.hornetq.core.server.cluster.impl.BridgeImpl] (Thread-13 (HornetQ-server-HornetQServerImpl::serverUUID=19224aa3-887e-11e1-8673-000bdb92c9dd-22615283)) Connecting ClusterConnectionBridge@2efae4 [name=sf.my-cluster.132f505a-887e-11e1-ae1e-001d0901cc36, queue=QueueImpl[name=sf.my-cluster.132f505a-887e-11e1-ae1e-001d0901cc36, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=19224aa3-887e-11e1-8673-000bdb92c9dd]]@59aa86 targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@2efae4 [name=sf.my-cluster.132f505a-887e-11e1-ae1e-001d0901cc36, queue=QueueImpl[name=sf.my-cluster.132f505a-887e-11e1-ae1e-001d0901cc36, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=19224aa3-887e-11e1-8673-000bdb92c9dd]]@59aa86 targetConnector=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:21d:9ff:fe01:cc36%2], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@3927400[nodeUUID=19224aa3-887e-11e1-8673-000bdb92c9dd, connector=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:20b:dbff:fe92:c9dd%2, address=jms, server=HornetQServerImpl::serverUUID=19224aa3-887e-11e1-8673-000bdb92c9dd])) 
[initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=fe80:0:0:0:21d:9ff:fe01:cc36%2], discoveryGroupConfiguration=null]] to its destination [19224aa3-887e-11e1-8673-000bdb92c9dd], csf=null {code} I think we moved forward. :-) But again no idea where the problem is now.
Attached new logs: server1-v3.log server2-v3.log
Attachment: Added: netty-3.4.1.Final-SNAPSHOT-v4.jar
Ok... last but not least try v4.
Still no success. Is there something I could look at? Attached new logs with trace logging for netty: server1-v4.log server2-v4.log
Attachment: Added: server1-v4.log Attachment: Added: server2-v4.log
Attachment: Added: netty-3.4.1.Final-SNAPSHOT-v5.jar
I have now removed all the tweaks I made, and it now uses exactly the same code as the 3.2.6.Final-ipv6-fix.jar for the IPv6 stuff. If it doesn't work with v5, then it's another issue.
Still no success. I think I'll write a reproducer so it can be debugged. I'll have no time for this until Friday because I need to finish some tasks. Attached new logs with trace logging for Netty: server1-v5.log server2-v5.log
Attachment: Added: server1-v5.log Attachment: Added: server2-v5.log
@Norman Thanks a lot for the effort you put into this. At least we have the HornetQ cluster working, which is good enough for test development.
@Miroslav: Yeah a reproducer would help a lot
@Clebert do you want a 3.2.8.Final, as it seems to work? Or is it not urgent, so we can try to figure out why 3.4.x does not work?
Attachment: Added: netty-3.2.8.Final-SNAPSHOT.jar
@Miroslav I also attached a 3.2.8.Final-SNAPSHOT, which would be the one I can release out of the 3.2.x series. It's just 3.2.7.Final + the ipv6-fix. It would be really nice if you could test with this as well.
It seems that none of the newer versions of Netty work. Attached new logs: server1-3.2.8.log server2-3.2.8.log
Attachment: Added: server1-3.2.8.log Attachment: Added: server2-3.2.8.log
That's very interesting... I will go through the commit log and see if I can find out why this is the case. One thing I noticed is that I compiled it with JDK 7, but that should not matter, as the runtime is still Java 5 compatible.
Attachment: Added: netty-3.2.8.Final-SNAPSHOT-v2.jar
After double-checking the patched 3.2.8 jar, I noticed that I missed patching the OIO transport, which is the one that HornetQ uses. This explains why it didn't work. Could you please retest with netty-3.2.8.Final-SNAPSHOT-v2.jar?
Cool, the last jar, netty-3.2.8.Final-SNAPSHOT-v2.jar, works perfectly. :-) The cluster was created and messages were redistributed from server 1 to server 2.
Nice! Now I only need to find out why 3.4.1 does not work with the same patch applied.
I've prepared a reproducer (reproducer.zip). Prerequisite: two machines with IPv6 link-local addresses. Optional: have an NFS shared directory mounted on both machines so the steps below don't have to be done twice. {code} Reproducer: 1. Download reproducer.zip and unzip 2. Build AS7 master and copy the built jboss-as-7.1.2.Final-SNAPSHOT to the unzipped directory - "reproducer" 3. Run "sh prepare.sh" - creates two directories server1 and server2 - copies jboss-as-7.1.2.Final-SNAPSHOT into them - replaces standalone-full-ha.xml, application-users.properties, application-roles.properties - takes netty-3.2.8.Final-SNAPSHOT-v2.jar (from the current working directory) and replaces netty-3.2.6.Final.jar in server1 and server2 4. Start server 1 - for example by "sh start-server1.sh [fe80::21d:9ff:fe01:cc36%eth2]" (change the ipv6 address) 5. Start server 2 - for example by "sh start-server2.sh [fe80::20b:dbff:fe92:c9dd%eth0]" (change the ipv6 address) 6. When the cluster is successfully created, try to send and receive messages - change the zone-id as necessary - sh start-producer.sh [fe80::20b:dbff:fe92:c9dd%eth0] - sh start-consumer.sh [fe80::21d:9ff:fe01:cc36%eth0] {code}
Attachment: Added: reproducer.zip
Attachment: Added: netty-3.4.1.Final-SNAPSHOT-v6.jar
@Miroslav could you do me a favor and try it one more time with netty-3.4.1.Final-SNAPSHOT-v6.jar? I currently have no test env with IPv6 here. You can also disable the use of sun.misc.Unsafe via a system property now, so you don't see Netty's debug messages about the problem instancing LinkedTransferQueue. Just add -Dorg.jboss.netty.tryUnsafe=false to the startup script.
Hi, sorry for late reply. I've rebuild AS7 master because there is new tag of HQ(2.2.15) and had to do some configuration changes to get rid annoying exception. In the end I end up with StringIndexOutOfBoundsException which i think is netty issue. Unfortunatelly no success with cluster. {code} 08:42:13,263 ERROR [org.hornetq.core.remoting.impl.netty.NettyConnector] (Thread-3 (HornetQ-server-HornetQServerImpl::serverUUID=cfc92a49-8a1c-11e1-a598-cb4be654d686-24973910)) Failed to create netty connection: java.lang.StringIndexOutOfBoundsException: String index out of range: -1 at java.lang.String.substring(String.java:1937) [rt.jar:1.6.0_30] at org.jboss.netty.util.internal.SocketUtil.stripZoneId(SocketUtil.java:56) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.connect(OioClientSocketPipelineSink.java:107) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.socket.oio.OioClientSocketPipelineSink.eventSunk(OioClientSocketPipelineSink.java:67) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.Channels.connect(Channels.java:642) [netty-3.2.6.Final.jar:] at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:204) [netty-3.2.6.Final.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:230) [netty-3.2.6.Final.jar:] at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:183) [netty-3.2.6.Final.jar:] at org.hornetq.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:499) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:1144) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:994) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at 
org.hornetq.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:225) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:668) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:624) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.core.server.cluster.impl.ClusterConnectionBridge.createSessionFactory(ClusterConnectionBridge.java:152) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.core.server.cluster.impl.BridgeImpl.connect(BridgeImpl.java:729) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.core.server.cluster.impl.BridgeImpl$ConnectRunnable.run(BridgeImpl.java:1005) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100) [hornetq-core-2.2.15.Final.jar:2.2.15.Final (HQ_2_2_15_FINAL, 122)] at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [rt.jar:1.6.0_30] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [rt.jar:1.6.0_30] at java.lang.Thread.run(Thread.java:662) [rt.jar:1.6.0_30] {code} Attaching logs: server1-3.4.1-v6.log server2-3.4.1-v6.log
Attachment: Added: server1-3.4.1-v4.log Attachment: Added: server2-3.4.1-v4.log
Attachment: Added: server1-3.4.1-v6.log Attachment: Added: server2-3.4.1-v6.log
Attachment: Removed: netty-3.2.8.Final-SNAPSHOT.jar
Attachment: Removed: netty-3.4.1.Final-SNAPSHOT-v2.jar
Attachment: Removed: netty-3.4.1.Final-SNAPSHOT.jar
Attachment: Removed: netty-3.4.1.Final-SNAPSHOT-v5.jar
Attachment: Removed: netty-3.4.1.Final-SNAPSHOT-v4.jar
Attachment: Removed: netty-3.4.1.Final-SNAPSHOT-v3.jar
Attachment: Removed: server1-3.2.8.log
Attachment: Removed: server1-3.4.1-v4.log
Attachment: Removed: server1-v3.log
Attachment: Removed: server1.log
Attachment: Removed: server2_trace.log
Attachment: Removed: server2.log
Attachment: Removed: server2-v5.log
Attachment: Removed: server2-v4.log
Attachment: Removed: server2-v3.log
Attachment: Removed: server2-3.4.1-v4.log
Attachment: Removed: server2-3.2.8.log
Attachment: Removed: server1_trace.log
Attachment: Removed: server1-v5.log
Attachment: Removed: server1-v4.log
Cleaning up attachments - sorry for the spam.
The problem may happen because HornetQ 2.2.15 contains HORNETQ-907 which was added so that we wouldn't be required to move Netty versions. Of course, we ultimately want the fix to be in Netty, but it shouldn't break if the zone/scope id has already been stripped off the address (if indeed that is the problem).
@Justin yes, that's the problem. Let me make sure we handle this in Netty and upload a new jar.
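The defensive handling can be sketched like this. The helper name `stripZoneId` follows the stack trace above, but the body is an illustration, not the actual Netty source: a blind `substring(0, indexOf('%'))` throws `StringIndexOutOfBoundsException` exactly when HornetQ (via HORNETQ-907) has already removed the zone id, so the code has to check for -1 first.

```java
public class SafeStripZoneId {
    // Strip a trailing "%zone" from an IPv6 literal, tolerating input
    // that has no zone id at all (indexOf returns -1 in that case).
    public static String stripZoneId(String host) {
        int idx = host.indexOf('%');
        return idx == -1 ? host : host.substring(0, idx);
    }

    public static void main(String[] args) {
        System.out.println(stripZoneId("fe80:0:0:0:21d:9ff:fe01:cc36%2")); // zone id removed
        System.out.println(stripZoneId("fe80:0:0:0:21d:9ff:fe01:cc36"));   // already bare: unchanged
    }
}
```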
Attachment: Added: netty-3.4.1.Final-SNAPSHOT-v7.jar
@Miroslav, @Justin please test the v7 version. It contains a fix which will make sure that netty also handles the address if it contains no zoneId
Still no success, but the StringIndexOutOfBoundsException is gone. New logs attached: server1-3.4.1-v7.log server2-3.4.1-v7.log
Attachment: Added: server1-3.4.1-v7.log Attachment: Added: server2-3.4.1-v7.log
Then it's something entirely unrelated to this. I will try to set up a test env to debug it further. Don't waste any more resources on it. I just don't know how fast I will be able to do so.
It seems that the previous versions of Netty which worked with HornetQ 2.2.13 do not work now with 2.2.15 - there is the StringIndexOutOfBoundsException.
OK, just adding a command for how to create an IPv6 link-local address on Linux: {code}ip -6 address add fe80::210:18ff:fe2b:91c1/64 dev eth0{code} Edit: And attaching configuration files for the latest AS7 master - standalone-full-ha-1.xml, standalone-full-ha-2.xml
@Miroslav this makes sense, as the 3.2.8.Final-SNAPSHOT does not handle the case of IPv6 link-local addresses that have no zone id / scope id. Let me upload a new one.
Attachment: Added: standalone-full-ha-1.xml Attachment: Added: standalone-full-ha-2.xml
Attachment: Added: netty-3.2.8.Final-SNAPSHOT-v3.jar
@Miroslav attached 3.2.8.Final-SNAPSHOT-v3.jar which should fix the Exception.
@Norman Cool, last jar 3.2.8.Final-SNAPSHOT-v3.jar is working with HQ 2.2.15 :-)
Ok now I only need to get a clue why 3.4.x is not working.
Just giving an update. I tried netty-3.2.8.Final-SNAPSHOT-v3.jar (with HornetQ 2.2.15) with the following use cases: HornetQ IPv6 - connect with a remote JMS client HornetQ IPv6 - cluster (auto adv + static configuration) HornetQ IPv6 - bridges HornetQ IPv6 - HA (simple dedicated topology and failover with a standalone JMS client) HornetQ IPv6 - remote JCA (MDB using a resource adapter connected to a remote AS7 server) All of them are OK. I'm automating tests for it now and will try with IPv6 global and mapped addresses.
@Miroslav thanks for the feedback. I hope to get time tomorrow to further test netty 3.4.x with hornetq.
There is one thing we've just noticed. We can't attach a debugger to an IPv6 address like: {code}JAVA_OPTS="$JAVA_OPTS -Xrunjdwp:transport=dt_socket,address=[fe80::210:18ff:fe2b:91c1]:8787,server=y,suspend=n"{code} The problem is on Oracle's side: http://docs.oracle.com/javase/6/docs/technotes/guides/jpda/conninv.html - ".... The current implementation on the target VM side only supports IPv4..."
I was able to find the commit that broke netty. I'm currently investigating how to fix it. Stay tuned.
Attachment: Added: netty-3.4.2.Final-SNAPSHOT.jar
@Miroslav I just uploaded netty-3.4.2.Final-SNAPSHOT.jar which should fix your problems. At least the reproducer does not show the error anymore. Could you please verify that everything is ok? I plan to release Netty 3.4.2.Final on Monday if everything works out and you confirm that the bug is fixed for you. Thanks for all your help!
Brilliant! :-) netty-3.4.2.Final-SNAPSHOT.jar works perfectly with HQ 2.2.15. I'm just building AS7 master because it seems that HQ was updated. If it works too then I believe we're done :-)
@Norman netty-3.4.2.Final-SNAPSHOT.jar is also working with HQ 2.2.16, which is the current version of HQ in AS7 master. I think that netty can be safely updated. :-) Thanks a lot for your hard work.
Nice! Ok then stay tuned for the release :)
Netty 3.4.2.Final is out and also on maven central. See http://netty.io/blog/2012/04/27/ .
What version is targeted for the AS7.1.2.Final tag? Other projects also depend on netty, so moving to 3.4.2 this late may be a risk. Component code freeze is today.
Link: Added: This issue relates to ISPN-2015
I have bad news. I have tested the workaround this patch uses against a few Linux distros and can confirm that it only appears to work for older distributions. It will also break Mac OS X, since it requires the connect address to contain the % zone specifier. I recommend reverting the workaround (or at least selectively enabling it for just Linux, and letting users know that not all distros are covered).
To reproduce, run the following test code:
{code}
import java.net.InetAddress;
import java.net.Socket;

public class Test {
    public static void main(String[] args) throws Throwable {
        InetAddress byName = InetAddress.getByName(args[0]);
        InetAddress other = InetAddress.getByName(args[1]);
        // Bind the local end to 'other' (ephemeral port) and connect to 'byName' on port 8000.
        Socket socket = new Socket(byName, 8000, other, 0);
    }
}
{code}
Like so, but using the link-local address from your interface:
{code}
java Test fe80::224:d7ff:fe8d:59ec fe80::224:d7ff:fe8d:59ec%eth0
{code}
If it does not work you will get "java.net.SocketException: Invalid argument or cannot assign requested address". If it does work you will get "Connection refused".
Like I said in the netty issue tracker, I think the best we can do is to make it configurable via a system property.
Maybe it is not worth all the effort and we should just tell users that they need to use Java 7 to use link-local addresses (IPv6). Everything else may just confuse the hell out of them, as it will only work on a few OSes.
Just for the record: I pushed a release of netty-3.4.4.Final which removes the workaround. Maybe you want to test with it and include it in the next release.
JIRA triage: not a blocker for GA. Moving to TBD EAP 6
This issue has been triaged and decided to not block/prevent/hold the EAP 6 release. To comply with Release Criteria stating that no issues with Critical or Blocker priority setting can be open for the release, this is changed to priority Major.
Link: Removed: This issue relates to JBPAPP-7470
Link: Added: This issue is a dependency of JBPAPP-9320
Release Notes Docs Status: Added: Documented as Known Issue Release Notes Text: Added: Due to a JDK bug, if you use link local addresses, zone IDs are used to distinguish which interface is chosen. This problem does not affect global addresses. A workaround will be included in a future version of the Netty component. Affects: Added: Release Notes
Writer: Added: mistysj
Release Notes Docs Status: Removed: Documented as Known Issue Writer: Removed: mistysj Release Notes Text: Removed: Due to a JDK bug, if you use link local addresses, zone IDs are used to distinguish which interface is chosen. This problem does not affect global addresses. A workaround will be included in a future version of the Netty component. Docs QE Status: Removed: NEW
Hi, can I ask whether netty-3.2.6.Final-ipv6-fix.jar supports IPv6? If yes, how do I call the method?
Hi, may I also ask which method or file has to be edited to make IPv6 work? From Swift
Hi, can anyone help me? Can I ask whether getRemoteAddress on org.jboss.netty.channel.Channel supports IPv6? Hope there is someone who can assist me with that =) From Swift
Swift, I'm not exactly sure what you're asking about, but whatever it is this bug report is almost certainly the wrong place. Open a new thread on the HornetQ or AS7 community forum or leverage your Red Hat support subscription if you have one.
I don't think this is an issue any longer...
I will set this as modified. If it's still an issue please assign it back to us.
Marking this for 6.1.1 to be verified by QA
I tested this with EAP 6.1.1.ER1 on RHEL 5, RHEL 6, and Windows Server 2008. All without problems. I'll set this bz as verified. Let's create a new bz if a problem occurs on some other platform. Thank you all, guys, for looking at this.
If this bug has been verified as no longer present, then the "Known Issue" 'Doc Type' above is incorrect, as is the Release Notes 'Doc Text'. To bring the release note into line with Development, Docs needs to know what happened with the resolution. What was done/changed to resolve this issue, and what is the product's behavior now because of that resolution?
Hi Scott, I know very little about the details. The issue was addressed by upgrading Netty to version 3.6.6 in EAP 6.1.1. Is that ok for the release notes? Cheers, Mirek