Bug 745888 (EDG-12) - JGroups not binding to specific default interface
Summary: JGroups not binding to specific default interface
Keywords:
Status: CLOSED NEXTRELEASE
Alias: EDG-12
Product: JBoss Data Grid 6
Classification: JBoss
Component: unspecified
Version: 6.0.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 6.0.0
Assignee: Default User
QA Contact:
URL: http://jira.jboss.org/jira/browse/EDG-12
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-06-03 15:54 UTC by Richard Achmatowicz
Modified: 2012-08-15 16:47 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-07-12 16:53:33 UTC
Type: Feature Request




Links:
Red Hat Issue Tracker EDG-12 (Priority: Major, Status: Closed): JGroups not binding to specific default interface. Last Updated: 2017-09-21 11:50:34 UTC

Description Richard Achmatowicz 2011-06-03 15:54:02 UTC
project_key: EDG

When EDG starts up, the JGroups subsystem does not bind to the specified default interface.

Steps to reproduce:
1. Unzip the EDG distribution
2. Edit $EDG_DIST/standalone/standalone.xml to set the default interface address to 192.168.0.100 (or your local non-loopback address)
3. Edit $EDG_DIST/bin/standalone.conf to add -Djava.net.preferIPv4Stack=true to JAVA_OPTS, forcing the JVM to use IPv4 addresses
4. ./standalone.sh
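For reference, in AS 7-era configs the interface from step 2 is declared in standalone.xml roughly like this (a sketch only; the interface name and attribute names vary between builds, and 192.168.0.100 is just the example address from above):

```xml
<interfaces>
    <interface name="default">
        <!-- sketch: pin the "default" interface to a fixed non-loopback address -->
        <inet-address value="192.168.0.100"/>
    </interface>
</interfaces>
```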

The log shows:
[nrla@lenovo bin]$ ./standalone.sh 
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /tmp/jboss-datagrid-6.0.0.Alpha1-SNAPSHOT2

  JAVA: /opt/jdk1.6.0_22/bin/java

  JAVA_OPTS: -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000

=========================================================================

11:03:58,678 INFO  [org.jboss.modules] JBoss Modules version 1.0.0.Beta17
11:03:58,981 INFO  [org.jboss.msc] JBoss MSC version 1.0.0.Beta8
11:03:59,049 INFO  [org.jboss.as] JBoss AS 7.0.0.Beta3 "Salyut" starting
11:03:59,732 INFO  [org.jboss.as.server] Activating core services
11:03:59,880 INFO  [org.jboss.as] creating native management service using network interface (default) port (9999)
11:03:59,889 INFO  [org.jboss.as] creating http management service using network interface (default) port (9990)
11:03:59,914 INFO  [org.jboss.as.arquillian] Activating Arquillian Subsystem
11:03:59,923 INFO  [org.jboss.as.ee] Activating EE subsystem
11:03:59,983 INFO  [org.jboss.as.naming] Activating Naming Subsystem
11:04:00,186 INFO  [org.jboss.as.connector.subsystems.datasources] Deploying JDBC-compliant driver class org.h2.Driver (version 1.2)
11:04:00,202 INFO  [org.jboss.as.osgi] Activating OSGi Subsystem
11:04:00,729 INFO  [org.jboss.as.webservices] Activating WebServices Extension
11:04:01,082 INFO  [org.jboss.as.logging] Removing bootstrap log handlers
11:04:01,252 INFO  [org.jboss.remoting] (MSC service thread 1-3) JBoss Remoting version 3.1.0.Beta2
11:04:01,266 INFO  [org.apache.catalina.core.AprLifecycleListener] (MSC service thread 1-2) The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /opt/jdk1.6.0_22/jre/lib/amd64/server:/opt/jdk1.6.0_22/jre/lib/amd64:/opt/jdk1.6.0_22/jre/../lib/amd64:/usr/X11R6/lib:/home/nrla/java/jni::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
11:04:01,311 INFO  [org.jboss.as.jmx.JMXConnectorService] (MSC service thread 1-4) Starting remote JMX connector
11:04:01,348 WARN  [org.jboss.osgi.framework.internal.URLHandlerPlugin] (MSC service thread 1-2) Unable to set the URLStreamHandlerFactory
11:04:01,349 WARN  [org.jboss.osgi.framework.internal.URLHandlerPlugin] (MSC service thread 1-2) Unable to set the ContentHandlerFactory
11:04:01,689 INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-2) live server is starting..
11:04:01,745 INFO  [org.jboss.as.connector] (MSC service thread 1-4) Starting JCA Subsystem (JBoss IronJacamar 1.0.0.Beta5)
11:04:01,755 INFO  [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-3) Starting Coyote HTTP/1.1 on http-test2-192.168.0.101-8080
11:04:01,784 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-4) Bound JDBC Data-source [java:/H2DS]
11:04:02,057 INFO  [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-2) Started Netty Acceptor version 3.2.1.Final-r2319 test2:5455 for CORE protocol
11:04:02,066 INFO  [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-2) Started Netty Acceptor version 3.2.1.Final-r2319 test2:5445 for CORE protocol
11:04:02,078 INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-2) HornetQ Server version 2.1.2.Final (Colmeia, 120) started
11:04:03,184 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN00078: Starting JGroups Channel
11:04:03,259 INFO  [org.jgroups.JChannel] (MSC service thread 1-4) JGroups version: 2.12.0.Final
11:04:03,608 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-4) send buffer of socket java.net.DatagramSocket@65685e30 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
11:04:03,609 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-4) receive buffer of socket java.net.DatagramSocket@65685e30 was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
11:04:03,609 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-4) send buffer of socket java.net.MulticastSocket@26ffd553 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
11:04:03,610 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-4) receive buffer of socket java.net.MulticastSocket@26ffd553 was set to 25MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
11:04:03,614 INFO  [stdout] (MSC service thread 1-4) 
11:04:03,615 INFO  [stdout] (MSC service thread 1-4) -------------------------------------------------------------------
11:04:03,615 INFO  [stdout] (MSC service thread 1-4) GMS: address=lenovo-12417, cluster=DataGridPartition, physical address=fe80:0:0:0:215:58ff:fec8:9d0c:55200
11:04:03,615 INFO  [stdout] (MSC service thread 1-4) -------------------------------------------------------------------
11:04:05,637 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN00094: Received new cluster view: [lenovo-12417|0] [lenovo-12417]
11:04:05,641 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-4) ISPN00079: Cache local address is lenovo-12417, physical addresses are [fe80:0:0:0:215:58ff:fec8:9d0c:55200]
11:04:05,641 INFO  [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-4) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0-SNAPSHOT
11:04:05,761 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-4) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:04:05,771 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-4) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0-SNAPSHOT
11:04:05,849 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-3) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:04:05,849 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0-SNAPSHOT
11:04:05,935 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-3) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:04:05,958 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0-SNAPSHOT
11:04:06,033 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-3) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:04:06,034 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0-SNAPSHOT
11:04:06,096 INFO  [org.jboss.as] (MSC service thread 1-3) JBoss AS 7.0.0.Beta3 "Salyut" started in 7682ms - Started 106 of 153 services (47 services are passive or on-demand)
^C11:04:16,803 INFO  [org.jboss.as.logging] Restored bootstrap log handlers
11:04:16,829 INFO  [org.jboss.as.connector.subsystems.datasources] Removed JDBC Data-source [java:/H2DS]
11:04:16,836 INFO  [com.arjuna.ats.jbossatx] ARJUNA-32018 Destroying TransactionManagerService
11:04:16,851 INFO  [com.arjuna.ats.jbossatx] ARJUNA-32014 Stopping transaction recovery manager
11:04:16,851 INFO  [org.hornetq.core.server.impl.HornetQServerImpl] HornetQ Server version 2.1.2.Final (Colmeia, 120) stopped
11:04:16,952 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] ISPN00080: Disconnecting and closing JGroups Channel
11:04:16,975 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] ISPN00082: Stopping the RpcDispatcher
11:04:16,976 INFO  [org.jboss.as] JBoss AS 7.0.0.Beta3 "Salyut" stopped in 192ms
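As an aside, the UDP buffer WARNINGs in the log above are an OS tuning issue separate from the binding bug; they disappear once the kernel maxima are raised to what JGroups requests. A sketch using the standard Linux sysctls the warnings themselves name (values mirror the 640KB send / 25MB receive buffers in the log):

```shell
# Run as root, or persist the settings in /etc/sysctl.conf
# 655360 bytes = 640KB, 26214400 bytes = 25MB
sysctl -w net.core.wmem_max=655360
sysctl -w net.core.rmem_max=26214400
```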

Here, Coyote uses 192.168.0.100; JGroups uses the same interface, but binds to its IPv6 link-local address.

 
[nrla@lenovo ~]$ /sbin/ifconfig
eth1      Link encap:Ethernet  HWaddr 00:15:58:C8:9D:0C  
          inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::215:58ff:fec8:9d0c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:370899 errors:0 dropped:0 overruns:0 frame:0
          TX packets:287111 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:469415199 (447.6 MiB)  TX bytes:35168625 (33.5 MiB)
          Interrupt:16 Memory:ee000000-ee020000 

eth1:1    Link encap:Ethernet  HWaddr 00:15:58:C8:9D:0C  
          inet addr:192.168.0.101  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:16 Memory:ee000000-ee020000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:16321 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16321 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:5521480 (5.2 MiB)  TX bytes:5521480 (5.2 MiB)

tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet addr:10.3.236.113  P-t-P:10.3.236.113  Mask:255.255.240.0
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1412  Metric:1
          RX packets:40333 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33658 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:41002082 (39.1 MiB)  TX bytes:3125724 (2.9 MiB)
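The ifconfig output above shows the crux: eth1 carries both an IPv4 address (192.168.0.100) and a link-local IPv6 address (fe80::215:58ff:fec8:9d0c). Which of the two the JVM hands to JGroups depends on flags such as -Djava.net.preferIPv4Stack=true. A small illustrative Java sketch for inspecting what the JVM sees per interface (class and method names here are invented for illustration, not part of EDG or JGroups):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;
import java.util.List;

public class ListAddrs {

    // Every address the JVM associates with the named interface,
    // or an empty list if no such interface exists.
    public static List<InetAddress> addressesOf(String ifName) {
        try {
            NetworkInterface nic = NetworkInterface.getByName(ifName);
            if (nic == null) {
                return Collections.emptyList();
            }
            return Collections.list(nic.getInetAddresses());
        } catch (SocketException e) {
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) throws SocketException {
        // On a host like the one above, "eth1" would list both
        // 192.168.0.100 and fe80::215:58ff:fec8:9d0c; which one a
        // library picks depends on JVM flags such as preferIPv4Stack.
        for (NetworkInterface nic : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            System.out.println(nic.getName() + " -> " + addressesOf(nic.getName()));
        }
    }
}
```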

Comment 1 Richard Achmatowicz 2011-06-03 15:54:50 UTC
Assigning to you, Paul, so you can comment, as per our IRC chat.


Comment 2 Paul Ferraro 2011-06-03 17:13:00 UTC
So, there are a few issues I'd like to raise:
1. The new grid subsystems violate the domain schema requirements, since they defer to external config files.
2. The urn:jboss:domain:datagrid:cachemanager:1.0 subsystem isn't necessary.  Instead the data grid's cache manager configurations should be defined within the existing urn:jboss:domain:infinispan subsystem.  This way, the integration with the existing urn:jboss:domain:jgroups subsystem comes for free, including the appropriate socket-binding logic that would avoid the above issue.
3. The urn:jboss:domain:datagrid:endpoint:* subsystems should externalize their socket bindings and reference them by name within their subsystem configuration.

Comment 3 Richard Achmatowicz 2011-06-03 17:17:49 UTC
Re-assigning to Trustin.

Comment 4 Trustin Lee 2011-06-04 01:45:05 UTC
Hi Paul,

1) The current subsystem configuration is primarily intended to let QE folks get started with AS 7-based Infinispan as early as possible.  If there is a requirement, we should follow it.

2) Does urn:jboss:domain:infinispan cover Infinispan's own configuration schema completely?  Because EDG 6 will have a different release schedule from AS, I think it might be better to keep it separate so we can keep up with changes in the Infinispan configuration as Infinispan evolves.

3) This should be fixed together with (1).

Comment 5 Paul Ferraro 2011-06-08 14:44:42 UTC
1) Understood.  See here for domain schema requirements:
http://community.jboss.org/wiki/DomainRequirements

2) It covers *most* of it.  The modifications can be largely summarized as:
* More concise
* Prune stuff that the AS handles internally, e.g. everything in <global>.  Things like the channel, thread pools, MBean server, and transaction manager are all injected from their respective subsystems.
* Prune obscure configuration options (e.g. multiple cache loaders)
* Only expose configuration options that are relevant for a given cache mode, e.g. don't expose rehash options for non-distributed caches
* Provide sensible per-cache-mode default values
* Combine multiple dependent options wherever possible (e.g. combine multiple booleans into a single enum)
The schema can be found here:
https://github.com/jbossas/jboss-as/blob/master/clustering/src/main/resources/schema/jboss-infinispan.xsd
Are there configuration options that you see missing?
While the AS7 and EDG release schedules will vary, the EDG/EAP6 release schedules probably won't vary much.  One of the domain schema requirements is that the domain schema should be *stable*.  So, there *shouldn't* be any schema changes across minor releases.  Changes across major releases are handled by versioning the schema itself.

Comment 6 Richard Achmatowicz 2011-06-09 15:33:43 UTC
With the build of June 9, 2011:

If I add -Djava.net.preferIPv4Stack=true to standalone.conf, I now see that JGroups picks up an IPv4 address, but it has chosen the first non-loopback address on the host I'm running on, not the address I specified for the default interface:

[nrla@lenovo bin]$ ./standalone.sh 
=========================================================================

  JBoss Bootstrap Environment

  JBOSS_HOME: /tmp/jboss-datagrid-6.0.0.Alpha1-SNAPSHOT2

  JAVA: /usr/bin/java

  JAVA_OPTS: -server -Djava.net.preferIPv4Stack=true -Xms64m -Xmx512m -XX:MaxPermSize=256m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000

=========================================================================

11:31:48,020 INFO  [org.jboss.modules] JBoss Modules version 1.0.0.Beta17
11:31:48,289 INFO  [org.jboss.msc] JBoss MSC version 1.0.0.Beta8
11:31:48,347 INFO  [org.jboss.as] JBoss AS 7.0.0.Beta3 "Salyut" starting
11:31:49,060 INFO  [org.jboss.as.server] Activating core services
11:31:49,202 INFO  [org.jboss.as] creating native management service using network interface (default) port (9999)
11:31:49,211 INFO  [org.jboss.as] creating http management service using network interface (default) port (9990)
11:31:49,234 INFO  [org.jboss.as.arquillian] Activating Arquillian Subsystem
11:31:49,243 INFO  [org.jboss.as.ee] Activating EE subsystem
11:31:49,304 INFO  [org.jboss.as.naming] Activating Naming Subsystem
11:31:49,502 INFO  [org.jboss.as.connector.subsystems.datasources] Deploying JDBC-compliant driver class org.h2.Driver (version 1.2)
11:31:49,516 INFO  [org.jboss.as.osgi] Activating OSGi Subsystem
11:31:49,988 INFO  [org.jboss.as.webservices] Activating WebServices Extension
11:31:50,320 INFO  [org.jboss.as.logging] Removing bootstrap log handlers
11:31:50,404 INFO  [org.jboss.as.deployment] (MSC service thread 1-4) Started FileSystemDeploymentService for directory /tmp/jboss-datagrid-6.0.0.Alpha1-SNAPSHOT2/standalone/deployments
11:31:50,467 INFO  [org.jboss.wsf.common.management.AbstractServerConfig] (MSC service thread 1-2) JBoss Web Services - Stack CXF Server 4.0.0.Alpha4
11:31:50,545 INFO  [org.jboss.remoting] (MSC service thread 1-2) JBoss Remoting version 3.1.0.Beta2
11:31:50,548 INFO  [org.apache.catalina.core.AprLifecycleListener] (MSC service thread 1-1) The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64:/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/../lib/amd64:/usr/X11R6/lib:/home/nrla/java/jni::/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
11:31:50,594 INFO  [org.jboss.as.jmx.JMXConnectorService] (MSC service thread 1-1) Starting remote JMX connector
11:31:50,623 WARN  [org.jboss.osgi.framework.internal.URLHandlerPlugin] (MSC service thread 1-3) Unable to set the URLStreamHandlerFactory
11:31:50,624 WARN  [org.jboss.osgi.framework.internal.URLHandlerPlugin] (MSC service thread 1-3) Unable to set the ContentHandlerFactory
11:31:50,897 INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-1) live server is starting..
11:31:50,995 INFO  [org.apache.coyote.http11.Http11Protocol] (MSC service thread 1-2) Starting Coyote HTTP/1.1 on http-test2-192.168.0.101-8080
11:31:50,997 INFO  [org.jboss.as.connector] (MSC service thread 1-2) Starting JCA Subsystem (JBoss IronJacamar 1.0.0.Beta5)
11:31:51,046 INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-3) Bound JDBC Data-source [java:/H2DS]
11:31:51,280 INFO  [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-1) Started Netty Acceptor version 3.2.1.Final-r2319 test2:5445 for CORE protocol
11:31:51,308 INFO  [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-1) Started Netty Acceptor version 3.2.1.Final-r2319 test2:5455 for CORE protocol
11:31:51,315 INFO  [org.hornetq.core.server.impl.HornetQServerImpl] (MSC service thread 1-1) HornetQ Server version 2.1.2.Final (Colmeia, 120) started
11:31:52,427 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-3) ISPN00078: Starting JGroups Channel
11:31:52,501 INFO  [org.jgroups.JChannel] (MSC service thread 1-3) JGroups version: 2.12.0.Final
11:31:52,803 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-3) send buffer of socket java.net.DatagramSocket@3e58f124 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
11:31:52,804 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-3) receive buffer of socket java.net.DatagramSocket@3e58f124 was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
11:31:52,804 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-3) send buffer of socket java.net.MulticastSocket@413f9276 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
11:31:52,805 WARNING [org.jgroups.protocols.UDP] (MSC service thread 1-3) receive buffer of socket java.net.MulticastSocket@413f9276 was set to 25MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
11:31:52,810 INFO  [stdout] (MSC service thread 1-3) 
11:31:52,810 INFO  [stdout] (MSC service thread 1-3) -------------------------------------------------------------------
11:31:52,810 INFO  [stdout] (MSC service thread 1-3) GMS: address=lenovo-65119, cluster=DataGridPartition, physical address=10.3.237.163:55200
11:31:52,810 INFO  [stdout] (MSC service thread 1-3) -------------------------------------------------------------------
11:31:54,827 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-3) ISPN00094: Received new cluster view: [lenovo-65119|0] [lenovo-65119]
11:31:54,903 INFO  [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-3) ISPN00079: Cache local address is lenovo-65119, physical addresses are [10.3.237.163:55200]
11:31:54,904 INFO  [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0.CR4
11:31:55,004 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-3) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:31:55,011 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0.CR4
11:31:55,133 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-3) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:31:55,134 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0.CR4
11:31:55,222 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-3) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:31:55,273 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0.CR4
11:31:55,367 INFO  [org.infinispan.jmx.CacheJmxRegistration] (MSC service thread 1-3) ISPN00031: MBeans were successfully registered to the platform mbean server.
11:31:55,367 INFO  [org.infinispan.factories.ComponentRegistry] (MSC service thread 1-3) ISPN00128: Infinispan version: Infinispan 'Pagoa' 5.0.0.CR4
11:31:55,435 INFO  [org.jboss.as] (MSC service thread 1-1) JBoss AS 7.0.0.Beta3 "Salyut" started in 7667ms - Started 106 of 153 services (47 services are passive or on-demand)

If I want JGroups to use the correct interface, I have to add the following property to standalone.xml:

    <system-properties>
        <property name="foo" value="bar"/>
        <property name="key" value="value"/>
        <property name="jboss.jgroups.udp.bind_addr" value="192.168.0.100"/>
    </system-properties>


Comment 7 Richard Achmatowicz 2011-06-09 15:36:22 UTC
All in all, if I want the build to start up with a specific default interface (which I need in order to control a cluster), I have to modify the build in the following ways:

(i) add in a -Djava.net.preferIPv4Stack=true
(ii) change the default interface in standalone.xml to 192.168.0.100
(iii) add in these system properties:
    <system-properties>
        <property name="foo" value="bar"/>
        <property name="key" value="value"/>
        <property name="jboss.infinispan.hotrod.server.host" value="192.168.0.100"/>
        <property name="jboss.infinispan.memcached.server.host" value="192.168.0.100"/>
        <property name="jboss.jgroups.udp.bind_addr" value="192.168.0.100"/>
    </system-properties>
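Step (i) amounts to a one-line addition in $EDG_DIST/bin/standalone.conf (a sketch; the surrounding JAVA_OPTS assembly differs per build):

```shell
# Append the IPv4-preference flag to whatever JAVA_OPTS the script already set
JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
```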

This stuff should really work out of the box based on the default interface.
  


Comment 8 Michal Linhard 2011-06-22 22:03:38 UTC
This is also a problem in Infinispan itself (checked with 5.0.0.CR5).

Try creating a virtual interface test1,
and then run ./startServer.sh -r hotrod -l test1 -c /path/to/infinispan.xml
(where infinispan.xml contains a distributed cache and a JGroups UDP config)

Comment 9 Michal Linhard 2011-06-22 22:08:08 UTC
btw, setting the jboss.jgroups.udp.bind_addr property doesn't help in my case, either for EDG6 or for pure ISPN.

I'm still getting:
{code}
-------------------------------------------------------------------
00:07:00,766 INFO  [stdout] (MSC service thread 1-2) GMS: address=michal-linhard-50670, cluster=DataGridPartition, physical address=10.200.130.92:55200
00:07:00,766 INFO  [stdout] (MSC service thread 1-2) -------------------------------------------------------------------
{code}
where 10.200.130.92 is my physical address; it should be 192.168.11.101 (test1)

Comment 10 Trustin Lee 2011-06-27 12:20:03 UTC
Some progress:

Previously, Hot Rod and memcached endpoint settings were read from a .properties file.  This led to duplicated configuration, because AS 7 already has centralized configuration for socket bindings.  In the latest SVN revision, this has been merged into standalone.xml so that there is no duplication, at least for the endpoints.

However, this doesn't solve the problem with JGroups.  I'm still working on replacing the datagrid:cachemanager subsystem with AS7's infinispan subsystem, but it shouldn't take long.

Comment 11 Trustin Lee 2011-06-29 07:18:42 UTC
AS 7 CR1 has just been released.  Now we can upgrade to 7.0.0.CR1 and replace the datagrid cachemanager subsystem with AS7's infinispan subsystem.  Stay tuned!

Comment 12 Richard Achmatowicz 2011-06-29 09:14:40 UTC
Go, Trustin, go!

Comment 13 Trustin Lee 2011-07-08 04:53:41 UTC
I've just replaced the datagrid cachemanager with the one in AS 7 CR1.  Please svn up and let me know how it works.  You might have to modify standalone.xml to get things working as expected, though, because I have only confirmed that it works in a single-node scenario.  Please let me know your findings so that I can fix more.

Comment 14 Trustin Lee 2011-07-11 23:23:06 UTC
Michal reported that he succeeded in launching multiple EDG instances, bound to multiple interfaces on a single machine, to form a cluster with the latest build based on AS7 CR1.  We are waiting for the tests to pass in Hudson to make sure the cluster view is maintained when the cluster is spread across multiple machines.

Comment 15 Michal Linhard 2011-07-12 16:53:33 UTC
This issue is not present anymore with the new build.

Comment 16 Anne-Louise Tangring 2011-09-26 19:41:22 UTC
Docs QE Status: Removed: NEW 


