Bug 1035315 - Installer not configuring sramp repo for clustered environment in *-ha profiles
Summary: Installer not configuring sramp repo for clustered environment in *-ha profiles
Keywords:
Status: ASSIGNED
Alias: None
Product: JBoss Fuse Service Works 6
Classification: JBoss
Component: Installer
Version: 6.0.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: FUTURE
Assignee: Miroslav Sochurek
QA Contact: Len DiMaggio
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-11-27 14:25 UTC by Stefan Bunciak
Modified: 2023-05-15 19:53 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The installer does not configure the S-RAMP repository for a clustered environment in the *-ha.xml profile files. As a result, the Infinispan cache container contains a local cache instead of a replicated cache, and the ModeShape repository used by S-RAMP is not configured to be clustered either. The installer should configure the *-ha.xml profiles for clustering and one shared S-RAMP repository. This is due to a problem with the S-RAMP configuration rather than with the installer itself.
Clone Of:
Environment:
Last Closed:
Type: Bug
Embargoed:


Attachments
Log of server1 (851.40 KB, text/x-log)
2014-06-20 09:08 UTC, Stefan Bunciak

Description Stefan Bunciak 2013-11-27 14:25:24 UTC
Description of problem:

The installer should configure:
* the Infinispan cache container to contain a replicated cache instead of a local cache
* the ModeShape repository used by S-RAMP to be clustered

Version-Release number of selected component (if applicable):
* FSW 6.0.0.ER6 (Beta)

How reproducible:
* 100%

Steps to Reproduce:
1.
2.
3.

Actual results:

* The installer configures the ModeShape repository for S-RAMP as a non-clustered repository: the Infinispan cache container contains a local cache instead of a replicated cache (a sketch of this local configuration follows below).
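
For contrast with the replicated configuration under Additional info, the non-clustered shape of the cache container is roughly the following. This is only a sketch: it reuses the datasource and table attributes from the example further down, and the exact attributes written by the installer may differ.

<!-- Sketch of the non-clustered (local) cache container; attribute values are
     assumed to match the JDBC store settings in the replicated example below. -->
<cache-container name="modeshape" module="org.modeshape">
	<local-cache name="sramp">
		<locking isolation="NONE" />
		<transaction mode="NON_XA" />
		<string-keyed-jdbc-store datasource="java:jboss/datasources/srampDS"
			passivation="false" purge="false">
			<string-keyed-table prefix="ispn_bucket">
				<id-column name="id" type="VARCHAR(500)" />
				<data-column name="datum" type="VARBINARY(60000)" />
				<timestamp-column name="version" type="BIGINT" />
			</string-keyed-table>
		</string-keyed-jdbc-store>
	</local-cache>
</cache-container>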

Expected results:

* The installer will configure all *-ha.xml profiles to be ready for clustering and to use one shared S-RAMP repository (so that all nodes see the same content).

Additional info:

* I used the following configuration (updated standalone-ha.xml):
* I'm no Infinispan/ModeShape expert, so someone more capable could probably suggest a better configuration
** Infinispan subsystem:

<cache-container name="modeshape" module="org.modeshape"
	start="EAGER">
	<transport lock-timeout="60000" />
	<replicated-cache name="sramp" mode="SYNC" batching="true">
		<locking isolation="NONE" />
		<transaction mode="NON_XA" />
		<string-keyed-jdbc-store datasource="java:jboss/datasources/srampDS"
			passivation="false" purge="false">
			<string-keyed-table prefix="ispn_bucket">
				<id-column name="id" type="VARCHAR(500)" />
				<data-column name="datum" type="VARBINARY(60000)" />
				<timestamp-column name="version" type="BIGINT" />
			</string-keyed-table>
		</string-keyed-jdbc-store>
	</replicated-cache>
</cache-container>
<cache-container name="modeshape-binary-cache-container"
	aliases="modeshape-binary-cache" module="org.modeshape">
	<transport lock-timeout="60000" />
	<replicated-cache name="sramp-binary-fs" mode="SYNC"
		batching="true">
		<transaction mode="NON_XA" />
		<file-store relative-to="jboss.server.data.dir"
			path="modeshape/binary-store/sramp-binary-data-${jboss.node.name}"
			passivation="false" purge="false" />
	</replicated-cache>
</cache-container>

** ModeShape subsystem:

<subsystem xmlns="urn:jboss:domain:modeshape:1.0">
	<repository name="sramp" cache-name="sramp" cache-container="modeshape"
		security-domain="overlord-idp" anonymous-roles="readonly"
		cluster-name="sramp-cluster" cluster-stack="tcp">
		<indexing rebuild-upon-startup="if_missing" />
		<local-file-index-storage
			path="modeshape/clustered-repo/${jboss.node.name}_indexes" />
		<cache-binary-storage data-cache-name="binary-fs"
			metadata-cache-name="binary-fs-meta" cache-container="modeshape-binary-cache-container" />
	</repository>
</subsystem>

** jgroups tcp stack

<protocol type="TCPPING">
	<property name="initial_hosts">0.0.0.0[7600],0.0.0.0[7600]</property>
	<property name="num_initial_members">2</property>
	<property name="port_range">0</property>
	<property name="timeout">2000</property>
</protocol>

* Using this configuration I experienced the following exceptions in the server log (but the contents were synchronized and all nodes saw the same content): http://pastebin.test.redhat.com/178801

Comment 1 Thomas Hauser 2013-12-13 17:17:38 UTC
Looks like this is an issue with the SRAMP configuration itself, and not really an installer issue (the installer simply executes the .cli files that have been written to configure SRAMP).

Comment 2 Thomas Hauser 2014-02-25 16:38:25 UTC
Adding needinfo on Eric so he can perhaps suggest changes to the current sramp configuration to resolve this issue.

Comment 3 Eric Wittmann 2014-06-09 12:14:20 UTC
Apologies for not seeing this BZ before now.  Not sure how I missed it.  Unfortunately the pastebin has expired.  Perhaps we can reproduce this and figure out what that error is again?  :)

Comment 4 Stefan Bunciak 2014-06-20 09:08:31 UTC
Created attachment 910711 [details]
Log of server1

I was able to reproduce this with FSW 6.0 GA and am attaching the log of one of the servers, which contains the errors.

Comment 5 Stefan Bunciak 2014-06-20 11:20:28 UTC
This exception disappeared when I added

   <replication-config>
      <cache-name>web/sso</cache-name>
   </replication-config>

to jboss-web.xml in all of the governance WARs. The web cache needs synchronous replication.
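
For reference, a minimal jboss-web.xml carrying only this element would look roughly as follows (a sketch; whatever else the real descriptors in the governance WARs contain is omitted here):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch: only the replication-config element from this comment. -->
<jboss-web>
   <replication-config>
      <cache-name>web/sso</cache-name>
   </replication-config>
</jboss-web>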

Comment 6 Eric Wittmann 2014-09-08 11:35:40 UTC
What impact does replication-config have when *not* running in a -ha environment?  If it's simply ignored then I'm happy to add Stefan's jboss-web.xml markup.

Comment 7 Stefan Bunciak 2014-10-20 13:17:32 UTC
My understanding is that the <distributable/> tag is processed only when running in a clustered environment. There is no performance impact when running in a single-node configuration with <distributable/> in web.xml.
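
For completeness, <distributable/> sits at the top level of the standard deployment descriptor; a minimal Servlet 3.0 web.xml containing it would look roughly like this (a sketch, not the actual descriptor shipped in the governance WARs):

<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch of a web.xml with <distributable/>; everything else that
     the real descriptors contain is omitted. -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
   <distributable/>
</web-app>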

