Bug 989970

Summary: CLI upgrade command does not work
Product: [JBoss] JBoss Data Grid 6
Reporter: Vitalii Chepeliuk <vchepeli>
Component: Infinispan
Assignee: Tristan Tarrant <ttarrant>
Status: VERIFIED
QA Contact: Martin Gencur <mgencur>
Severity: high
Priority: medium
Version: 6.2.0
CC: jdg-bugs
Target Milestone: CR2
Target Release: 6.2.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Known Issue
Doc Text:
When data is migrated via the CLI from a node running an old version of JBoss Data Grid to a node running a new version, an error occurs during synchronization. This happens when the upgrade command is invoked on the new node with the parameter --synchronize=hotrod. As a result, data is not migrated properly from the old node to the new one.
Type: Bug

Description Vitalii Chepeliuk 2013-07-30 08:32:25 UTC
Description of problem:
See Description of ISPN-3376

Comment 2 JBoss JIRA Server 2013-09-10 08:18:25 UTC
Tristan Tarrant <ttarrant> made a comment on jira ISPN-3376

Can you please include source and target logs ?

Comment 3 JBoss JIRA Server 2013-09-12 14:49:15 UTC
Vitalii Chepeliuk <vchepeli> made a comment on jira ISPN-3376

Added server logs

Comment 4 Vitalii Chepeliuk 2013-11-25 12:51:13 UTC
Tested, and it looks like this bug is not fixed. The same error occurs as before.
I did the following:
    In order to perform a rolling upgrade of a Hot Rod cluster, the following steps must be taken:
    1. Configure and start a new cluster with a RemoteCacheStore pointing to the old cluster and the hotRodWrapping flag enabled.
    2. Configure all clients so that they will connect to the new cluster.
    3. Invoke the upgrade --dumpkeys command on the old cluster for all of the caches that need to be migrated.
    4. Invoke the upgrade --synchronize=hotrod command on the new cluster to ensure that all data is migrated from the old cluster to the new one.
    5. Invoke the upgrade --disconnectsource=hotrod command on the new cluster to disable the RemoteCacheStore used to migrate the data.
    6. Switch off the old cluster.
and got the following exception:
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
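For reference, steps 3-5 above map onto CLI invocations like the following (a sketch only, using the default cache and the prompts/ports that appear in the transcripts in this report; the exact flag combinations accepted per command may differ):

```
# On the old (source) cluster:
[remoting://localhost:9999/local/default]> upgrade --dumpkeys default

# On the new (target) cluster:
[remoting://localhost:10099/local/default]> upgrade --synchronize=hotrod default
[remoting://localhost:10099/local/default]> upgrade --disconnectsource=hotrod default
```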

Comment 5 Tristan Tarrant 2013-12-12 17:51:02 UTC
Vitalii: I have tried several times and cannot reproduce this issue.

Comment 6 Vitalii Chepeliuk 2013-12-13 14:46:37 UTC
Here is what I was doing:
6.1.0.GA server
[remoting://localhost:9999/local/default]> upgrade --dumpkeys default
ISPN019502: Dumped keys for cache default

[remoting://localhost:9999/local/default]> upgrade --dumpkeys --all
ISPN019502: Dumped keys for cache default
ISPN019502: Dumped keys for cache namedCache
ISPN019502: Dumped keys for cache memcachedCache
ISPN019502: Dumped keys for cache ___defaultcache

[remoting://localhost:9999/local/default]> 
6.2.0.ER4 server
[remoting://localhost:10110/local/]> upgrade --synchronize=hotrod default
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
[remoting://localhost:10110/local/]>  upgrade --synchronize=hotrod 
ISPN019016: No such cache: 'null'
[remoting://localhost:10110/local/]> upgrade --synchronize=hotrod default
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
[remoting://localhost:10110/local/]> 
[remoting://localhost:10110/local/default]> upgrade --synchronize=hotrod --all
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server


Maybe I am missing something, but I am doing what the "help upgrade" USAGE
section says.

Comment 7 Tristan Tarrant 2013-12-13 19:14:11 UTC
ah, it looks like it's confusing the cache name with the synchronizer name

Comment 8 Vitalii Chepeliuk 2013-12-16 08:37:15 UTC
(In reply to Tristan Tarrant from comment #7)
> ah, it looks like it's confusing the cache name with the synchronizer name

Yes, I think that is the only problem remaining.
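The swap is visible in the message itself. Here is a hypothetical illustration (not the actual Infinispan source) of how passing two format arguments in the wrong order produces exactly the message observed above:

```shell
# Message template as it appears in the logs in this report
template="ISPN019026: An error occurred while synchronizing data for cache '%s' using migrator '%s' from the source server"
cache="default"      # the cache actually being synchronized
migrator="hotrod"    # the synchronizer/migrator actually in use

# Buggy call site: migrator and cache swapped -> matches the observed message
buggy=$(printf "$template" "$migrator" "$cache")
# Fixed call site: cache first, migrator second
fixed=$(printf "$template" "$cache" "$migrator")
echo "$buggy"
echo "$fixed"
```

With the arguments swapped, the output names the cache 'hotrod' and the migrator 'default', exactly as seen in comments 4, 6, 13, and 14.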

Comment 9 JBoss JIRA Server 2013-12-19 12:33:54 UTC
Mircea Markus <mmarkus> updated the status of jira ISPN-3376 to Resolved

Comment 10 Vitalii Chepeliuk 2014-01-05 22:40:38 UTC
The CR1 client uses the new Hot Rod protocol, version 1.3. When a 1.3 client communicates with a 6.1.0.GA server, the following exception is thrown:
23:36:05,923 ERROR [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-4) ISPN005003: Exception reported: org.infinispan.server.hotrod.RequestParsingException: Unable to parse header
	at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.scala:94) [infinispan-server-hotrod-5.2.4.Final-redhat-1.jar:5.2.4.Final-redhat-1]
	at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.scala:45) [infinispan-server-hotrod-5.2.4.Final-redhat-1.jar:5.2.4.Final-redhat-1]
	at org.infinispan.server.core.AbstractProtocolDecoder.decodeHeader(AbstractProtocolDecoder.scala:94) [infinispan-server-core-5.2.4.Final-redhat-1.jar:5.2.4.Final-redhat-1]
	at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:70) [infinispan-server-core-5.2.4.Final-redhat-1.jar:5.2.4.Final-redhat-1]
	at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:47) [infinispan-server-core-5.2.4.Final-redhat-1.jar:5.2.4.Final-redhat-1]
	at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.infinispan.server.core.AbstractProtocolDecoder.messageReceived(AbstractProtocolDecoder.scala:387) [infinispan-server-core-5.2.4.Final-redhat-1.jar:5.2.4.Final-redhat-1]
	at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.6.2.Final-redhat-1.jar:3.6.2.Final-redhat-1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
	at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
Caused by: org.infinispan.server.hotrod.UnknownVersionException: Unknown version:13
	at org.infinispan.server.hotrod.HotRodDecoder.readHeader(HotRodDecoder.scala:80) [infinispan-server-hotrod-5.2.4.Final-redhat-1.jar:5.2.4.Final-redhat-1]
	... 22 more

There is no option for the upgrade command to specify the Hot Rod client protocol version to be used.
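For completeness: the standalone Java Hot Rod client can be pinned to an older protocol version through its configuration properties. This is a client-side setting, not an option of the CLI upgrade command, so it does not by itself fix the synchronizer; it is shown only as a sketch of the missing knob:

```properties
# hotrod-client.properties (Hot Rod Java client configuration; a sketch of
# a client-side workaround, not an upgrade-command option)
infinispan.client.hotrod.protocol_version = 1.2
```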

Comment 12 Martin Gencur 2014-01-10 09:23:46 UTC
Works nicely in CR2. Verified. 

The incorrect warning "ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server" (incorrect only because the migrator and the cache name are swapped in it) might still appear, but only if there is another problem, such as a configuration issue.

The rolling upgrade process is smooth when performed correctly.

Comment 13 Vitalii Chepeliuk 2014-05-27 09:36:18 UTC
So I think this command still does not work properly.
Steps to reproduce
1) Unzip the ER4 build and copy it to the server0 folder
2) Unzip the ER4 build and copy it to the server1 folder
3) Copy the sample configurations from the examples/configs folder:
   a) cp server1/docs/examples/configs/standalone-hotrod-rolling-upgrade.xml server1/standalone/configuration/
   b) cp server1/docs/examples/configs/standalone-rest-rolling-upgrade.xml server1/standalone/configuration/
4) Change the configuration in server1/standalone/configuration/standalone-hotrod-rolling-upgrade.xml:
   a) change the default port-offset from 0 to 100
   b) change remote-host to localhost in the outbound socket binding configuration
5) Start the servers:
   a) start server0: server0/bin/standalone.sh -c standalone.xml
   b) start server1: server1/bin/standalone.sh -c standalone-hotrod-rolling-upgrade.xml
6) Connect to the servers via the CLI:
  a) server0/bin/ispn-cli.sh
     connect localhost:9999
     cache default
     put --codec=hotrod key1 val1
     put --codec=hotrod key2 val2
     put --codec=hotrod key3 val3
     upgrade --dumpkeys --all
  You should see the messages:
   ISPN019502: Dumped keys for cache namedCache
   ISPN019502: Dumped keys for cache default
   ISPN019502: Dumped keys for cache memcachedCache
   ISPN019502: Dumped keys for cache ___defaultcache
  b) server1/bin/ispn-cli.sh
     connect localhost:10099
     upgrade --synchronize=hotrod
  The following error message is shown:
    ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
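Step 4 above amounts to edits like the following in standalone-hotrod-rolling-upgrade.xml. The element names below follow the standard JBoss AS7/EAP socket-binding schema; the actual binding names in the example file may differ, so treat this as a sketch:

```xml
<!-- Step 4a: offset all server1 ports by 100 so the two servers
     can coexist on one host -->
<socket-binding-group name="standard-sockets" default-interface="public"
                      port-offset="100">
    ...
    <!-- Step 4b: point the outbound binding used by the RemoteCacheStore
         at the old cluster on localhost (binding name is hypothetical;
         11222 is the default Hot Rod port) -->
    <outbound-socket-binding name="remote-store-hotrod-server">
        <remote-destination host="localhost" port="11222"/>
    </outbound-socket-binding>
</socket-binding-group>
```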

Comment 14 Vitalii Chepeliuk 2014-05-27 09:59:51 UTC
Here is the output from my console:
[standalone@localhost:10099 /] upgrade --synchronize=hotrod
ISPN019016: No such cache: 'null'
[standalone@localhost:10099 /] upgrade --synchronize=hotrod default
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
[standalone@localhost:10099 /] upgrade --synchronize=hotrod --all
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
[standalone@localhost:10099 /] upgrade --synchronize=hotrod --all
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
[standalone@localhost:10099 /] cache default
[standalone@localhost:10099 /] upgrade --synchronize=hotrod
ISPN019026: An error occurred while synchronizing data for cache 'hotrod' using migrator 'default' from the source server
[standalone@localhost:10099 /]