Bug 1004900

Summary: rhq-storage cluster that spans both Windows and Linux platforms is broken

Product: [JBoss] JBoss Operations Network
Component: Storage Node
Version: JON 3.2
Target Release: JON 3.3.2
Hardware: x86_64
OS: Windows
Severity: high
Priority: unspecified
Status: CLOSED WONTFIX
Type: Bug
Doc Type: Bug Fix
Reporter: Armine Hovsepyan <ahovsepy>
Assignee: Michael Burman <miburman>
QA Contact: Armine Hovsepyan <ahovsepy>
CC: fbrychta, hrupp, jshaughn, loleary, mfoley, miburman, myarboro, snegrea, spinder
Last Closed: 2015-04-24 19:47:57 UTC
Bug Blocks: 951619
Attachments: nodeDown

Description Armine Hovsepyan 2013-09-05 17:28:18 UTC
Description of problem:
Install/uninstall of an rhq-storage node on Linux connected to an rhq-server on Windows does not work.

Version-Release number of selected component (if applicable):


How reproducible:
2 out of 2 

Steps to Reproduce:
1. Install RHQ server/storage/agent on Windows (IP1).
2. Install RHQ storage/agent on Linux (IP2) and connect it to the Windows server.
3. Run the start operation on the IP2 storage node from the server GUI.
4. Restart the storage node using the rhqctl script (see the rhqctl sketch after this list).
5. Run the undeploy operation from the Storage Nodes administration page.
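
For reference, the steps above correspond roughly to the rhqctl invocations sketched below. This is only an approximate outline, assuming default install locations and that the Linux node's configuration is already pointed at the server on IP1; exact options may differ between JON 3.2 builds.

  # On Windows (IP1): install and start the server, storage node, and agent (step 1).
  rhqctl.bat install
  rhqctl.bat start

  # On Linux (IP2): install and start only the storage node (step 2). The installer
  # is expected to also set up the agent that manages it; connecting the node to
  # the server on IP1 is assumed to be handled via the installer prompts / agent config.
  ./rhqctl install --storage
  ./rhqctl start --storage

  # Step 4: restart the Linux storage node from the command line
  # (stop/start shown rather than assuming a restart subcommand exists).
  ./rhqctl stop --storage
  ./rhqctl start --storage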


Actual results:
After step 2:
  storage node IP2 is not active -> screenshot: http://d.pr/i/1z4l
  exceptions in rhq-storage.log -> http://pastebin.test.redhat.com/162235

After step 3:
  IOException: An established connection was aborted by the software in your host machine -- in rhq-storage.log on IP1

After step 4:
  both storage nodes are in normal/available mode -> http://d.pr/i/Ip06

After step 5:
  exceptions in rhq-storage.log -> http://pastebin.test.redhat.com/162246
  storage node shown as down in the GUI while the Cassandra process is still running -> http://d.pr/i/ClRb

Expected results:
After step 2: both storage nodes are in normal mode; steps 3 and 4 should not be necessary.
After step 5: the storage node on IP2 is undeployed without issues.

Additional info:
full server and rhq-storage logs below:
rhq-storage-linux -> http://d.pr/f/XXuh
rhq-storage-windows -> http://d.pr/f/C8rw
rhq-server-windows -> http://d.pr/f/GgEq


OS IP1: Windows 2008
DB IP1: Postgres 9.2
Java IP1: Oracle JRE 1.7


OS IP2: RHEL 6.4
Java IP2: OpenJDK 1.6.0_24

Comment 2 Armine Hovsepyan 2013-09-09 15:20:34 UTC
Update:

The storage node on Linux can be connected to the server on Windows, but with an exception in rhq-storage.log -> http://pastebin.test.redhat.com/162636

Undeploy behaves as described in the reproduction steps above.

Comment 3 Armine Hovsepyan 2013-10-10 17:36:16 UTC
Update:
The storage node installed on Linux and connected to the server on Windows is not available.
Screenshot attached.
Full storage.log here -> http://d.pr/f/QmNk

Comment 4 Armine Hovsepyan 2013-10-10 17:37:00 UTC
Created attachment 810680 [details]
nodeDown

Comment 9 Jay Shaughnessy 2014-08-26 14:04:19 UTC
This should likely be looked into prior to the 3.3 release. I'm not sure, but I recall being unable to add a storage node on Linux to my existing 1-node cluster on Windows.

Comment 11 Simeon Pinder 2014-09-29 08:12:40 UTC
Moving into ER05 as it didn't make the ER04 cut.

Comment 19 Simeon Pinder 2015-01-19 20:52:51 UTC
Moving to the CR01 target milestone as it missed the ER01 cutoff.

Comment 22 Larry O'Leary 2015-04-24 19:47:57 UTC
As this is not a typical installation and no users have reported needing to distribute the storage cluster across unrelated platforms, I am closing this bug as won't fix.

Please note that this does not mean that we should not support this. The expectation is that, as long as a supported JVM is in use, this should work. Therefore, this IS a bug. However, due to available resources and the fact that this is an edge case, it does not make sense to spend the large amount of effort needed to further investigate and potentially fix this issue at this time.