Bug 1157730 - Storage cluster did not 'notice' new node added, thus throughput was not changed
Summary: Storage cluster did not 'notice' new node added, thus throughput was not changed
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: JBoss Operations Network
Classification: JBoss
Component: Storage Node
Version: JON 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: CR01
Target Release: JON 3.3.0
Assignee: John Sanda
QA Contact: Armine Hovsepyan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-10-27 15:09 UTC by Armine Hovsepyan
Modified: 2015-09-03 00:03 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-29 15:41:54 UTC
Type: Bug
Embargoed:


Attachments
new_server.log (5.42 MB, text/plain)
2014-10-27 16:58 UTC, Armine Hovsepyan
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1149342 0 unspecified CLOSED Metrics Throughput can increase multiple times for the same node 2021-02-22 00:41:40 UTC

Internal Links: 1149342

Description Armine Hovsepyan 2014-10-27 15:09:40 UTC
Description of problem:
Storage cluster did not 'notice' new node added, thus throughput was not changed

Version-Release number of selected component (if applicable):
JON 3.3 ER05

How reproducible:
reproduced twice (2 trays)

Steps to Reproduce:
1. server with 2 storage nodes installed and running
2. stop both nodes
3. restart both nodes
4. add new node (3rd node)


Actual results:
After step 3, throughput is set to 30K.
After step 4, throughput is set to 60K.

Expected results:
After step 3, throughput must be 60K.
After step 4, throughput must be 90K.


Additional info:
server.log attached

Comment 3 Armine Hovsepyan 2014-10-27 16:58:28 UTC
Created attachment 951076 [details]
new_server.log

Comment 4 Michael Burman 2014-10-27 20:19:46 UTC
Since that setup doesn't exist anymore, I'll add this as a comment only for now. When the third node was added, the DataStax driver still reported only 2 nodes up (as seen from the metrics), so I'm not sure the third node had really joined the cluster at that point.

We use the number of nodes connected as reported by the driver, so 60k was correct behaviour on our side.
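
The throttling rule described above can be sketched as follows. This is a hypothetical illustration, not the actual JON code: the class name, method, and the 30,000 per-node figure are assumptions taken from the numbers reported in this bug (30K for one connected node, 60K for two).

```java
// Hypothetical sketch of the request-limit rule described in comment 4:
// the limit scales linearly with the number of storage nodes the driver
// reports as connected, at an assumed 30k requests per node.
public class StorageThrottle {

    // Assumed per-node request limit, inferred from the 30K/60K values
    // observed in this report.
    static final int REQUESTS_PER_NODE = 30_000;

    // The limit depends only on how many nodes the driver sees as up,
    // not on how many nodes were deployed.
    static int requestLimit(int connectedNodes) {
        return connectedNodes * REQUESTS_PER_NODE;
    }

    public static void main(String[] args) {
        // If the driver still reports only 2 nodes up after the third
        // node is added, the limit stays at 60k, as observed.
        System.out.println(requestLimit(2));
        System.out.println(requestLimit(3));
    }
}
```

Under this rule, the 60K observed after step 4 is consistent with the driver seeing only two connected nodes; 90K would require the third node to have actually joined.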

Comment 5 John Sanda 2014-10-27 20:27:39 UTC
I have looked at the logs some. I am not entirely clear on what happened with deploying/undeploying nodes; there were a lot of errors. The throttling changes after steps 3 and 4, however, look correct to me, since the environment reports that there are two storage nodes in the cluster. If there were three nodes in the cluster, I would expect the request limit to be 90k after step 4.

Armine, can you try to reproduce without all of the (un)deploy errors?

Comment 9 John Sanda 2014-10-29 15:41:54 UTC
After a long discussion with Armine, I do not think there is any issue here. We have not been able to reproduce the problems reported, so I am going to close this. We can open separate BZs if needed for the other things discussed.
