Bug 1027740 - Unable to fetch storage nodes when >1 nodes failed to connect to cluster
Product: RHQ Project
Classification: Other
Component: Core UI, Storage Node
x86_64 Linux
Priority: unspecified, Severity: high
: GA
: RHQ 4.10
Assigned To: John Sanda
Mike Foley
Depends On:
Blocks: 1027892
Reported: 2013-11-07 06:07 EST by Armine Hovsepyan
Modified: 2015-09-02 20:02 EDT
4 users

Doc Type: Bug Fix
Last Closed: 2014-04-23 08:30:27 EDT
Type: Bug

Attachments
unableToFetchStoragenodes (120.56 KB, image/png)
2013-11-07 06:07 EST, Armine Hovsepyan
server.log (198.96 KB, text/x-log)
2013-11-07 06:08 EST, Armine Hovsepyan
storage node UI timeout (266.85 KB, image/png)
2013-11-10 07:25 EST, John Sanda

Description Armine Hovsepyan 2013-11-07 06:07:43 EST
Created attachment 821016

Description of problem:
Unable to fetch storage nodes when >1 nodes failed to connect to cluster

Version-Release number of selected component (if applicable):
rhq 4.10 d3ea23b

How reproducible:
2 out of 2

Steps to Reproduce:
 1. Install the server with N1.
 2. Install N2.
 3. Install N3.
 4. Install N4 so that the add fails and the node ends up in maintenance state.
 5. Stop N2, N3, and N4.
 6. Remove rhq-storage-auth from N1.
 7. Install N5; it tries to connect to N2 and fails.
 8. Start N2, N3, and N4.
 9. Restore rhq-storage-auth on N1.
 10. Restart N5.
 11. Manually add N1's IP to N5's rhq-storage-auth.
 12. Manually add N5's IP to N1's rhq-storage-auth.
 13. Restart N5.

Actual results:
Unable to see nodes in Administration -> Storage Nodes.

Expected results:
Storage node administration still functions.

Additional info:
Screenshot and a fragment from server.log are attached.
Comment 1 Armine Hovsepyan 2013-11-07 06:08:20 EST
Created attachment 821017
Comment 2 Mike Foley 2013-11-07 07:21:15 EST
Is there a corresponding JON BZ? If not, please consider cloning this and treating it as a blocker for JON 3.2 GA.
Comment 5 Jirka Kremser 2013-11-07 09:03:01 EST
Armine, could you try to minimize the repro steps? Is it necessary to install all 5 storage nodes?
Comment 6 Heiko W. Rupp 2013-11-07 09:03:40 EST
I don't think this is a typical case; I consider it rather a documentation issue, where we should point out that adding new storage nodes requires at least one storage node that is up and running correctly.
Comment 7 Armine Hovsepyan 2013-11-07 09:07:51 EST
The issue was hit while reproducing bug 1026088, but yes, it can be considered a "bad" scenario.
Comment 8 Jirka Kremser 2013-11-07 11:00:40 EST
There is an error in the UI because the call fetching the nodes times out. The server log says: Cassandra timeout during read query at consistency ONE (1 responses were required but only 0 replica responded)

Perhaps it is related to bug 1017372.
Comment 9 John Sanda 2013-11-10 07:25:32 EST
Created attachment 822064
storage node UI timeout
Comment 10 John Sanda 2013-11-10 07:26:00 EST
I do not think that the root cause has anything to do with querying a single node. Nor do I think the issue is a Cassandra read timeout. These things certainly can and will make fetching storage nodes in the admin view take longer; however, the problem is much more fundamental in how we query the storage nodes. The method in question, StorageNodeManagerBean.getStorageNodeComposites, performs a lot of work for each storage node.

The issue can be reproduced much more easily than with the steps in the initial description. I did so by setting up a 5 node cluster with virtually no load on either postgres or the storage cluster. I updated getStorageNodeComposites to log the execution time. Here are the results from the last two attempts that timed out in the UI:

07:07:51,516 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-jsanda-storage5.bc.jonqe.lab.eng.bos.redhat.com/ Retrieved storage node composites in 26272 ms
07:10:53,341 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-jsanda-storage5.bc.jonqe.lab.eng.bos.redhat.com/ Retrieved storage node composites in 28589 ms

My server.log shows that the method completed without error but the results do not make it to the UI because the server simply takes too long to respond. I am attaching a screenshot of the UI.
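The timing measurements quoted in these log lines can be captured with a simple elapsed-time log around the composite query. The sketch below is only an illustration of that instrumentation; the class name, the dummy node list, and the use of System.out are stand-ins, not the actual RHQ code, which lives in StorageNodeManagerBean and uses its own logging API.

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of elapsed-time logging around the composite query.
public class TimedComposites {
    // Stand-in for the expensive per-node composite query.
    static List<String> getStorageNodeComposites() {
        return List.of("storage1", "storage2", "storage3", "storage4", "storage5");
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        List<String> composites = getStorageNodeComposites();
        long elapsed = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        // Mirrors the shape of the log lines quoted above.
        System.out.println("Retrieved " + composites.size()
            + " storage node composites in " + elapsed + " ms");
    }
}
```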
Comment 11 John Sanda 2013-11-10 14:16:10 EST
I deployed another node, increasing the cluster size to 6 nodes. Here is my server.log from several repeated, failed attempts to load the storage nodes view.

14:08:59,849 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-jsanda-storage5.bc.jonqe.lab.eng.bos.redhat.com/ Retrieved storage node composites in 25856 ms
14:10:37,558 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-jsanda-storage5.bc.jonqe.lab.eng.bos.redhat.com/ Retrieved storage node composites in 25958 ms
14:11:50,880 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-jsanda-storage5.bc.jonqe.lab.eng.bos.redhat.com/ Retrieved storage node composites in 25866 ms
14:12:51,662 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-jsanda-storage5.bc.jonqe.lab.eng.bos.redhat.com/ Retrieved storage node composites in 25721 ms

The numbers will get worse when the database and/or cluster has some actual load and as the number of nodes increases.
Comment 12 Jirka Kremser 2013-11-11 08:26:23 EST
Trying 6 nodes on 1 machine:

14:19:19,549 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-/ Retrieved storage node composites in 2492 ms
14:19:23,378 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-/ Retrieved storage node composites in 2518 ms
14:19:27,138 INFO  [org.rhq.enterprise.server.cloud.StorageNodeManagerBean] (http-/ Retrieved storage node composites in 2435 ms

There is actually just one node installed in my environment, but six records in the relational DB pointing to the same resource.

What it shows is that the time complexity is linear in the number of nodes (for just one node the time was about 400 ms (= 2492 / 6)), provided there is no inter-node communication.

In a real environment, where the storage nodes are interconnected, the situation is obviously different and the complexity is not linear.

Still investigating...
Comment 13 Jirka Kremser 2013-11-11 09:18:46 EST
The StorageNodeManager.getLoad() method was calling MeasurementManager.getMeasurementAggregate(), which in turn called MetricsServer.getSummaryAggregate().

By removing this intermediate call I was able to improve the times to ~700 ms.
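The call-chain change described above can be sketched as follows. The class and method names mirror the ones in the comment, but the bodies and return values are hypothetical stand-ins; in RHQ the removed layer is a session bean hop with its own interceptor and transaction overhead.

```java
// Hypothetical sketch of the call chain before and after the change.
class MetricsServer {
    // Stand-in for the read against the metrics store.
    static double getSummaryAggregate(int scheduleId) {
        return 42.0;
    }
}

class MeasurementManager {
    // Before the change: a pass-through layer that adds an extra hop
    // without contributing any work of its own.
    static double getMeasurementAggregate(int scheduleId) {
        return MetricsServer.getSummaryAggregate(scheduleId);
    }
}

public class GetLoadPath {
    public static void main(String[] args) {
        // Before: getLoad() -> MeasurementManager -> MetricsServer
        double before = MeasurementManager.getMeasurementAggregate(1);
        // After: getLoad() calls MetricsServer directly, one fewer hop.
        double after = MetricsServer.getSummaryAggregate(1);
        System.out.println(before == after);
    }
}
```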
Comment 14 John Sanda 2013-11-11 23:21:17 EST
I have pushed a few commits to the branch jsanda/load-storage-composites (https://git.fedorahosted.org/cgit/rhq/rhq.git/log/?h=jsanda/load-storage-composites). Jirka and I discussed some of those changes earlier today. I made three changes:

1) Retrieve unacknowledged alerts in one query instead of N (where N is number of storage nodes).

2) Fetch metric data directly from MetricsServer instead of going through MeasurementDataManagerBean.getMeasurementAggregate.

3) Load metric data asynchronously so that data for multiple schedules can be fetched in parallel.

With these changes, the time to load the storage node composites drops from about 25 secs to about 2.5 seconds. The disparity should be even greater with some actual load on either postgres or on the storage cluster.
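Changes (1) and (3) above can be sketched as below. Everything here is a hypothetical stand-in for illustration only: the method names, the dummy query results, and the fixed-size thread pool are assumptions, not the RHQ implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CompositeLoader {

    // (1) One grouped query for unacknowledged alert counts, keyed by node id,
    // instead of one query per storage node.
    static Map<Integer, Long> unackedAlertCounts(List<Integer> nodeIds) {
        Map<Integer, Long> counts = new HashMap<>();
        for (int id : nodeIds) {
            counts.put(id, 0L); // stand-in for a single GROUP BY query result
        }
        return counts;
    }

    // Stand-in for one per-schedule read against the metrics store.
    static double fetchAggregate(int scheduleId) {
        return scheduleId * 1.5;
    }

    // (3) Fetch the per-schedule aggregates in parallel instead of serially,
    // so slow reads overlap rather than accumulate.
    static Map<Integer, Double> loadAggregates(List<Integer> scheduleIds)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Double>> futures = new ArrayList<>();
            for (int id : scheduleIds) {
                Callable<Double> task = () -> fetchAggregate(id);
                futures.add(pool.submit(task));
            }
            Map<Integer, Double> result = new HashMap<>();
            for (int i = 0; i < scheduleIds.size(); i++) {
                result.put(scheduleIds.get(i), futures.get(i).get());
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }
}
```

With N nodes, the serial version pays N round trips for alerts plus one per schedule; the batched query and the parallel fetch together are consistent with the roughly 10x drop reported above.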
Comment 15 John Sanda 2013-11-12 20:13:08 EST
Changes have been pushed to master.

master commit hashes:
Comment 16 John Sanda 2013-11-14 13:27:21 EST
Two more commits that went into master for this.


I also want to note the dramatic performance improvements as a result of all the changes. I have an 8 node cluster and two servers with which I have been testing. One server has been running the unoptimized code while the other has been running the optimized versions. The unoptimized version of StorageNodeManagerBean.getStorageNodeComposites took over 200 seconds to complete.  With all of the optimizations in place, it takes less than 200 milliseconds.
Comment 17 Heiko W. Rupp 2014-04-23 08:30:27 EDT
Bulk closing of 4.10 issues.

If an issue is not solved for you, please open a new BZ (or clone the existing one) with a version designator of 4.10.
