Bug 1433061 - Exceptions in Hawkular-Metrics pod
Summary: Exceptions in Hawkular-Metrics pod
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Hawkular
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Matt Wringe
QA Contact: Peng Li
URL:
Whiteboard:
Depends On:
Blocks: 1435436
 
Reported: 2017-03-16 16:54 UTC by Viet Nguyen
Modified: 2018-07-26 19:09 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1435436
Environment:
Last Closed: 2017-03-23 16:01:46 UTC
Target Upstream Version:
Embargoed:


Attachments
log files (48.64 KB, application/zip)
2017-03-16 16:59 UTC, Viet Nguyen

Description Viet Nguyen 2017-03-16 16:54:45 UTC
Description of problem:
- Single VM with 8 GB of RAM, 1 node/1 master on the same VM
- After running 30 Node.js test pods, I started seeing exceptions in the hawkular-metrics log. I then deleted the hawkular-metrics and cassandra pods, but the new pods did not fix the problem.

Version-Release number of selected component (if applicable):
- Origin: 1.4.1 
- Metrics: 1.4.1
- CentOS 7
- Deployed on Red Hat public OS1, a single VM with 8 GB of RAM

How reproducible:
100%

Steps to Reproduce:
1. Install Metrics as instructed
2. Install Hawkular OpenShift Agent
3. Deploy some user pods (a load-generation sketch follows these steps)
4. After several days of running and collecting metrics, the CPU, RAM, and Network graphs are blank. I attempted to delete the Hawkular-Metrics and Cassandra pods, to no avail.
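
The load in step 3 can be generated with a short script. The following is a minimal sketch, assuming the kubernetes Python client, a reachable cluster, an existing "metrics-test" project, and the centos/nodejs-6-centos7 image; all of these names are illustrative, not taken from this report.

    # Hypothetical load generator: create 30 trivial Node.js pods.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for i in range(30):
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name=f"nodejs-test-{i}"),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="app",
                    image="centos/nodejs-6-centos7",  # assumed image
                    command=["sleep", "infinity"],
                )
            ]),
        )
        v1.create_namespaced_pod(namespace="metrics-test", body=pod)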

Actual results:
Exceptions in the hawkular-metrics pod. No metrics data is returned to a Python test client (a minimal client is sketched below).
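
A minimal sketch of such a test client, assuming the requests library, the Hawkular Metrics REST API (Hawkular-Tenant header, /hawkular/metrics/gauges endpoint), and placeholder values for the route, token, and tenant:

    import requests

    # All three values are placeholders; the token is typically the output
    # of `oc whoami -t` and the tenant is the OpenShift project name.
    HAWKULAR_URL = "https://hawkular-metrics.example.com/hawkular/metrics"
    TOKEN = "<openshift-bearer-token>"
    TENANT = "metrics-test"

    headers = {
        "Authorization": "Bearer " + TOKEN,
        "Hawkular-Tenant": TENANT,
    }

    # List the gauge metric definitions for this tenant; an empty list
    # matches the "no metrics data returned" symptom.
    resp = requests.get(HAWKULAR_URL + "/gauges", headers=headers, verify=False)
    resp.raise_for_status()
    print(resp.json())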

Expected results:
- The UI displays metrics

Additional info:

Comment 1 Viet Nguyen 2017-03-16 16:59:34 UTC
Created attachment 1263754 [details]
log files

Comment 2 Matt Wringe 2017-03-16 17:44:50 UTC
Do you have any resource limits applied to the Hawkular Metrics, Cassandra, and Heapster pods?

If you are trying to do scalability testing, you need to run the infrastructure components on one OpenShift node and have the pods to be monitored on another. Trying to run everything on one node for testing purposes is not a good idea; all your pods will be competing for resources.
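
One hypothetical way to enforce that separation, using the kubernetes Python client: label one node for the metrics infrastructure and another for the monitored workload, then schedule the metrics components and test pods onto them via nodeSelector. The node names and labels below are assumptions, not values from this bug.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Placeholder node names; the metrics components would then carry a
    # nodeSelector of region=infra and the test pods region=primary.
    v1.patch_node("infra-node", {"metadata": {"labels": {"region": "infra"}}})
    v1.patch_node("worker-node", {"metadata": {"labels": {"region": "primary"}}})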

