Bug 1496917

Summary: Troubleshooting: Should have a case for ports 6800:7100 not open for ceph-mgr.
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Documentation
Version: 3.0
Target Milestone: rc
Target Release: 3.1
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified
Reporter: John Wilkins <jowilkin>
Assignee: John Wilkins <jowilkin>
QA Contact: Persona non grata <nobody+410372>
CC: anharris, asriram, hnallurv, jowilkin, kdreyer, khartsoe
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2019-07-09 08:32:47 UTC

Description John Wilkins 2017-09-28 18:38:19 UTC
Description of problem:

In RHCS 3.0, all cluster statistics are reported to the ceph-mgr daemon. For this to work, the same port range that is open for the Ceph OSDs (6800-7100) must also be open on the node running ceph-mgr. If it is not, problems arise.
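
For context, opening that range with firewalld on the ceph-mgr node would look roughly like the following. This is a minimal sketch; the "public" zone is an assumption and should be replaced with the node's active zone:

    # Open the OSD/mgr port range on the node running ceph-mgr
    # (zone "public" is an assumption; substitute the active zone)
    firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
    firewall-cmd --reload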

The cluster soon degrades from HEALTH_OK to HEALTH_WARN, with PGs stuck and stale. ceph -s or ceph -w will show that 100% of placement groups are "unknown."
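
A quick way to confirm the symptom from an admin node is with the standard status commands (exact output will vary by cluster):

    # Cluster health summary; expect HEALTH_WARN with PGs reported as unknown
    ceph -s
    # Expanded health messages explaining the warning
    ceph health detail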

Expected results:

The user expects to see HEALTH_OK, but instead sees HEALTH_WARN.


Additional info:

We probably need to cover three cases: the ceph-mgr daemon is not installed, it is installed but not running, and it is running but the required ports are not open.
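
A rough diagnostic sketch covering those three cases, assuming a systemd-based deployment (the <hostname> placeholder is illustrative; the mgr instance id varies per node):

    # 1. Not installed / not running: check the systemd unit on the mgr node
    systemctl status ceph-mgr@<hostname>
    # 2. Confirm a mgr daemon has registered with the cluster
    ceph mgr dump
    # 3. Running but ports closed: verify the daemon is listening
    #    and that the port range is open in the firewall
    ss -tlnp | grep ceph-mgr          # run on the mgr node
    firewall-cmd --list-ports         # 6800-7100/tcp should appear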

Comment 7 Harish NV Rao 2018-08-14 11:24:28 UTC
Please share the actual doc link

Comment 8 Harish NV Rao 2018-08-16 10:35:16 UTC
(In reply to Harish NV Rao from comment #7)
> Please share the actual doc link

A gentle reminder.