Bug 1410776

Summary: [TRACKER] Console Quick Start Guide Feedback
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Anjana Suparna Sriram <asriram>
Component: documentation
Assignee: Rakesh <rghatvis>
Status: CLOSED CURRENTRELEASE
QA Contact: sds-qe-bugs
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 2
CC: agunn, dahorak, khartsoe, nthomas, sankarshan
Target Milestone: ---
Target Release: 3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-19 05:33:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Anjana Suparna Sriram 2017-01-06 12:56:13 UTC
Description of problem: Hi,
 
I have some feedback on the documentation used to install the Storage Console, and Jeff recommended I share it with the same group.
 
Thanks,
 
Alan
 
# RH Ceph Storage 2 Installation Guide
 
3.3.3 Calamari Server Installation
 
Stating "The Calamari server runs on Monitor nodes only, and only on one
Monitor  node per storage cluster" seems a vague way of expressing what
appears to be a requirement. This is an installation guide, and so perhaps
it should state the server should be installed on one, and only one, Monitor
node.
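 
For example, the section could pair that requirement with the actual commands
(calamari-server is, as far as I can tell, the package the guide has in mind,
so treat this as a sketch rather than the guide's exact wording):

    # On one, and only one, Monitor node in the cluster:
    yum install calamari-server
    calamari-ctl initialize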
 
The note that "Currently, the Calamari administrator user name and password is
hard-coded as admin and admin respectively" is out of place. First, it's not
clear that the creds are hard-coded somewhere else (the Storage Console). The
example lists admin/admin, but the note should really make it clear that these
are the only valid creds the user should specify. If you don't enter
admin/admin then the Storage Console won't appear to work!
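 
To make that explicit, the note could show the initialize prompts with the
only working answers filled in (the prompt text below is paraphrased, not a
transcript):

    # calamari-ctl initialize
    ...
    Username: admin            <- must be exactly "admin"
    Password: admin            <- must be exactly "admin"
    Password (again): admin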
 
Step 4 directs the user to enable and restart the supervisord service, but
I wonder if this step is needed. The "calamari-ctl initialize" command
displays a message that states, "Starting/enabling supervisord...", so it
looks like the work is done automatically.
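 
For reference, Step 4 boils down to the usual systemd pair, shown here only to
illustrate what looks redundant given the initialize output quoted above:

    systemctl enable supervisord
    systemctl restart supervisord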
 
# Storage Console Quickstart Guide
 
2.4.1 Configuring Red Hat Storage Console Server Ports
 
The "Ceph Installer" instructions tell you to "--add-service=ceph-installer,"
and this is followed by a note that you need to ensure the ceph-installer
package is installed beforehand (per section 3.1). The firewall-cmd will fail
if you execute it before installing the ceph-installer package. To keep things
simple, it would be much easier to direct the user to "--add-port=8181/tcp,"
which happens to be the ceph-installer's port.
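 
Concretely, something like this (the zone is an assumption; adjust to whatever
zone is actually in use):

    firewall-cmd --zone=public --add-port=8181/tcp --permanent
    firewall-cmd --reload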
 
3.3 Red Hat Storage Console Server Configuration
 
Step 2 (create superuser access to Graphite-web) should offer some guidance. I
had no previous knowledge that Graphite is part of the Storage Console
ecosystem, and so I had no idea whether I needed to create a superuser
account. What's the consequence if I don't create one? Can I go back and
create one later?
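 
If this superuser is just the standard Django admin account behind
graphite-web, even a one-line pointer would help. Something like the following
(graphite-manage is the helper shipped with the graphite-web RPM; whether the
guide intends this exact command is my assumption):

    graphite-manage syncdb    # prompts to create a superuser on first run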
 
Step 4 (enter FQDN of the node) has a surprisingly dangerous pitfall. If you
enter a real FQDN (such as the host's name) then it's absolutely necessary
for the name to be resolvable by the client. That's because the Apache
server that redirects clients to the skyring port (10443) will specify the
FQDN in the HTTP redirection, and so the redirection will fail unless the
client can resolve the FQDN, even if the client knows the IP address of the
server! In my environment, many machines have FQDNs that aren't externally
resolvable (no DNS), and so I access servers by their IP address. But even
if I try to access the Storage Console by its IP address, Apache will
redirect me to an FQDN that I cannot resolve. The workaround (which should be
highlighted) is to enter the server's IP address as its FQDN.
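 
To illustrate the failure mode (the address and host name below are made up):

    $ curl -sI http://192.0.2.10/ | grep -i '^Location'
    Location: https://console01.internal.example.com:10443/

If the client cannot resolve console01.internal.example.com, the redirect
dead-ends even though 192.0.2.10 itself is reachable. Entering 192.0.2.10 as
the "FQDN" during setup avoids this.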
 
6 Importing Cluster
 
The opening paragraph states the "Red Hat Storage Console agent must be
installed, configured and activated in each participating Ceph Storage nodes."
This effectively conveys a requirement, and I knew I had to follow the
instructions provided in order to satisfy the requirement.
 
But a little later, in Step 2, there's a note that states, "Ensure that the
selected host is a Monitor host and the Calamari server is installed and
running on it, failing which the import cluster operation will result in an
error." While true, the note is pretty vague, and "will result in an error"
just leaves the reader hanging.
 
I recommend reformatting the note so that it's consistent with the text in
the opening paragraph. This will make it clear there really are two
requirements:
  1) Storage Console agent must be running on all Ceph nodes
  2) Calamari server must be running on one (and only one!) MON node
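 
A quick pre-flight check along these lines would also help readers (the unit
names are my guesses -- the console agent rides on salt-minion, and Calamari
runs its processes under supervisord):

    # On every Ceph node:
    systemctl is-active salt-minion

    # On the single Monitor node hosting Calamari:
    systemctl is-active supervisord httpd
    supervisorctl status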
 
Step 2 Figure 6.2 shows an example of the available hosts identified by
the console agents running on the Ceph nodes, and the nodes are identified by
their host name. This is fine, except it's not clear that the Storage Console
MUST be able to resolve these names. In my OpenStack environment, the host
names and their IP addresses are assigned by the OSP Director, and there's no
easy way for the Storage Console to resolve them using DNS. What I have to do
is manually generate entries for the Storage Console's /etc/hosts file. To
that end, I recommend a note that states the host names must be resolvable,
either by DNS or entries in the Storage Console's /etc/hosts. I should also
note that the failure behavior is non-obvious. You press the "continue" button
and nothing seems to happen. I discovered it was a name lookup error only by
wading through the logs.
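 
For example (addresses and names are illustrative, in the style OSP Director
assigns them):

    # /etc/hosts on the Storage Console server
    192.168.24.10   overcloud-controller-0.localdomain    overcloud-controller-0
    192.168.24.11   overcloud-cephstorage-0.localdomain   overcloud-cephstorage-0
    192.168.24.12   overcloud-cephstorage-1.localdomain   overcloud-cephstorage-1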


Version-Release number of selected component (if applicable): 2.0


Additional info: Bob has summarized this feedback into a Google doc: https://docs.google.com/a/redhat.com/document/d/1wF7LvSACGABPgcUeGCONGTOxI2fo7mXs8NufklXuxJY/edit?usp=sharing

Comment 2 khartsoe@redhat.com 2017-01-06 13:21:04 UTC
Adding previous related feedback from Bob Buckley to also include in Console 2 revisions.

See Bob's comments following "QSG: Skipped thru to section 2.4" statement in 
https://bugzilla.redhat.com/show_bug.cgi?id=1400017#c5