Description of problem: Hi,
I have some feedback on the documentation used to install the Storage Console, and Jeff recommended I share it with the same group.
# RH Ceph Storage 2 Installation Guide
3.3.3 Calamari Server Installation
Stating "The Calamari server runs on Monitor nodes only, and only on one
Monitor node per storage cluster" seems a vague way of expressing what
appears to be a requirement. Since this is an installation guide, perhaps
it should state that the server must be installed on one, and only one,
Monitor node per storage cluster.
The note that "Currently, the Calamari administrator user name and password is
hard-coded as admin and admin respectively" is out of place. First, it's not
clear that the credentials are hard-coded somewhere else (in the Storage
Console). Second, the example lists admin/admin, but the note should make it
clear that these are the only valid credentials the user may specify. If you
don't enter admin/admin, the Storage Console won't appear to work!
Step 4 directs the user to enable and restart the supervisord service, but
I wonder if this step is needed. The "calamari-ctl initialize" command
displays a message that states, "Starting/enabling supervisord..." so it
looks like the work is done automatically.
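Rather than repeating the step by hand, the user could simply verify the state left by "calamari-ctl initialize" (a minimal sketch; assumes a systemd host with supervisord installed):

```shell
# After `calamari-ctl initialize`, check whether supervisord was
# already enabled and started, before running the guide's Step 4.
systemctl is-enabled supervisord
systemctl is-active supervisord
```

If both commands report "enabled" and "active", Step 4 is redundant.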
# Storage Console Quickstart Guide
2.4.1 Configuring Red Hat Storage Console Server Ports
The "Ceph Installer" instructions tell you to "--add-service=ceph-installer,"
and this is followed by a note that you need to ensure the ceph-installer
package is installed beforehand (per section 3.1). The firewall-cmd will fail
if you execute it before installing the ceph-installer package. To keep things
simple, it would be much easier to direct the user to "--add-port=8181/tcp,"
which happens to be the ceph-installer's port.
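A hedged sketch of that simpler port-based rule (8181 is the ceph-installer port; whether you also need --permanent and a reload depends on your firewalld configuration):

```shell
# Open the ceph-installer port directly, instead of referencing the
# ceph-installer service definition (which only exists after the
# ceph-installer package is installed).
firewall-cmd --add-port=8181/tcp --permanent
firewall-cmd --reload
```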
3.3 Red Hat Storage Console Server Configuration
Step 2 (create superuser access to Graphite-web) should offer some guidance. I
had no previous knowledge that Graphite is part of the Storage Console
ecosystem, and so I had no idea whether I needed to create a superuser
account. What's the consequence if I don't create one? Can I go back and
create one later?
Step 4 (enter FQDN of the node) has a surprisingly dangerous pitfall. If you
enter a real FQDN (such as the host's name) then it's absolutely necessary
for the name to be resolvable by the client. That's because the Apache
server that redirects clients to the skyring port (10443) will specify the
FQDN in the HTTP redirection, and so the redirection will fail unless the
client can resolve the FQDN, even if the client knows the IP address of the
server! In my environment, many machines have FQDNs that aren't externally
resolvable (no DNS), and so I access servers by their IP address. But even
if I try to access the Storage Console by its IP address, Apache will
redirect me to an FQDN that I cannot resolve. The workaround (which should be
highlighted) is to enter the server's IP address as its FQDN.
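Before entering an FQDN in Step 4, a client-side preflight check could catch this pitfall (a minimal sketch; "localhost" stands in for the console's FQDN):

```shell
# getent consults the same resolver sources the client will use
# (DNS, /etc/hosts), so it is a fair test of whether the Apache
# redirect to the FQDN will work from this machine.
name=localhost   # substitute the FQDN you plan to enter in Step 4
if getent hosts "$name" > /dev/null; then
    echo "resolvable"
else
    echo "NOT resolvable"
fi
```

Running this from a representative client, not just the server itself, is what actually exercises the failure mode described above.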
6 Importing Cluster
The opening paragraph states the "Red Hat Storage Console agent must be
installed, configured and activated in each participating Ceph Storage nodes."
This effectively conveys a requirement, and I knew I had to follow the
instructions provided in order to satisfy the requirement.
But a little later, in Step 2, there's a note that states, "Ensure that the
selected host is a Monitor host and the Calamari server is installed and
running on it, failing which the import cluster operation will result in an
error." While true, the note is pretty vague, and "will result in an error"
just leaves the reader hanging.
I recommend reformatting the note so that it's consistent with the text in
the opening paragraph. This will make it clear there really are two
requirements:
1) the Storage Console agent must be running on all Ceph nodes
2) the Calamari server must be running on one (and only one!) MON node
In Step 2, Figure 6.2 shows an example of the available hosts identified by
the console agents running on the Ceph nodes, and the nodes are identified by
their host name. This is fine, except it's not clear that the Storage Console
MUST be able to resolve these names. In my OpenStack environment, the host
names and their IP addresses are assigned by the OSP Director, and there's no
easy way for the Storage Console to resolve them using DNS. What I have to do
is manually generate entries for the Storage Console's /etc/hosts file. To
that end, I recommend a note that states the host names must be resolvable,
either by DNS or entries in the Storage Console's /etc/hosts. I should also
note that the failure behavior is non-obvious. You press the "continue" button
and nothing seems to happen. I discovered it was a name lookup error only by
wading through the logs.
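The manual workaround might look like this (the host names and addresses below are illustrative placeholders, not values from the guide):

```shell
# Append resolver entries for the Ceph nodes to the Storage Console's
# /etc/hosts, so the console can resolve the host names reported by
# its agents. Replace names and addresses with your environment's.
cat >> /etc/hosts <<'EOF'
192.0.2.11  ceph-mon-0
192.0.2.21  ceph-osd-0
192.0.2.22  ceph-osd-1
EOF
```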
Version-Release number of selected component (if applicable): 2.0
Additional info: Bob has summarized this feedback in a Google doc: https://docs.google.com/a/redhat.com/document/d/1wF7LvSACGABPgcUeGCONGTOxI2fo7mXs8NufklXuxJY/edit?usp=sharing
Adding previous related feedback from Bob Buckley to also include in Console 2 revisions.
See Bob's comments following "QSG: Skipped thru to section 2.4" statement in