Bug 1400017
Summary: Updates required for console agent install procedure in the 2.0 Ceph Install Guide
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Anjana Suparna Sriram <asriram>
Component: Documentation
Assignee: Aron Gunn <agunn>
Status: CLOSED CURRENTRELEASE
QA Contact: Vidushi Mishra <vimishra>
Severity: high
Priority: high
Version: 2.1
CC: agunn, bbuckley, dahorak, hnallurv, kdreyer, khartsoe, mkudlej
Target Milestone: rc
Flags: agunn: needinfo+
Target Release: 2.2
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2017-03-21 23:48:54 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
Description
Anjana Suparna Sriram
2016-11-30 09:59:22 UTC
Sudo is not missing, because the commands are meant to be run under the root account: <cite>As root, install and configure the Red Hat Storage Console Agent:</cite> The same sentence is also in the Ubuntu book: https://access.redhat.com/documentation/en/red-hat-ceph-storage/2/single/installation-guide-for-ubuntu#installing_and_configuring_the_red_hat_storage_console_agent But you're right that commands with sudo should be preferred in the Ubuntu book.

There is no need to set up passwordless ssh for node installation for the Console. This is done in the step: curl console2:8181/setup/agent | bash

Comment tracking document: https://docs.google.com/document/d/1kEX1sntjlEyjfmLIYiGkU9_pnnScpp1iyuU1Z7EoemM/edit#

Take no offense at my notes; it's just my style, as I write notes while going thru the steps. Short version: still a good amount of confusion (I can elaborate if needed), but the console is up and the hosts are slowly appearing.

Testing with the new doc and the following VMs:
  1 admin node:  ceph-admin (192.168.122.100)
  1 client node: client (192.168.122.130)
  3 mon nodes:   mon1, mon2, mon3 (192.168.122.101-103)
  3 osd nodes:   osd1, osd2, osd3 (192.168.122.111-113)

I do have a few variations for VMs versus a real install on bare metal in a true operations environment (with DNS, IDM, etc.), so...
Section 2.2
Log into ceph-admin as root:
  # subscription-manager register
  # subscription-manager refresh
  # subscription-manager attach --pool=8a85f981568e999d01568ed222cd6712
  # subscription-manager repos --disable '*'
  # subscription-manager repos --enable=rhel-7-server-rpms
  # yum update -y
Then ssh to mon1, mon2, mon3, osd1, osd2, osd3, and client, and repeat the subscription-manager and yum steps on each.

Section 2.3
On the MONs:
  # subscription-manager repos --enable=rhel-7-server-rhceph-2-mon-rpms
On the OSDs:
  # subscription-manager repos --enable=rhel-7-server-rhceph-2-osd-rpms
On the client:
  # subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms

Console agent: it is unclear which nodes (OSDs, MONs, clients, RGW, ...?) are included in the description "all Red Hat Ceph Storage nodes under Red Hat Storage Console's control." I chose to do all OSDs and MONs.

Console & Ansible installer: it is unclear if this is installed on the Console node, on the OSDs, MONs, etc., or on all of the above. I chose to do all OSDs and MONs in addition to ceph-admin (Console).

The doc then drops right into the ISO install section; it should redirect around it.

Sections 2.4 & 2.5 seem to be misplaced; skipped them entirely.
Section 2.6: N/A
Section 2.7: verified
Section 2.8: set up /etc/hosts, then scp'd it to all nodes.

Section 2.9 Configuring Access
Step 1: on which nodes? MONs, OSDs, clients, admin? I checked ALL nodes.
Step 2: done on all MON nodes.
Step 3: how the heck do I know where calamari-lite is running? Ran it on all three.

Getting tired of entering the password every time I change systems: why didn't we set up passwordless ssh?
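The repetition of the Section 2.2 steps across every node could be scripted. Below is a minimal dry-run sketch using the hostnames from this test setup; it only echoes what would run, and for a real pass you would replace `echo` with `ssh root@"$node"` (which assumes passwordless ssh keys are in place, which the doc does not set up):

```shell
# Dry-run sketch: repeat the per-node subscription/update steps on each host.
# Hostnames are the ones from this test environment; adjust to yours.
NODES="mon1 mon2 mon3 osd1 osd2 osd3 client"
count=0
for node in $NODES; do
  # Real run: ssh root@"$node" "subscription-manager repos --enable=rhel-7-server-rpms && yum update -y"
  echo "would run on $node: subscription-manager repos --enable=rhel-7-server-rpms && yum update -y"
  count=$((count + 1))
done
echo "nodes visited: $count"
```

This is only a convenience sketch; the guide itself never states whether looping like this is supported, so verify the per-node steps manually on at least one host first.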
Section 2.10: ntp, done.
Section 2.11: username bubba, pw redhat.

NOTE: The doc keeps using the term "Ceph node" but needs to define whether that means OSDs and MONs only, or also includes RGW, MDS, Console, etc. Too vague.

Section 2.1 step 3: a bit vague.

Section 3: OK, it refers me to the Quick Start Guide, which was the source of many of the issues previously and has not been rewritten.

QSG: skipped through to section 2.4.
2.4.1: the first thing it says is to do these steps AFTER I install the console, so put this section AFTER the install.
2.4.2 & 2.4.3: the wording is confusing. Do I execute on the OSD and MON nodes, or am I executing on the admin node so the MON and OSD nodes can communicate?

I now have no hair left on the right side of my head and it is 11:18 PM. Skipping to section 3, since I need to install before I configure.

Step 5 adds a new repo in addition to the ones I enabled earlier; perhaps I did the earlier ones in error, but that is due to vague instructions. Executed through step 8; this is where I GUESS I need to go back and now execute the firewall settings.

Section 3.2: skipping, but it should say "skip if you are not doing a qcow2 install."
Section 3.3: almost missed this. Have to use the FQDN with Firefox and add exceptions.

Jumping back to install guide section 3.1.1. OK, NOW the doc is getting pedantic, having me relive the steps I spent hours going through before I went to the QSG. Installing and Configuring "As root,..." where? Assuming OSD and MON nodes.

OK, despite all my griping (hey, it's late and I am tired; no offense intended), the nodes are starting to appear in the host list on the console window, so I'd say we have success to this point. Posting these notes for your review and action as you see fit. Will continue tomorrow.

Additional note from yesterday: the ntp section mentions making sure the nodes are peers but then is very subtle about doing this. It should either be explicitly explained or more boldly mentioned as a requirement.
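To make the ntp peering requirement concrete, an /etc/ntp.conf fragment for one node might look like the following. This is only a sketch using the MON hostnames from this test setup; the actual peer layout the guide intends is not stated:

```
# /etc/ntp.conf fragment (sketch): peer this node with the other MONs so
# cluster clocks stay in step. Hostnames are from this test environment.
peer mon1
peer mon2
peer mon3
# After restarting ntpd, 'ntpq -p' should list these peers.
```

Each node would carry peer lines for the other nodes (not itself); the guide should spell out exactly which nodes peer with which.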
It also seems that having ntp running and peering across all nodes is necessary for the cluster to show up as healthy on RHSC 2.

Section 3.1.1 worked as indicated, but when I get to 3.1.2 it points me back to the QSG. Why all this jumping back and forth? It adds to the confusion.

Over to the QSG again, section 5.
Section 5.1.4: change the wording to "Select only one Cluster and one Access network from the list." The current wording is not clear.
Section 7.1 step 4: the PG calc tool has many options and the instructions are unclear. Created obj and rbd pools and an rbd device; all were successful, but the cluster is in WARN status and I cannot track down the what/why.

OK, back to the install guide, section 3.1.3. Oops, it points me back to the QSG. For the love of (fill in your preferred deity or non-deity here), put this in one doc!

Skipping 3.2 (Ansible install).
Skipping 3.3 (CLI install).
Chapter 4: Clients.

Section 3.1.1. Installing and Configuring the Red Hat Storage Console Agent

The first `sudo` command (before `curl`) should not be necessary there:

  $ sudo curl <FQDN_RHS_Console_node>:8181/setup/agent/ | sudo bash

It will work this way too, but it doesn't make much sense to run `curl` with root privileges. This would be slightly clearer:

  $ curl <FQDN_RHS_Console_node>:8181/setup/agent/ | sudo bash

Hi Aron, when can we expect a document fix for this bug?
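A further variant of the suggested pipeline, which some readers may prefer, splits fetch, review, and run into separate steps so the script can be inspected before it executes with root privileges. This is a sketch, not part of the guide: the URL is the placeholder from the comments above, and a `printf` stand-in replaces the real fetch since no Console node is available here:

```shell
# Safer variant of `curl <FQDN_RHS_Console_node>:8181/setup/agent/ | sudo bash`:
# fetch unprivileged, review, then run. Only the final step would need sudo.
script=$(mktemp)
# curl -fsSL "<FQDN_RHS_Console_node>:8181/setup/agent/" -o "$script"  # real fetch
printf 'echo agent-setup-ok\n' > "$script"   # stand-in payload for this sketch
# less "$script"                             # review before executing
out=$(bash "$script")                        # a real run would use: sudo bash "$script"
echo "$out"
rm -f "$script"
```

The trade-off is one extra step for the reader in exchange for not piping an unreviewed network payload straight into a root shell.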