Bug 1305198 - Bootstrap nodes with Red Hat Storage Controller agent. Configure agent to connect to controller
Summary: Bootstrap nodes with Red Hat Storage Controller agent. Configure agent to connect to controller
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-installer
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Alfredo Deza
QA Contact: Daniel Horák
URL:
Whiteboard:
Duplicates: 1213031
Depends On: 1306048
Blocks:
 
Reported: 2016-02-06 01:50 UTC by Christina Meno
Modified: 2016-08-23 19:46 UTC (History)
12 users

Fixed In Version: ceph-installer-1.0.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 19:46:18 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:1754 normal SHIPPED_LIVE New packages: Red Hat Storage Console 2.0 2017-04-18 19:09:06 UTC

Description Christina Meno 2016-02-06 01:50:57 UTC
Description of problem:

The Red Hat Ceph Storage installer will provide a mechanism to bootstrap nodes. When a cluster node requests the bootstrap endpoint, we will serve a bash script that configures Ansible access for the Red Hat Ceph Storage installer.

We will then install the Salt minion and the RHS Console agent, and configure the Salt minion to point to the RHS Controller server.

Additional info:

https://docs.google.com/document/d/1gtId6oTkNPTjAC8EuDLt-q-6TyZBO8XhPrsBKNEQKNU/edit?ts=56b42796
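
A sketch of the intended flow, with a hypothetical installer host, port, and endpoint path (the real values are defined by ceph-installer): a node fetches the bootstrap script from the installer, and the script's end state is a Salt minion configuration that points the agent back at the controller.

```shell
#!/bin/sh
# Hypothetical installer address; substitute the real controller host.
INSTALLER="http://rhscon.example.com:8181"

# Step 1 (sketch, not executed here): fetch and run the bootstrap script.
#   curl -fsSL "${INSTALLER}/setup/" | sudo sh

# Step 2: the bootstrap's end result is a Salt minion config line pointing
# the minion at the controller ("master"); render that line as a sketch.
MASTER="rhscon.example.com"
MINION_LINE="master: ${MASTER}"
echo "$MINION_LINE"
```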



Comment 2 Christina Meno 2016-02-09 17:35:15 UTC
*** Bug 1213031 has been marked as a duplicate of this bug. ***

Comment 3 Alfredo Deza 2016-02-10 13:38:24 UTC
Initial take on the bootstrap script:

https://github.com/ceph/mariner-installer/pull/25

Comment 5 Alfredo Deza 2016-02-11 17:25:20 UTC
Greg: is there a general term that the community understands for what the "agent" is? Would that be "calamari-lite"?

In order to get the calamari-lite portion done, we need to implement it in the ansible playbooks; those also need to be named appropriately for general consumption, as does the API endpoint that will serve this.

Comment 6 Christina Meno 2016-02-11 19:55:31 UTC
Alfredo: the agent is a package that the Red Hat Storage Controller project provides; it is distinct from calamari-lite.

find it here: https://brewweb.devel.redhat.com/buildinfo?buildID=472921

Assume for now that the package name for calamari-lite is calamari-server;
find it here:
https://chacra.ceph.com/r/calamari/on_monitor/fedora/22/x86_64/calamari-server-1.4-rc1_78_gab2cadf.el7.centos.x86_64.rpm

Comment 7 Alfredo Deza 2016-02-12 14:42:20 UTC
I don't see any mention of DEB packages or Ubuntu support.

* Are those to be ignored, or do they live somewhere else?

If I recall correctly, there aren't any Calamari packages upstream. If that is the case, we wouldn't be able to provide a solution that works for a community/upstream consumer. Is that correct?

Comment 8 Christina Meno 2016-02-12 17:28:48 UTC
We do need to support Ubuntu LTS; Ken will likely be building that.

Yes, we need to get a calamari-lite release out for upstream too.

Comment 9 Alfredo Deza 2016-02-12 21:01:50 UTC
Pull request that implements the functionality to make an agent connect back to a master server:

https://github.com/ceph/ceph-ansible/pull/539

Comment 10 Alfredo Deza 2016-02-17 12:54:24 UTC
Support for both the kickstart script for agent support and API endpoints to configure the agent on a remote node has been completed.

Pull Request: https://github.com/ceph/ceph-installer/pull/42

Docs to follow.

Comment 11 Harish NV Rao 2016-03-04 09:26:44 UTC
(In reply to Alfredo Deza from comment #10)
> Support for both the kickstart script for agent support and API endpoints to
> configure the agent on a remote node has been completed.
> 
> Pull Request: https://github.com/ceph/ceph-installer/pull/42
> 
> Docs to follow.
Can you please let me know when the docs will be available?

Comment 12 Alfredo Deza 2016-03-04 14:17:10 UTC
(In reply to Harish NV Rao from comment #11)
> (In reply to Alfredo Deza from comment #10)
> > Support for both the kickstart script for agent support and API endpoints to
> > configure the agent on a remote node has been completed.
> > 
> > Pull Request: https://github.com/ceph/ceph-installer/pull/42
> > 
> > Docs to follow.
> Can you please let me know when will the docs be available?

They are now: http://docs.ceph.com/ceph-installer/docs/#get--setup-agent-
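
For reference, a hedged example of exercising that endpoint from a node. The host and port below are placeholders; consult the linked docs for the authoritative parameters.

```shell
#!/bin/sh
# Placeholder address for the ceph-installer service.
INSTALLER="http://installer.example.com:8181"

# The documented GET endpoint that serves the agent setup script.
AGENT_URL="${INSTALLER}/setup/agent/"

# Sketch of actual use (requires a live ceph-installer service):
#   curl -fsSL "$AGENT_URL" | sudo sh
echo "$AGENT_URL"
```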

Comment 22 Daniel Horák 2016-08-02 09:08:31 UTC
Tested and VERIFIED on:
  USM Server/ceph-installer server (RHEL 7.2):
  ceph-ansible-1.0.5-31.el7scon.noarch
  ceph-installer-1.0.14-1.el7scon.noarch
  rhscon-ceph-0.0.39-1.el7scon.x86_64
  rhscon-core-0.0.39-1.el7scon.x86_64
  rhscon-core-selinux-0.0.39-1.el7scon.noarch
  rhscon-ui-0.0.51-1.el7scon.noarch

  Ceph nodes (RHEL 7.2):
  rhscon-agent-0.0.16-1.el7scon.noarch
  rhscon-core-selinux-0.0.39-1.el7scon.noarch
  salt-2015.5.5-1.el7.noarch
  salt-minion-2015.5.5-1.el7.noarch
  salt-selinux-0.0.39-1.el7scon.noarch

I'll retest it also with Ubuntu nodes.

Comment 23 Daniel Horák 2016-08-02 11:35:38 UTC
Tested on Ubuntu 16.04 (as Ceph Nodes):
  rhscon-agent 0.0.16-2redhat1xenial
  salt-common  2015.8.8+ds-1
  salt-minion  2015.8.8+ds-1

GET requests to "/setup/agent/" on the ceph-installer server work as expected on all tested platforms.

The rhscon-agent and salt-minion packages are properly installed and configured.

>> VERIFIED
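
A quick sanity check of that configuration on a node can be sketched like this; the config path and keys are assumptions based on Salt's defaults, with a sample file standing in for /etc/salt/minion.

```shell
#!/bin/sh
# Sketch: parse the "master:" line out of a minion config to confirm which
# controller the agent reports back to. A temp file stands in for the real
# /etc/salt/minion (path assumed, Salt's default).
conf="$(mktemp)"
printf 'id: ceph-node-01\nmaster: rhscon.example.com\n' > "$conf"

MASTER=$(awk '/^master:/ {print $2}' "$conf")
echo "agent reports to: $MASTER"
rm -f "$conf"
```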

Comment 25 errata-xmlrpc 2016-08-23 19:46:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1754

