Bug 1123157 - [RFE] Need OSP installer to be able to support adding compute nodes
Summary: [RFE] Need OSP installer to be able to support adding compute nodes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: 5.0 (RHEL 7)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: z1
Target Release: Installer
Assignee: Mike Burns
QA Contact: nlevinki
URL: https://trello.com/c/Td6aJcv3
Whiteboard: MVP
Duplicates: 1123158
Depends On:
Blocks: 1108193 1127423
 
Reported: 2014-07-25 03:05 UTC by arkady kanevsky
Modified: 2016-04-26 14:03 UTC
CC List: 13 users

Fixed In Version: ruby193-rubygem-staypuft-0.3.4-2.el6ost
Doc Type: Enhancement
Doc Text:
Previously, users could not add compute nodes to an existing deployment to increase their cloud capacity. With this enhancement, users can now use the RHEL OpenStack Platform installer to add new compute nodes to existing clouds.
Clone Of:
Clones: 1123158 1127423
Environment:
Last Closed: 2014-10-01 13:25:32 UTC




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:1350 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Bug Fix Advisory 2014-10-01 17:22:34 UTC

Description arkady kanevsky 2014-07-25 03:05:00 UTC
Description of problem:
This is to allow the installer to add a node to a compute group. This includes installing the operating system, any extra libraries needed for Ceph storage, and all network setup. (In the future, also an extra Ceph keyring file if Ceph is one of the block storage back ends for ephemeral storage and/or VM live migration.)

For node deletion, VM migration is NOT part of this RFE. It is assumed that a compute node being removed has no user VMs or containers on it. However, specifying in the OSP installer that a node is removed (or removing it automatically after an administrator-specified timeout), cleaning up the OpenStack database, and cleaning up the OSP installer's own records are part of this RFE.

For node replacement, the installer should provide a way to specify which node in the compute cluster should be replaced; that node could already be down, or still active.
The new node should take on the identity of the replaced node, including its network connections and its IP addresses. For this RFE, assume there are no user VMs or containers on the node. (In the future we should add an RFE for replacing nodes that do have user VMs or containers.) Start with the KVM hypervisor first.

Version-Release number of selected component (if applicable):
N/A

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Mike Burns 2014-07-25 13:38:30 UTC
*** Bug 1123158 has been marked as a duplicate of this bug. ***

Comment 4 Perry Myers 2014-08-06 20:45:01 UTC
We need to treat adding and removing compute nodes as separate bugs because the work involved with each is different.  This bug will track adding new compute nodes.  It will be cloned to a different bug for removal.

Also, replacement == remove + add, so we don't need to track anything special for replacement.

Comment 7 nlevinki 2014-09-26 19:25:44 UTC
I installed 1 controller, 1 Neutron node, and 1 compute node with Ceph as the storage back end for Glance and Cinder.
The installation passed and the Ceph keyring files were deployed on the controller and the compute node.
I then added another compute node in the installer and clicked the Deploy button.
The deployment passed and the Ceph directory was installed on the new compute node.
I also verified that both compute nodes are visible from the Horizon UI.
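The verification above can also be done from the command line. A minimal sketch using the nova client of that era; the keystonerc path is an assumption, and the commands require admin credentials against the live deployment:

```shell
# Sketch only: confirm the newly added compute node registered with Nova.
# Assumes admin credentials in ~/keystonerc_admin (path is an assumption).
source ~/keystonerc_admin

# Both compute nodes should appear with Status "enabled" and State "up".
nova service-list --binary nova-compute

# Both hypervisors should be listed.
nova hypervisor-list
```

This checks registration at the Nova level, complementing the Horizon UI check described above.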

Comment 9 errata-xmlrpc 2014-10-01 13:25:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1350.html

