Bug 1123157

Summary: [RFE] Need OSP installer to be able to support adding compute nodes
Product: Red Hat OpenStack
Reporter: arkady kanevsky <arkady_kanevsky>
Component: rubygem-staypuft
Assignee: Mike Burns <mburns>
Status: CLOSED ERRATA
QA Contact: nlevinki <nlevinki>
Severity: urgent
Priority: urgent
Version: 5.0 (RHEL 7)
CC: aberezin, ajeain, cdevine, christopher_dearborn, jdonohue, kschinck, mburns, nlevine, randy_perryman, rhos-maint, slong, sreichar, yeylon
Target Milestone: z1
Keywords: FutureFeature
Target Release: Installer
Hardware: x86_64
OS: Linux
URL: https://trello.com/c/Td6aJcv3
Whiteboard: MVP
Fixed In Version: ruby193-rubygem-staypuft-0.3.4-2.el6ost
Doc Type: Enhancement
Doc Text:
Previously, users could not add compute nodes to an existing deployment to increase their cloud capacity. With this enhancement, users can now use the RHEL OpenStack Platform installer to add new compute nodes to existing clouds.
Clones: 1123158, 1127423 (view as bug list)
Last Closed: 2014-10-01 13:25:32 UTC
Type: Bug
Bug Blocks: 1108193, 1127423

Description arkady kanevsky 2014-07-25 03:05:00 UTC
Description of problem:
This is to allow the installer to add a node to a compute group. This includes installing the operating system, any extra libraries needed for Ceph storage, and all network setup. (In the future, this should also cover the extra Ceph keyring file, if Ceph is one of the block storage back ends for ephemeral storage and/or VM live migration.)
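
For illustration only, a minimal sketch of the kind of Nova settings the installer would need to lay down on each compute node for RBD-backed ephemeral storage. The section name, option names, pool name, and Ceph user below are assumptions (they vary by OpenStack release and deployment), not something specified by this RFE:

    # Assumed Icehouse-era compute-node settings; older releases used
    # libvirt_images_type etc. under [DEFAULT] instead of the [libvirt] section.
    openstack-config --set /etc/nova/nova.conf libvirt images_type rbd
    openstack-config --set /etc/nova/nova.conf libvirt images_rbd_pool vms
    openstack-config --set /etc/nova/nova.conf libvirt images_rbd_ceph_conf /etc/ceph/ceph.conf
    openstack-config --set /etc/nova/nova.conf libvirt rbd_user cinder
    openstack-config --set /etc/nova/nova.conf libvirt rbd_secret_uuid <libvirt-secret-uuid>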

For node deletion, VM migration is NOT part of this RFE. It is assumed that when a compute node is removed, it does not have any user VMs or containers on it. However, marking the node as removed in the OSP installer (either explicitly or automatically after an administrator-specified timeout), cleaning up the OpenStack database, and cleaning up the OSP installer's own records are part of this RFE.
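
As a rough sketch of what the OpenStack-side cleanup amounts to (the commands below are an assumption based on the standard nova and neutron CLIs, not part of this RFE; <removed-hostname>, <service-id>, and <agent-id> are placeholders):

    # find and remove the stale nova-compute service record
    nova service-list --binary nova-compute
    nova service-disable <removed-hostname> nova-compute
    nova service-delete <service-id>
    # remove the stale Neutron agent record for the host, if any
    neutron agent-list | grep <removed-hostname>
    neutron agent-delete <agent-id>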

For node replacement, the installer should provide a way to specify which node in the compute cluster should be replaced; that node could already be down, or still active.
The new node should take on the identity of the replaced node, including its network connections and its IP addresses. For this RFE, assume that there are no user VMs or containers on the node; a check along the lines of the sketch below would confirm that. (In the future we should add an RFE for replacing nodes that do have user VMs or containers.) Start with the KVM hypervisor first.
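
A hedged sketch of checking that assumption before replacing (or deleting) a node, using the standard nova CLI; <hostname> is a placeholder and this is not part of the RFE itself:

    # list any instances still running on the node to be replaced
    nova hypervisor-servers <hostname>
    # if anything is listed, it must be migrated or deleted first (out of scope here)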

Version-Release number of selected component (if applicable):
N/A

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Mike Burns 2014-07-25 13:38:30 UTC
*** Bug 1123158 has been marked as a duplicate of this bug. ***

Comment 4 Perry Myers 2014-08-06 20:45:01 UTC
We need to treat adding and removing compute nodes as separate bugs because the work involved with each is different.  This bug will track adding new compute nodes.  It will be cloned to a different bug for removing.

Also, replacement == remove + add, so we don't need to track anything special for replacement.

Comment 7 nlevinki 2014-09-26 19:25:44 UTC
I installed 1 controller, 1 Neutron node, and 1 compute node, with Ceph as the storage back end for Glance and Cinder.
The installation passed, and the Ceph keyring files were deployed on the controller and the compute node.
I then added another compute node in the installer and clicked the Deploy button.
The deployment passed, and the Ceph directory was set up on the new compute node.
I also verified from the Horizon UI that both compute nodes are visible.
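
For reference, the same verification can also be done from the command line; this is a hedged sketch using the standard nova CLI and a root shell on the new node, not part of the original verification steps:

    # both compute nodes should report an enabled, up nova-compute service
    nova service-list --binary nova-compute
    # both hypervisors should be registered
    nova hypervisor-list
    # on the new compute node, the Ceph configuration/keyring files should be in place
    ls /etc/ceph/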

Comment 9 errata-xmlrpc 2014-10-01 13:25:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1350.html