Bug 1249557 - [RFE] install OSDs on multiple disks
Summary: [RFE] install OSDs on multiple disks
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 7.0 (Kilo)
Assignee: chris alfonso
QA Contact: Yogev Rabl
URL:
Whiteboard: RFE's for Ceph installation with OSP ...
Depends On:
Blocks:
 
Reported: 2015-08-03 10:04 UTC by Yogev Rabl
Modified: 2016-02-01 02:37 UTC (History)
6 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-02 13:04:19 UTC
Target Upstream Version:
Embargoed:



Description Yogev Rabl 2015-08-03 10:04:10 UTC
Description of problem:
The OSP director installs a single Ceph OSD on each Ceph storage node. A single storage server (or even a simple JBOD) can run multiple OSDs, with each OSD on a different disk.

At the moment (OSP director version: 

openstack-heat-api-cfn-2015.1.0-4.el7ost.noarch
python-tuskarclient-0.1.18-3.el7ost.noarch
openstack-heat-api-cloudwatch-2015.1.0-4.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-45.el7ost.noarch
python-heatclient-0.6.0-1.el7ost.noarch
openstack-heat-templates-0-0.6.20150605git.el7ost.noarch
openstack-tuskar-ui-extras-0.0.4-1.el7ost.noarch
openstack-heat-api-2015.1.0-4.el7ost.noarch
openstack-heat-common-2015.1.0-4.el7ost.noarch
openstack-tuskar-ui-0.3.0-13.el7ost.noarch
openstack-heat-engine-2015.1.0-4.el7ost.noarch
openstack-tuskar-0.4.18-3.el7ost.noarch ) 

the director installs the OSD and runs it on the same disk as the host's operating system, and there are no additional parameters for the case where the host has additional hard disks that could be used for the OSDs. This is far from ideal, given the potential workload the OSD places on that disk.

Expected results:
The director should provide parameters that allow the user to install one or more Ceph OSDs on separate disks.

Comment 4 Giulio Fidente 2015-08-14 15:59:35 UTC
It is currently possible to use multiple disks to set up multiple Ceph OSDs on a single Ceph storage node, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/sect-Advanced-Scenario_3_Using_the_CLI_to_Create_an_Advanced_Overcloud_with_Ceph_Nodes.html#sect-Advanced-Configuring_Ceph_Storage

Is that what the RFE is about?
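
For reference, the approach described in that documentation maps extra block devices to OSDs through a Heat environment file passed to the overcloud deploy. The following is a sketch based on that doc section; the device paths /dev/sdb and /dev/sdc are illustrative and must match the actual disks on the Ceph storage nodes:

```yaml
# storage-environment.yaml (example name)
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      # each key is a block device dedicated to one OSD;
      # an empty hash uses the default (collocated) journal
      '/dev/sdb': {}
      '/dev/sdc': {}
```

The file is then included at deploy time with an extra `-e storage-environment.yaml` argument to `openstack overcloud deploy`.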

Comment 5 Giulio Fidente 2015-12-02 09:18:41 UTC
I think we should close this as WORKSFORME; we can install multiple OSDs on a single cephstorage node already.

Comment 6 Scott Lewis 2015-12-02 12:59:14 UTC
(In reply to Giulio Fidente from comment #5)
> I think we should close this as WORKSFORME; we can install multiple OSDs on
> a single cephstorage node already.

Giulio, this is up to you as the technical expert.

