Bug 1249557 - [RFE] install OSD's on multiple disks
Status: CLOSED WORKSFORME
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Release: 7.0 (Kilo)
Assigned To: chris alfonso
QA Contact: Yogev Rabl
Whiteboard: RFE's for Ceph installation with OSP ...
Keywords: FutureFeature, ZStream
Reported: 2015-08-03 06:04 EDT by Yogev Rabl
Modified: 2016-01-31 21:37 EST
CC: 6 users

Doc Type: Enhancement
Last Closed: 2015-12-02 08:04:19 EST
Type: Bug


Attachments: None
Description Yogev Rabl 2015-08-03 06:04:10 EDT
Description of problem:
The OSP director installs a single Ceph OSD on each Ceph storage node. A single storage server (or even a simple JBOD) can run multiple OSDs, with each OSD on a different disk.

At the moment (OSP director version: 

openstack-heat-api-cfn-2015.1.0-4.el7ost.noarch
python-tuskarclient-0.1.18-3.el7ost.noarch
openstack-heat-api-cloudwatch-2015.1.0-4.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-45.el7ost.noarch
python-heatclient-0.6.0-1.el7ost.noarch
openstack-heat-templates-0-0.6.20150605git.el7ost.noarch
openstack-tuskar-ui-extras-0.0.4-1.el7ost.noarch
openstack-heat-api-2015.1.0-4.el7ost.noarch
openstack-heat-common-2015.1.0-4.el7ost.noarch
openstack-tuskar-ui-0.3.0-13.el7ost.noarch
openstack-heat-engine-2015.1.0-4.el7ost.noarch
openstack-tuskar-0.4.18-3.el7ost.noarch ) 

the director installs the OSD and runs it on the same disk as the host's OS, and there are no additional parameters for the case where the host has additional hard disks that could be used for the OSDs. This is far from ideal, given the potential workload the OSDs place on that disk.

Expected results:
The Director should have parameters that allow the user to install one or more Ceph OSDs on separate disks.
Comment 4 Giulio Fidente 2015-08-14 11:59:35 EDT
It is currently possible to use multiple disks to set up multiple Ceph OSDs on a single Ceph storage node, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/sect-Advanced-Scenario_3_Using_the_CLI_to_Create_an_Advanced_Overcloud_with_Ceph_Nodes.html#sect-Advanced-Configuring_Ceph_Storage

Is that what the RFE is about?
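For context, the procedure in the linked documentation amounts to passing an OSD-to-disk mapping to the Ceph storage nodes through an extra Heat environment file, using the `ceph::profile::params::osds` hiera key consumed by puppet-ceph. A minimal sketch is below; the file name and the device paths (`/dev/sdb`, `/dev/sdc`, `/dev/sdd1`) are placeholders for whatever disks the storage nodes actually have:

```yaml
# ~/templates/ceph-osds.yaml (hypothetical file name)
# Maps each OSD to its own data disk; an empty journal entry
# places the journal on the same disk as the OSD data.
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osds:
      '/dev/sdb':
        journal: ''
      '/dev/sdc':
        journal: '/dev/sdd1'   # optional: dedicated journal partition
```

The environment file would then be included at deploy time with the standard `-e` option, e.g. `openstack overcloud deploy --templates -e ~/templates/ceph-osds.yaml`.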
Comment 5 Giulio Fidente 2015-12-02 04:18:41 EST
I think we should close this as WORKSFORME; we can install multiple OSDs on a single cephstorage node already.
Comment 6 Scott Lewis 2015-12-02 07:59:14 EST
(In reply to Giulio Fidente from comment #5)
> I think we should close this as WORKSFORME; we can install multiple OSDs on
> a single cephstorage node already.

Giulio, this is up to you as the technical expert.
