Bug 1593418 - [Doc RFE] Tracker BZ for DFG Workgroup 3.1 documentation requirements
Summary: [Doc RFE] Tracker BZ for DFG Workgroup 3.1 documentation requirements
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.1
Assignee: Anjana Suparna Sriram
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On: 1593868 1602913 1602919 1602921 1602925 1602926 1604031
Blocks: 1523244 1581350
 
Reported: 2018-06-20 18:50 UTC by Anjana Suparna Sriram
Modified: 2019-02-26 07:26 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-26 07:26:40 UTC
Embargoed:



Description Anjana Suparna Sriram 2018-06-20 18:50:58 UTC
User Story: As a storage admin responsible for object storage, I need useful guidelines (scale limits, workload configurations, etc.) to help me design and configure an environment that is not subject to failure or performance degradation.

Master Documents: Object Gateway Guide for RHEL / Object Gateway Guide for Ubuntu

Comment 3 John Harrigan 2018-07-18 17:29:26 UTC
Updates required in "Ceph Object Gateway for Production" guide

1) TOPIC: Determining and applying the expected_num_objects value
   EXPLANATION: Help users avoid Filestore splitting operations, which can
                dramatically slow client I/O performance. While this
                behaviour can affect any Ceph cluster, it is especially
                likely to impact RGW customers, whose pools typically hold
                very large numbers of objects. Guide users through the
                procedure for determining the correct value for
                expected_num_objects and illustrate it with several
                customer use cases.
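
   For example, a minimal sketch (pool name, PG counts, and the expected
   object count are placeholders; pre-splitting only takes effect when the
   Filestore merge threshold is negative):

      # Pass the expected object count as the final argument so Filestore
      # pre-splits PG directories at pool creation time instead of
      # splitting them under client load.
      ceph osd pool create default.rgw.buckets.data 128 128 replicated \
          replicated_rule 1000000000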

2) TOPIC: Deploying with osd_scenario=lvm for optimal hardware usage on OSD nodes
   EXPLANATION: The installer needs to support placing performance-sensitive
                Ceph OSD elements, such as Filestore journals and RGW
                pools (e.g. the bucket index pool), on NVMe devices while
                keeping OSD data on HDDs. Document the rationale for
                deploying Ceph in this way and the actual installation
                procedure for several hardware configurations.
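
   As a rough sketch of the ceph-ansible variables involved (device paths
   are placeholders, and the exact lvm_volumes keys depend on the
   ceph-ansible version in use):

      # group_vars/osds.yml -- illustrative only
      osd_scenario: lvm
      osd_objectstore: filestore
      lvm_volumes:
        - data: /dev/sdb             # OSD data on an HDD
          journal: /dev/nvme0n1p1    # Filestore journal on an NVMe partition
        - data: /dev/sdc
          journal: /dev/nvme0n1p2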

3) TOPIC: Recommended settings and the procedure for applying them
   EXPLANATION: There needs to be a well-documented procedure that details
                how users should apply Ceph settings. Currently users can
                use ceph-ansible, 'ceph tell ... injectargs', or edit
                ceph.conf directly. Determine best practices, verify the
                procedure, and document it. Provide examples for several
                scenarios.
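
   For example, the guide could contrast a runtime change with a persistent
   one (the option name and value here are placeholders):

      # Runtime change, applied to all running OSD daemons but not
      # persisted across restarts:
      ceph tell osd.* injectargs '--osd_max_backfills 1'

      # Persistent change via ceph-ansible: add the option under
      # ceph_conf_overrides in group_vars/all.yml and re-run the playbook:
      #   ceph_conf_overrides:
      #     osd:
      #       osd_max_backfills: 1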
 
4) TOPIC: Monitoring and controlling the GC rate
   EXPLANATION: RGW garbage collection activity can adversely impact client
                I/O performance, and RGW garbage collection statistics are
                not directly exposed to users. An explanation of GC should
                be provided, along with procedures that guide users in
                monitoring and tuning RGW garbage collection.
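
   A sketch of the commands and tunables such a procedure might cover
   (values shown are placeholders, not recommendations):

      # Inspect the garbage collection queue; --include-all also lists
      # entries not yet eligible for processing:
      radosgw-admin gc list --include-all

      # Run a garbage collection pass manually:
      radosgw-admin gc process

      # Related ceph.conf options in the RGW client section:
      #   rgw_gc_max_objs = 32
      #   rgw_gc_obj_min_wait = 7200
      #   rgw_gc_processor_period = 3600
      #   rgw_gc_processor_max_time = 3600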

5) TOPIC: Procedure for creating well-configured RGW pools
   EXPLANATION: RGW client I/O performance is dependent on pool settings.
                Explain the relevant parameters and document ideal pool
                creation parameters for several RGW customer use cases.
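
   For example, a minimal sketch for a default zone (pool names and PG
   counts are placeholders; PG counts should come from the PG calculator
   for the actual cluster):

      ceph osd pool create default.rgw.buckets.index 64 64
      ceph osd pool create default.rgw.buckets.data 512 512

      # On Luminous-based releases, tag the pools with the rgw application:
      ceph osd pool application enable default.rgw.buckets.index rgw
      ceph osd pool application enable default.rgw.buckets.data rgw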

I will be opening BZs for each of these.

- John

