Bug 2219524 - [RFE] Raise a Warning when an EC pool is created with failure_domain as OSD
Summary: [RFE] Raise a Warning when an EC pool is created with failure_domain as OSD
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1z2
Assignee: Radoslaw Zarzynski
QA Contact: Pawan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-04 06:54 UTC by Pawan
Modified: 2023-07-14 10:36 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links
Red Hat Issue Tracker RHCEPH-6958 (last updated 2023-07-04 06:57:43 UTC)

Description Pawan 2023-07-04 06:54:00 UTC
Description of problem:
With the new provision for deploying EC pools (4+2, 4+3) on a 4-node cluster, custom CRUSH rules become very important. Without them, all the PGs could be placed on OSDs from the same host, and losing that host might cause data loss.


To address this, a warning should be raised when an EC pool is created with failure_domain set to OSD, alerting the user to the potential data-loss scenario.

Without a custom CRUSH rule, when a 4+2 pool is created on a 4-node cluster with failure domain OSD, we have observed that up to 4 OSDs from the same host are sometimes picked for an acting set. If that node goes down, data recovery is not possible.
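One way to confirm the placement on a live cluster is to list the PGs of the pool and map the OSDs in an acting set back to their hosts. The pool name, PG ID, and OSD ID below are illustrative:

  # List the PGs of the EC pool together with their up/acting sets
  ceph pg ls-by-pool ecpool42

  # For a suspicious PG, print its acting set, then map each OSD back to a host
  ceph pg map 5.1f
  ceph osd find 3        # shows the OSD's CRUSH location, including the host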

A health warning should be generated when pools based on such EC profiles are created without a custom CRUSH rule.
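Until such a warning exists, one workaround is a custom CRUSH rule that spreads the shards across hosts. Below is a sketch for a 4+2 profile on a 4-node cluster, assuming a rule that selects 3 hosts and 2 OSDs from each, so that a single host failure loses at most 2 of the 6 shards (rule name, rule id, file names, and pool name are illustrative):

  # Decompile the current CRUSH map
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt

  # Add a rule along these lines to crushmap.txt:
  #   rule ec42_hosts {
  #       id 99
  #       type erasure
  #       step set_chooseleaf_tries 5
  #       step set_choose_tries 100
  #       step take default
  #       step choose indep 3 type host
  #       step chooseleaf indep 2 type osd
  #       step emit
  #   }

  # Recompile and inject the edited map, then point the EC pool at the new rule
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new
  ceph osd pool set ecpool42 crush_rule ec42_hosts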


Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Deploy a 4-node RHCS cluster.
2. Create a 4+2 EC profile with the failure domain set to OSD.
3. Create a pool using the profile and observe the PG placement (see the command sketch below).
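A command sketch for the steps above (profile and pool names are illustrative):

  # Create the EC profile with failure domain OSD and a pool that uses it
  ceph osd erasure-code-profile set ec42osd k=4 m=2 crush-failure-domain=osd
  ceph osd pool create ecpool42 32 32 erasure ec42osd

  # Check placement and health: acting sets can contain several OSDs from
  # one host, yet no health warning is raised
  ceph pg ls-by-pool ecpool42
  ceph osd tree
  ceph health detail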


Actual results:
No warnings are generated for the above scenario.

Expected results:
A warning should be generated for the above scenario.

Additional info:

