Bug 1922085

Summary: [RFE] Request to provide an option to specify erasure coded pool as data pool while deploying CephFS using the orchestration CLI
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Hemanth Kumar <hyelloji>
Component: Cephadm
Assignee: Adam King <adking>
Status: NEW
QA Contact: Mohit Bisht <mobisht>
Severity: medium
Docs Contact: Karen Norteman <knortema>
Priority: medium
Version: 5.0
CC: agunn, jolmomar, kdreyer, mgowri, saraut, vereddy
Keywords: FutureFeature
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
URL: https://tracker.ceph.com/issues/50639
Type: Bug
Clones: 1935644 (view as bug list)
Bug Blocks: 1935644    

Comment 5 Juan Miguel Olmo 2021-02-24 17:15:13 UTC
Hemanth Kumar:
I think you are the best person to explain the situation and how to solve it. As you have shown clearly in your "workaround" procedure, there is already a way to use EC pools to back CephFS.

So this is not a cephadm issue; it is a cephadm enhancement. It is scoped for 5.1, but I think it would be nice to explain your procedure in the downstream documentation.

Would you mind following Aron's instructions in https://bugzilla.redhat.com/show_bug.cgi?id=1922085#c2 to document this workaround properly in the downstream documentation?
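The workaround itself is not spelled out in this comment thread. A minimal sketch of the usual manual procedure for backing CephFS with an erasure-coded data pool looks like the following; the pool and filesystem names (`fs1`, `cephfs.fs1.*`) are placeholders, not names taken from this bug:

```shell
# Create a replicated metadata pool and an erasure-coded data pool.
# Pool and filesystem names are hypothetical examples.
ceph osd pool create cephfs.fs1.meta
ceph osd pool create cephfs.fs1.data erasure

# CephFS requires partial overwrites, which EC pools disable by default.
ceph osd pool set cephfs.fs1.data allow_ec_overwrites true

# Create the filesystem manually; --force is required because the
# data pool is erasure coded.
ceph fs new fs1 cephfs.fs1.meta cephfs.fs1.data --force

# Have the orchestrator deploy MDS daemons for the new filesystem.
ceph orch apply mds fs1
```

This bypasses `ceph fs volume create`, which at the time of this bug offered no way to specify an erasure-coded data pool, hence the RFE.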

Comment 6 Hemanth Kumar 2021-03-05 10:29:05 UTC
Created a doc BZ to track this issue until the engineering BZ is fixed: https://bugzilla.redhat.com/show_bug.cgi?id=1935644

Comment 7 Sebastian Wagner 2021-05-04 11:47:25 UTC
Moved to upstream: https://tracker.ceph.com/issues/50639. This is going to require a bigger effort.