Bug 2215367

Summary: [DDF] This DOES NOT WORK. I cannot keep this thing from automagically adding OSDs all over the place (and messing them all up)
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Direct Docs Feedback <ddf-bot>
Component: Documentation
Documentation sub component: DDF
Assignee: Ranjini M N <rmandyam>
QA Contact: Manisha Saini <msaini>
Status: RELEASE_PENDING
Severity: medium
Priority: unspecified
CC: adking, rmandyam, vdas
Version: 5.0
Target Milestone: ---
Target Release: 6.1z1
Hardware: All
OS: All

Description Direct Docs Feedback 2023-06-15 17:44:47 UTC
This DOES NOT WORK. I cannot keep this thing from automagically adding OSDs all over the place (and messing them all up).
Why are you so dead set against a systematic, staged bring-up, one step at a time, making sure each step is actually WORKING before doing 40,000 other things? If 1 OSD failed, WHY DID YOU START 60 MORE?

Reported by: mhall_spirent

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html/operations_guide/management-of-osds-using-the-ceph-orchestrator#annotations:0c61fb6a-b8ad-4b77-b3ae-d1c3d226d090
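
For context, the behavior the reporter is hitting is cephadm's default all-available-devices OSD service, which keeps consuming any eligible disk it finds. A minimal sketch of how to pause that automation, based on the upstream ceph orch commands and assuming the cluster was deployed with the default spec (exact service names can differ per cluster):

    # List the OSD services the orchestrator is currently managing
    ceph orch ls osd

    # Tell the orchestrator to stop creating new OSDs on its own;
    # existing OSDs keep running, they are just no longer auto-expanded
    ceph orch apply osd --all-available-devices --unmanaged=true

With the service marked unmanaged, new disks have to be added explicitly (for example with ceph orch daemon add osd), which gives the staged, one-step-at-a-time bring-up the reporter is asking for.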

Comment 1 RHEL Program Management 2023-06-15 17:44:55 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.