Bug 1973150

Summary: [DDF] After adding new hdd to all cluster nodes and running add-osd ansible it decides that all existing osd containers must be restarted
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Direct Docs Feedback <ddf-bot>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Ameena Suhani S H <amsyedha>
Severity: high
Docs Contact:
Priority: unspecified    
Version: 4.2
CC: aschoen, ceph-eng-bugs, gabrioux, gmeno, kdreyer, mhull, nthomas, ykaul
Target Milestone: ---   
Target Release: 4.3z1   
Hardware: All   
OS: All   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 2009070 (view as bug list)
Environment:
Last Closed: 2021-09-29 21:19:20 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1966534, 2009070    

Description Direct Docs Feedback 2021-06-17 10:41:51 UTC
After adding a new HDD to all cluster nodes and running the add-osd Ansible playbook, it decides that all existing OSD containers must be restarted. After the first restarts it forces PGs into a degraded state that takes roughly two days to recover, and add-osd fails. This problem is observed in 4.0/4.1/4.2.

Reported by: kricud180

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/operations_guide/managing-the-storage-cluster-size#annotations:64e6cfa3-f33a-4400-bc66-2fc27bbed5e3
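
For context, a minimal command sketch of the scenario and a commonly used precaution, assuming a containerized RHCS 4 cluster deployed from /usr/share/ceph-ansible with an inventory file named "hosts"; the use of site-container.yml with --limit (rather than the add-osd playbook the reporter ran) and the noout/norebalance flags are illustrative assumptions, not steps confirmed by this bug:

    # Assumption: the new HDDs are already attached to the hosts in the [osds] inventory group.
    # The ceph commands assume a node with admin keyring access (e.g. exec into a mon container
    # on containerized deployments).
    cd /usr/share/ceph-ansible

    # Optional precaution: pause out-marking and rebalancing while OSD containers restart.
    ceph osd set noout
    ceph osd set norebalance

    # Run only the OSD part of the playbook; --limit restricts the play to the OSD hosts.
    ansible-playbook -vvv -i hosts site-container.yml --limit osds

    # Restore normal recovery behaviour once the new OSDs are up and in.
    ceph osd unset norebalance
    ceph osd unset noout

    # Confirm PGs return to active+clean and the new OSDs appear in the CRUSH tree.
    ceph -s
    ceph osd tree

Even with these flags set, restarting every existing OSD container at once can still leave PGs degraded until peering and recovery finish, which matches the behaviour described above.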

Comment 1 RHEL Program Management 2021-06-17 10:41:56 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.