Bug 2162229

Summary: Customer unable to add storage devices to their ODF cluster
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: bmcmurra
Component: ocs-operator
Assignee: Mudit Agarwal <muagarwa>
Status: CLOSED DUPLICATE
QA Contact: Elad <ebenahar>
Severity: medium
Priority: unspecified
Version: 4.10
CC: hnallurv, mparida, muagarwa, nigoyal, ocs-bugs, odf-bz-bot, sapillai, tnielsen, uchapaga
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Last Closed: 2023-03-01 17:30:31 UTC
Type: Bug
Regression: ---

Comment 4 Santosh Pillai 2023-01-20 13:41:33 UTC
Some observations from the must gather:

- Looking at the age of the PVCs, it appears that initially only 6 OSDs were running, and 3 more (the last three below) were added later.

openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-0k9bpg   Bound    local-pv-292ebea6                          150Gi      RWO            ocs-local-volume-set                 83d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-12tvn5   Bound    local-pv-c616e187                          150Gi      RWO            ocs-local-volume-set                 83d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-2ss62v   Bound    local-pv-8c593239                          150Gi      RWO            ocs-local-volume-set                 83d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-3wlmmn   Bound    local-pv-99032d1c                          150Gi      RWO            ocs-local-volume-set                 83d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-4rrhjd   Bound    local-pv-100cb8cf                          150Gi      RWO            ocs-local-volume-set                 83d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-58sr6g   Bound    local-pv-b53226fe                          150Gi      RWO            ocs-local-volume-set                 83d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-6x8c28   Bound    local-pv-ed6d428c                          150Gi      RWO            ocs-local-volume-set                 41d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-7gvjw7   Bound    local-pv-d7b1beff                          150Gi      RWO            ocs-local-volume-set                 40d
openshift-storage   ocs-deviceset-ocs-local-volume-set-0-data-8bq8k8   Bound    local-pv-22b40101                          150Gi      RWO            ocs-local-volume-set                 36d

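For reference, a listing like the one above can be reproduced against the cluster with the following (the exact command is assumed, since the must-gather does not record it):

```shell
# List the ODF device-set PVCs across namespaces; the AGE column is what
# suggests the last three OSDs were added roughly 40 days after the first six.
oc get pvc -A | grep ocs-deviceset
```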

- The rook-operator pods seem to have restarted, so their current logs don't capture the updates where the new OSDs were added. No previous logs are available.

- The only plausible explanation is that the user added 3 more OSDs and later reverted the `count` back to 6. That would explain the mismatch.
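For context, the `count` referred to here lives under `storageDeviceSets` in the StorageCluster CR. A minimal sketch follows; the device-set name is inferred from the PVC names above, and `replica: 1` is an assumption for this LSO-backed cluster, not taken from the customer's spec:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset-ocs-local-volume-set  # assumed from the PVC names above
    count: 6       # total OSDs = count x replica; per the theory, briefly raised and then reverted
    replica: 1     # assumed for this cluster
    dataPVCTemplate:
      spec:
        storageClassName: ocs-local-volume-set
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 150Gi
```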

Comment 13 Malay Kumar parida 2023-01-27 05:26:45 UTC
Hey, I am on PTO today. 
Yes, we don't support replica 1 before 4.12, and even in 4.12 it is only dev preview.
I will be able to take a detailed look only on Monday.

Comment 15 Malay Kumar parida 2023-01-30 07:46:28 UTC
Hey Brandon, I see the must-gather that was attached to the case on Jan 18th; I assume there have been further changes to the cluster since then.
Can we get the latest OCS must-gather for accurate debugging?
Thanks

Comment 16 Malay Kumar parida 2023-01-30 08:01:11 UTC
Also, to see information about the attached disks and the LocalVolumeSet, please attach the OCP must-gather as well.
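For reference, both must-gathers are collected with `oc adm must-gather`; the ODF image tag below is what I would expect for a 4.10 cluster and is an assumption:

```shell
# OCP must-gather (uses the default image for the cluster version):
oc adm must-gather

# ODF/OCS must-gather (image tag assumed to match ODF 4.10):
oc adm must-gather --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.10
```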

Comment 23 bmcmurra 2023-03-01 17:30:31 UTC

*** This bug has been marked as a duplicate of bug 2171122 ***