Bug 2151955

Summary: When adding capacity new nodes do not get label openshift-storage=true
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: guy chen <guchen>
Component: unclassified
Assignee: umanga <uchapaga>
Status: CLOSED WONTFIX
QA Contact: Elad <ebenahar>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 4.11
CC: bniver, ocs-bugs, odf-bz-bot, sostapov, tmuthami, uchapaga
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-02-07 05:05:14 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description guy chen 2022-12-08 17:25:53 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

I created 2 local volume sets on 2 different groups of servers.
I installed ODF with 1 of the local volume sets.
After this, I tried to add the second local volume set to the Storage System with the UI console's Add Capacity option.
The Storage System failed to expand and all the new Ceph pods failed with errors.
The issue was that when expanding the system, ODF did not label the newly added servers (cluster.ocs.openshift.io/openshift-storage=true). This caused the Ceph pods to continuously fail on the node selector; once the label was added manually, the problem was solved.

Version of all relevant components (if applicable):
4.11.4

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
There is a workaround

Is there any workaround available to the best of your knowledge?
yes

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3

Can this issue reproducible?
yes

Can this issue reproduce from the UI?
yes

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 2 guy chen 2022-12-08 17:37:14 UTC
Steps to Reproduce:
1. Install OpenShift 4.11.17
2. Install the local storage operator
3. Create 1 local volume set named A with 3 nodes
4. Create a second local volume set named B with 3 different nodes
5. Install the ODF operator with a local storage system on top of local volume set A
6. Enter the console and go to the local storage system
7. Press Add Capacity
8. Choose local volume set B

Actual results:
Local volume set B is not added to the storage system; all new Ceph pods fail because the cluster.ocs.openshift.io/openshift-storage=true label was not added to the nodes of local volume set B.
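A sketch of how the failure would surface with `oc` (the pod name is illustrative; the label key comes from the report above):

```shell
# List the nodes currently carrying the ODF storage label; the local volume
# set B nodes would be missing from this output.
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=true

# Inspect one of the failing Ceph pods; the events would show FailedScheduling
# because the new nodes lack the label required by the pod's node selector.
oc -n openshift-storage describe pod <failing-ceph-pod>
```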

Expected results:
Local volume set B should be added to the storage system.

Comment 9 umanga 2023-02-07 05:05:14 UTC
Adding new nodes is not in the scope of ODF. When new nodes are added to the cluster, ODF does not act on them.
We can't assume why the nodes were added. If they are added to be used by ODF, we recommend manually adding
the label `cluster.ocs.openshift.io/openshift-storage=true`. Only the user can make this decision.

This is documented here: https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.12/html/scaling_storage/scaling_storage_of_bare_metal_openshift_data_foundation_cluster#scaling_out_storage_capacity_on_a_bare_metal_cluster

This is not an issue during new installs, because the ODF UI allows users to select the nodes to be used for ODF and
labels those nodes on the user's behalf.

Closing this as WONTFIX.