Bug 1938078

Summary: [RFE] UI deployer would search for a pre-existing ConfigMap and, if present, set the parameters to "ignore" (continuation from Bug 1914475#c22)
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Neha Berry <nberry>
Component: management-console
Assignee: Sanjal Katiyar <skatiyar>
Status: CLOSED WONTFIX
QA Contact: Elad <ebenahar>
Severity: low
Priority: low
Version: 4.7
CC: etamir, gshanmug, jefbrown, jrivera, madam, muagarwa, nthomas, ocs-bugs, odf-bz-bot, omitrani, sostapov, ygalanti
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Flags: etamir: needinfo+
       afrahman: needinfo? (ygalanti)
Hardware: Unspecified
OS: Unspecified
Last Closed: 2022-05-25 06:27:45 UTC
Type: Bug

Description Neha Berry 2021-03-12 05:50:14 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
============================================================================
Raising this bug as per the comments added by JC Lopez in Bug 1914475#c22

Rook offers a configmap to pass specific parameters into the Ceph configuration: cm/rook-config-override

Details from Bug 1914475#c20: With the fix in Bug 1914475, for CLI-based deployments, if one sets the following in storagecluster.yaml, any custom parameters in a manually created ConfigMap (oc get cm rook-config-override -o yaml) will not be overridden and stay intact:

cephConfig:
  reconcileStrategy: ignore


But the same is not true for UI-based installs.
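
For illustration, here is a minimal sketch of a complete StorageCluster manifest carrying this setting; the metadata values below are the usual defaults and are assumptions here, not taken from this bug:

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster       # typical default name (assumption)
  namespace: openshift-storage   # typical install namespace (assumption)
spec:
  managedResources:
    cephConfig:
      reconcileStrategy: ignore  # leave cm/rook-config-override untouched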


Version of all relevant components (if applicable):
========================================================
OCP 4.7 and OCS 4.7


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
========================================================================
It affects POCs that use the UI-based install.

Is there any workaround available to the best of your knowledge?
========================================================
Yes, use a CLI-based install.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
===============================================================
4

Is this issue reproducible?
============================
Yes


Can this issue be reproduced from the UI?
=============================================
Yes

If this is a regression, please provide more details to justify this:
===================================================================
No

Steps to Reproduce:
========================


1. Deploy the OCS operator.
2. Create a custom cm/rook-config-override (see Bug 1914475#c21 for custom config examples; a sketch follows these steps).
3. Create the storage cluster from the UI.

Actual results:
=======================
Deployment through the UI overwrites the custom config in the ConfigMap.

A successful deployment therefore requires setting the following in storagecluster.yaml:
  managedResources:
    cephConfig:
      reconcileStrategy: ignore


Expected results:
========================
The UI deployer should search for a pre-existing ConfigMap and, if one is present, either set reconcileStrategy to "ignore" automatically or merge the ConfigMap with the default generated one.


Additional info:
==========================
Comment from JC Lopez: 

Deployment through the UI overwrites my custom config.

So a successful deployment requires setting the following in the storagecluster.yaml:
  managedResources:
    cephConfig:
      reconcileStrategy: ignore

This is a minor detail for me as of now, as it only affects specific cases.

For the future, it would be great if the UI deployer searched for a pre-existing ConfigMap and, if present, set the parameters to "ignore" automatically or merged the ConfigMap with the default generated one.


I would recommend moving this to VERIFIED for OCS 4.7 and considering the above evolution suggestion for OCS 4.8.

Comment 2 Neha Berry 2021-03-12 05:53:34 UTC
*** Bug 1938079 has been marked as a duplicate of this bug. ***

Comment 3 Nishanth Thomas 2021-03-16 10:01:26 UTC
@etamir , thoughts?

Comment 6 Jean-Charles Lopez 2021-03-22 21:23:10 UTC
We have two options for how we can do it from the UI perspective:
1) Create the config externally via the CLI, then go back to the UI
2) Build an option into the UI so the UI creates the config

For now, the UI simply overwrites what we have set in the config.

Will retest with the RC when it arrives to confirm the behavior.

To the best of my knowledge, as long as we do not have the easy two-worker-node deployment option from the UI, plus the other things we have discussed here and there, this customization will mainly be needed from the CLI anyway.

Comment 9 gowtham 2021-04-19 06:31:02 UTC
I agree with Eran's point. Also, what happens if the backend configuration is created after cluster creation from the UI? The issue still remains. I don't think the UI is the correct service to keep track of backend configurations and update the YAML.

As Eran suggested, once a deployment has started in the CLI, it is better to follow the full CLI deployment. It is still possible to change the flag in the YAML using the CLI, as sketched below.
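
As a sketch of that last point, and assuming the default StorageCluster name and namespace (neither is stated in this bug), the flag can be flipped with a merge patch:

oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
  -p '{"spec":{"managedResources":{"cephConfig":{"reconcileStrategy":"ignore"}}}}'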

@nberry

Comment 14 Mudit Agarwal 2021-09-24 16:41:02 UTC
Webhooks are not part of 4.9

Comment 18 Jose A. Rivera 2022-01-20 15:57:40 UTC
The following line shows that if we do find an extant ConfigMap and it does not have our expected configuration or OwnerReference, we override its contents: https://github.com/red-hat-storage/ocs-operator/blob/main/controllers/storagecluster/cephconfig.go#L96

So I'll just confirm JC's initial request: the proper way to fix this would be for the UI to offer an option that sets the following in its StorageCluster definition:

cephConfig:
  reconcileStrategy: ignore

Still, this is mostly a nice-to-have for deployment scenarios that fall outside our officially supported use cases. It would still be a *good* thing to have, but it is not required for ODF 4.10 at this point. Moving to ODF 4.11 and changing the component to management-console.