Bug 2109101

Summary: [GSS][RFE] Add ability to configure Noobaa-endpoint pod in HA (update Pod Topology Spread Constraints)
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Priya Pandey <prpandey>
Component: Multi-Cloud Object Gateway
Assignee: Naveen Paul <napaul>
Status: ON_QA
QA Contact: krishnaram Karthick <kramdoss>
Severity: low
Docs Contact:
Priority: unspecified
Version: 4.10
CC: dzaken, nbecker, odf-bz-bot
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Take advantage of Pod Topology Spread Constraints so that endpoint autoscaling (and auto downscaling) is balanced across all the OCP nodes and the endpoint pods do not all end up on the same node.
Reason: To utilize resources in the best way, endpoints should be spread across the different nodes rather than all running on the same one.
Result:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Priya Pandey 2022-07-20 12:12:57 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

- The Noobaa-endpoint pods should be configurable for HA (high availability).

- An HPA is configured for the noobaa-endpoint deployment, but its parameters cannot be customized.

- The MINPODS and MAXPODS values are set to 1 and 2 respectively; any attempt to change these values is reverted by reconciliation.

- The requirement is the ability to configure the noobaa-endpoint pods for high availability.

- MINPODS and MAXPODS should be configurable so that the NooBaa services are not impacted during any activity on the cluster.

- In addition, pod anti-affinity (or a topology spread constraint) could be set on the endpoint pods so that no two of them run on the same node; see the sketch after this list.

- In this way, HA can be maintained for noobaa-endpoint.
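
For illustration only, a topology spread constraint on the endpoint Deployment's pod template would look roughly like the sketch below. This is not the operator's actual API for configuring it; the label selector is an assumption and would need to match the labels actually set on the noobaa-endpoint pods.

    spec:
      template:
        spec:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname   # spread across nodes
            whenUnsatisfiable: ScheduleAnyway     # prefer spreading, do not block scheduling
            labelSelector:
              matchLabels:
                noobaa-s3: noobaa                 # assumed endpoint pod label

A preferred podAntiAffinity rule on topologyKey kubernetes.io/hostname would achieve a similar "no two endpoint pods on one node" effect.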


Version of all relevant components (if applicable):

v4.10

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

- During upgrades the pods get restarted; if noobaa-endpoint is configured for HA, the services won't be impacted by these pod restarts.

Is there any workaround available to the best of your knowledge?
N/A

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
No

Steps to Reproduce:
1. Edit the HPA for the noobaa-endpoint deployment with the required minReplicas/maxReplicas values (see the sketch below).
2. Verify the changes in the HPA.
3. Observe that the HPA values are reconciled (reverted).
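
To illustrate step 1, an edited HPA could look roughly like the sketch below; the object name, namespace, replica values and API version are assumptions, not the exact objects shipped by the operator. At present any such edit is reverted by reconciliation.

    apiVersion: autoscaling/v2        # may be autoscaling/v1 depending on the cluster version
    kind: HorizontalPodAutoscaler
    metadata:
      name: noobaa-endpoint           # assumed HPA name
      namespace: openshift-storage    # assumed namespace
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: noobaa-endpoint
      minReplicas: 3                  # desired value; currently forced back to 1
      maxReplicas: 6                  # desired value; currently forced back to 2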


Actual results:

- The HPA parameters are reconciled back to their original values.

Expected results:

- The HPA parameters should be configurable.

Additional info:

- In the next comments.