Bug 2109101 - [GSS][RFE] Add ability to configure Noobaa-endpoint pod in HA (update Pod Topology Spread Constraints)
Keywords:
Status: ON_QA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Naveen Paul
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-07-20 12:12 UTC by Priya Pandey
Modified: 2023-08-09 16:49 UTC
CC: 3 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Take advantage of Pod Topology Spread Constraints to ensure that endpoint autoscaling (and automatic downscaling) is balanced across all the OCP nodes, so that the endpoints do not all end up on the same node. Reason: To utilize resources in the best way, endpoints should be spread across the different nodes rather than all running on the same one. Result:
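The spread described above could be expressed with a constraint of roughly the following shape. This is a sketch only; the label selector and field values are assumptions for illustration, not necessarily what the noobaa-operator actually sets:

```yaml
# Illustrative sketch of a pod topology spread constraint; the pod label
# (noobaa-s3: noobaa) and the chosen values are assumptions.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname   # spread endpoint pods across nodes
    whenUnsatisfiable: ScheduleAnyway     # prefer spreading, but do not block scheduling
    labelSelector:
      matchLabels:
        noobaa-s3: noobaa                 # assumed label on the endpoint pods
```

With `whenUnsatisfiable: ScheduleAnyway`, the scheduler treats the spread as a soft preference, so scaling still succeeds on clusters with fewer nodes than replicas.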
Clone Of:
Environment:
Last Closed:
Embargoed:




Links:
- Github noobaa/noobaa-operator pull 1138 (open): Endpoints topology-spread-constraints added (last updated 2023-07-12 06:39:21 UTC)
- Github noobaa/noobaa-operator pull 1174 (Merged): Backport to 5.14 (last updated 2023-07-19 09:27:13 UTC)

Description Priya Pandey 2022-07-20 12:12:57 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

- The noobaa-endpoint pod should be configurable for high availability (HA).

- An HPA is configured for the noobaa-endpoint deployment, but its parameters cannot be changed.

- The MINPODS and MAXPODS values are fixed at 1 and 2; any attempt to change them is reverted by the operator's reconciliation.

- The requirement is the ability to run the noobaa-endpoint pods in high availability.

- MINPODS and MAXPODS should be configurable so that the noobaa services are not impacted during any activity on the cluster.
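An HPA of the kind the reporter is asking to be able to tune might look like the sketch below. The resource names follow the deployment name mentioned in this bug, but the replica counts and CPU threshold are illustrative assumptions, not the operator's actual defaults:

```yaml
# Illustrative HorizontalPodAutoscaler sketch; replica counts and the
# CPU target are assumptions chosen for the HA scenario described here.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: noobaa-endpoint
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: noobaa-endpoint
  minReplicas: 2    # MINPODS: keep at least two endpoints for HA
  maxReplicas: 4    # MAXPODS: allow scale-out under load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

Keeping `minReplicas` at 2 or more is what allows one endpoint pod to keep serving while another is restarted, for example during an upgrade.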

- Additionally, pod anti-affinity can be set so that no two endpoint pods run on the same node.

- In this way, HA can be maintained for noobaa-endpoint.
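The anti-affinity suggested above could be sketched as follows. The pod label is an assumption for illustration; the operator may use different labels:

```yaml
# Illustrative pod anti-affinity sketch; the pod label is an assumption.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname   # no two endpoints on one node
          labelSelector:
            matchLabels:
              noobaa-s3: noobaa                 # assumed endpoint pod label
```

Using the `preferred` (soft) variant rather than `requiredDuringSchedulingIgnoredDuringExecution` avoids leaving endpoint pods unschedulable when there are more replicas than nodes.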


Version of all relevant components (if applicable):

v4.10

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

- During upgrades the pods get restarted; if noobaa-endpoint is configured for HA, the services won't be impacted by the pod restarts.

Is there any workaround available to the best of your knowledge?
N/A

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
3

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:
No

Steps to Reproduce:
1. Edit the HPA with the required values.
2. Verify the changes in the HPA.
3. Observe that the HPA values are reverted by reconciliation.


Actual results:

- The HPA parameters are reverted by the operator's reconciliation.

Expected results:

- The HPA parameters should be configurable.

Additional info:

- In the next comments.

