Bug 1835636 - [OCS YAML + UI] Kubernetes Pool node limit isn't communicated/enforced
Summary: [OCS YAML + UI] Kubernetes Pool node limit isn't communicated/enforced
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: Multi-Cloud Object Gateway
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: OCS 4.5.0
Assignee: Evgeniy Belyi
QA Contact: Elena Bondarenko
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-14 08:41 UTC by aberner
Modified: 2020-09-15 10:17 UTC

Fixed In Version: 4.5.0-444.ci
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-15 10:17:01 UTC
Embargoed:
nbecker: needinfo+


Attachments
Screencast of backing store creation with volume size 15 (474.44 KB, video/webm), 2020-08-27 14:56 UTC, Elena Bondarenko


Links
Github noobaa/noobaa-operator pull 311 (closed): Max num volumes PV pool fix (last updated 2020-08-27 10:02:06 UTC)
Github openshift/console pull 5508 (closed): Bug 1835636: Add limit for PVC for BS creation (last updated 2020-08-27 10:02:04 UTC)
Red Hat Product Errata RHBA-2020:3754 (last updated 2020-09-15 10:17:23 UTC)

Description aberner 2020-05-14 08:41:43 UTC
This bug is a continuation to bug 1813656

Description of problem (please be as detailed as possible and provide log
snippets):
When creating a Kubernetes Pool via the OCS web UI or YAML, there is no limit on the number of pods the user can create.
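
For illustration, a pv-pool backing store of roughly this shape goes through today even when it asks for far more volumes than intended (field names follow the noobaa.io/v1alpha1 BackingStore CRD; the name, storage class and numbers are made up for the example):

apiVersion: noobaa.io/v1alpha1
kind: BackingStore
metadata:
  name: example-pv-pool          # hypothetical name, for illustration only
  namespace: openshift-storage
spec:
  type: pv-pool
  pvPool:
    numVolumes: 50               # well above the ~20-volume guidance, yet accepted
    resources:
      requests:
        storage: 16Gi
    storageClass: gp2            # hypothetical storage class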

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Create no more than 20 storage nodes

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1


Actual results:
Creation is possible and goes through.


Expected results:
Creation is blocked.

Comment 2 Nimrod Becker 2020-05-14 08:44:15 UTC
YAML can't be enforced; for the UI, I'm moving it to the appropriate component.

Comment 3 Ankush Behl 2020-05-21 10:32:41 UTC
This validation should be done in the backend, like it happens for any of the CRDs in k8s.
We don't validate things in the UI. If the backend throws an error, we will be able to show it in the UI.


@vineet is already working on the admission controller for OCS. We can ask him to accommodate this validation.
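
To illustrate the kind of backend validation meant here (just a sketch, not the shipped noobaa CRD), an OpenAPI schema limit on the field would make the API server itself reject out-of-range values for CLI and UI users alike:

# Hypothetical excerpt of a CustomResourceDefinition with built-in validation;
# the field path and the limit of 20 are illustrative only.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
spec:
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              pvPool:
                type: object
                properties:
                  numVolumes:
                    type: integer
                    minimum: 1
                    maximum: 20   # oc apply / kubectl apply is rejected if exceeded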

Comment 7 Ankush Behl 2020-05-21 16:22:30 UTC
I agree with you, but that's not the direction the OpenShift UI wants to go. These limits keep changing as our products evolve, and the backend and UI easily end up out of sync (we faced this a lot in OpenShift 3.x, and multiple bugs were raised for limits that were not in sync).

So with the move to OCP 4.x, the idea of maintaining multiple sources of validation is not appreciated, given that we have the CLI (users don't get these error indications with the k8s CLI anyway) and a mechanism to surface the errors from the backend.

Moreover, the OpenShift user is educated to work this way, as it is the same experience they get on every page they go to.

Comment 8 Nimrod Becker 2020-05-21 16:26:13 UTC
I understand that we don't have the power to change this, that's ok :) We will add a check in the operator as well.

But multiple sources can be handled by the UI (which is a client) asking the BE for the configuration/limitations/capabilities, etc. That way you only maintain a single place and the rest is dynamic.
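
One way that could look (purely a sketch; nothing like this exists in the product today, and the name and keys are invented) is the operator publishing its limits in an object the console reads when rendering the form:

# Hypothetical ConfigMap maintained by the operator as the single source of
# truth for limits; all names and values here are invented for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: noobaa-console-capabilities
  namespace: openshift-storage
data:
  pvPoolMaxNumVolumes: "20"
  pvPoolMinVolumeSize: "16Gi"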

Also, even if the user is educated, I still think it's a bad user experience to allow them to do something we know will fail... you don't let them enter an email in a non-email format just to fail on save, right? Same thing :)

Again, I understand we don't have the power to change.

Comment 9 Ankush Behl 2020-05-21 16:44:43 UTC
(In reply to Nimrod Becker from comment #8)
> I understand that we don't have the power to change this, that's ok :) We will
> add a check in the operator as well.
I agree and understand what you are saying. But as you said, we don't have the power to change it.
> 
> But multiple sources can be handled by the UI (which is a client) asking the BE
> for the configuration/limitations/capabilities, etc. That way you only
> maintain a single place and the rest is dynamic.

This might be possible in our case, but it's not possible for EVERY use case.
There is a regex for how a k8s resource name must look, but that doesn't mean the backend
will send that regex as configuration so the UI can handle it.

There can be hundreds or thousands of parameters in k8s resources (thinking of the generality of k8s),
and if we start doing validation for each one of them in UI code, it's going to be tough to manage the code itself.

I think what we are talking about is a configuration-based UI and errors, which in my experience is very hard to achieve for a platform like OpenShift.

Comment 10 Yaniv Kaul 2020-06-24 15:41:16 UTC
Missing devel-ack.

Comment 14 Elena Bondarenko 2020-08-27 14:55:24 UTC
I tested with 4.5.0-0.nightly-2020-08-27-110054. The backing store with volume size 15 GiB was accepted by the UI. 16 GiB should be the minimum size according to https://bugzilla.redhat.com/show_bug.cgi?id=1813656.

Comment 15 Elena Bondarenko 2020-08-27 14:56:56 UTC
Created attachment 1712838 [details]
Screencast of backing store creation with volume size 15

Comment 16 Elena Bondarenko 2020-08-27 15:01:15 UTC
Sorry, this BZ is about the number of volumes only, so I'll move it to Verified and open another one.

Comment 18 errata-xmlrpc 2020-09-15 10:17:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3754

