This bug is a continuation of bug 1813656.

Description of problem (please be detailed as possible and provide log snippets):
When creating a Kubernetes Pool via the OCS web UI or YAML, there is no limit on the number of pods the user can create.

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Create no more than 20 storage nodes.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Actual results:
Creation is possible and goes through.

Expected results:
Creation is blocked.
Validation can't be enforced for YAML; for the UI side I'm moving this to the appropriate component.
This validation should be done in the backend, as it happens for any of the CRDs in k8s. We don't validate things in the UI. If the backend throws an error, we will be able to show it in the UI. @vineet is already working on the admission controller for OCS. We can ask him to accommodate this validation.
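To illustrate the kind of check the admission controller could pick up, here is a minimal sketch of a validating webhook handler in Go. The resource shape (spec.volumeCount), the limit of 20, and the endpoint path are assumptions for illustration only, not the real OCS CRD schema or webhook.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// maxVolumes is an assumed limit, taken from the workaround in this bug.
const maxVolumes = 20

// poolObject mirrors only the hypothetical field we need for validation.
type poolObject struct {
	Spec struct {
		VolumeCount int `json:"volumeCount"`
	} `json:"spec"`
}

func validatePool(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	var pool poolObject
	_ = json.Unmarshal(review.Request.Object.Raw, &pool)

	// Allow by default; deny with a message when the assumed limit is exceeded.
	resp := &admissionv1.AdmissionResponse{UID: review.Request.UID, Allowed: true}
	if pool.Spec.VolumeCount > maxVolumes {
		resp.Allowed = false
		resp.Result = &metav1.Status{
			Message: fmt.Sprintf("volumeCount %d exceeds the maximum of %d", pool.Spec.VolumeCount, maxVolumes),
		}
	}

	review.Response = resp
	_ = json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/validate-pool", validatePool)
	// TLS setup omitted; a real admission webhook must be served over HTTPS.
	_ = http.ListenAndServe(":8443", nil)
}

With a check like this in place, both the UI and the YAML/CLI path get the same rejection message from the API server.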
I agree with you, but that's not the direction the OpenShift UI wants to go. These limits keep changing as our products evolve, and there has always been a mess around the backend and UI being out of sync (we faced this a lot in OpenShift 3.x, and multiple bugs were raised for limits not being in sync). So with the move to OCP 4.x, the ideology of maintaining multiple sources is not appreciated, given that we have the CLI (users don't get these error indications with the k8s CLI anyway) and a mechanism to surface the errors. Moreover, OpenShift users are used to this, as it is the same experience they get on every page they visit.
I understand that we don't have the power to change this, and that's OK :) We will add a check in the operator as well. But multiple sources can be handled by the UI (which is a client) asking the BE for the configuration/limitations/capabilities etc.; this way you only maintain a single place and the rest are dynamic. Also, even if the user is educated, I still think it's a bad user experience to allow them to do something we know would fail... You don't let them enter an email in a non-email format just to fail on save, right? Same thing :) Again, I understand we don't have the power to change this.
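A minimal sketch of the "single source of truth" idea, assuming the operator publishes its limits in a ConfigMap that any client (console, CLI, or the webhook itself) can read. The namespace, ConfigMap name, and key used here are hypothetical, not an existing OCS API.

package main

import (
	"context"
	"fmt"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// maxVolumeCount reads the limit from a hypothetical "ocs-limits" ConfigMap,
// so clients never hard-code the value themselves.
func maxVolumeCount(ctx context.Context) (int, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return 0, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return 0, err
	}
	cm, err := client.CoreV1().ConfigMaps("openshift-storage").Get(ctx, "ocs-limits", metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(cm.Data["maxVolumeCount"])
}

func main() {
	limit, err := maxVolumeCount(context.Background())
	if err != nil {
		// Fall back to an assumed default if the ConfigMap is unavailable.
		fmt.Println("falling back to default limit:", err)
		limit = 20
	}
	fmt.Println("max volume count:", limit)
}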
(In reply to Nimrod Becker from comment #8)
> I understand that we don't have the power to change this, and that's OK :)
> We will add a check in the operator as well.

I agree and understand what you are saying, but as you said, we don't have the power to change it.

> But multiple sources can be handled by the UI (which is a client) asking the BE
> for the configuration/limitations/capabilities etc.; this way you only
> maintain a single place and the rest are dynamic.

This might be possible in our case, but it isn't possible for every use case. If there is a regex that defines a valid name for a k8s resource, that doesn't mean the backend will serve that regex as configuration so the UI can handle it. There can be hundreds or thousands of parameters in k8s resources (given how general k8s is), and if we start doing validation for each one of them in UI code, the code itself becomes very tough to manage. What we are talking about is a configuration-based UI and error handling, which in my experience is very hard to achieve for a platform like OpenShift. See the sketch below.
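As a concrete example of the name-regex point, the rules for a valid Kubernetes resource name already live in apimachinery, so a backend or webhook can reuse them directly, while a UI would have to re-implement the same regex and keep it in sync by hand. This is just an illustration of where such validation already exists, not OCS code.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// IsDNS1123Subdomain returns a list of validation errors, empty if valid.
	for _, name := range []string{"my-pool", "My_Pool"} {
		if errs := validation.IsDNS1123Subdomain(name); len(errs) > 0 {
			fmt.Printf("%q is invalid: %v\n", name, errs)
		} else {
			fmt.Printf("%q is valid\n", name)
		}
	}
}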
Missing devel-ack.
I tested with 4.5.0-0.nightly-2020-08-27-110054. A backing store with volume size 15 GiB was accepted by the UI; 16 GiB should be the minimum size according to https://bugzilla.redhat.com/show_bug.cgi?id=1813656.
Created attachment 1712838 [details] Screencast of backing store creation with volume size 15
Sorry, this BZ is about the number of volumes only, so I'll move it to Verified and open another one.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:3754