Bug 1332482 - Notify the user when a PVC requests storage over the capacity limit in OSO
Summary: Notify the user when a PVC requests storage over the capacity limit in OSO
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Erin Boyd
QA Contact: zhaliu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-03 10:12 UTC by mdong
Modified: 2017-03-08 18:43 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-18 12:40:44 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:0066 0 normal SHIPPED_LIVE Red Hat OpenShift Container Platform 3.4 RPM Release Advisory 2017-01-18 17:23:26 UTC

Description mdong 2016-05-03 10:12:28 UTC
Description of problem:
When a PersistentVolumeClaim requests storage over the capacity limit (1 GB) in OSO, the user gets no notification or warning message; the PVC status is simply Pending. This is not user friendly.

Version-Release number of selected component (if applicable):
dev-preview-int

How reproducible:
Always


Steps to Reproduce:
1. Create a project named "test"
2. Create a PVC and check its status with the following commands:
[dongm@dhcp-136-41 tmp]$ cat 5.yaml 
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claim-over-limit"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "1.1Gi"


[dongm@dhcp-136-41 tmp]$ oc create -f 5.yaml 
persistentvolumeclaim "claim-over-limit" created
[dongm@dhcp-136-41 tmp]$ oc get pvc
NAME               STATUS    VOLUME         CAPACITY   ACCESSMODES   AGE
claim-over-limit   Pending                                           5s
claim4             Bound     pv-aws-exfj7   500M       RWO           1m

[dongm@dhcp-136-41 tmp]$ oc describe pvc claim-over-limit
Name:		claim-over-limit
Namespace:	test
Status:		Pending
Volume:		
Labels:		<none>
Capacity:	
Access Modes:

Actual results:
The PersistentVolumeClaim is created successfully, but its status stays Pending with no warning.

Expected results:
The user should get a notification or warning message when requesting storage over the capacity limit (1 GB).

Additional info:

Comment 2 Mark Turansky 2016-05-25 12:33:06 UTC
This is, essentially, an RFE as there is no means today to communicate *why* a PVC might be pending.  There are also many possible reasons why a PVC remains Pending, from mismatched access modes to exceeded capacity to simply no PVs to bind to.

I believe this is a UX issue for the storage team. Assigning to Erin Boyd.

Comment 3 Dan Mace 2016-05-25 12:58:06 UTC
This seems like it's best resolved by integrating the Kube dynamic provisioner with quota.

Comment 4 Bradley Childs 2016-08-08 20:34:06 UTC
https://github.com/kubernetes/kubernetes/pull/30145

Comment 7 Erin Boyd 2016-10-26 19:49:42 UTC
This fix should be in this merge that happened 14 days ago:
https://github.com/kubernetes/kubernetes/pull/30145

Please retest.

Comment 8 Troy Dawson 2016-10-28 19:50:23 UTC
This has been merged into ose and is in OSE v3.4.0.17 or newer.

Comment 10 zhaliu 2016-11-10 05:40:00 UTC
This bug can't be verified now because the LimitRange feature is not available in INT:
https://bugzilla.redhat.com/show_bug.cgi?id=1391842

A test case has been added that covers this bug.

Comment 12 zhaliu 2016-11-14 02:58:28 UTC
Verified in OCP 3.4; it works well. When creating a PVC over the limit, the claim can't be created and a correct error message is shown.
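For context, the verified behavior relies on a namespace-scoped LimitRange capping PVC storage requests, so an over-limit claim is rejected by admission control at creation time instead of sitting in Pending. A minimal sketch of such a LimitRange (the object name, the 1Gi maximum, and the 1Mi minimum are illustrative assumptions, not values taken from this bug):

```yaml
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "storage-limits"          # illustrative name, not from this bug
spec:
  limits:
    - type: "PersistentVolumeClaim"
      max:
        storage: "1Gi"            # claims requesting more should be rejected at creation
      min:
        storage: "1Mi"
```

With a LimitRange like this in the namespace, `oc create -f 5.yaml` (which requests 1.1Gi) should fail immediately with an error stating the maximum, rather than leaving the claim Pending with no explanation.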

Comment 14 errata-xmlrpc 2017-01-18 12:40:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066

