Bug 1474274
| Summary: | Pod stuck in ContainerCreating status when an invalid value is set for pod bandwidth | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Yan Du <yadu> |
| Component: | Networking | Assignee: | Ivan Chavero <ichavero> |
| Status: | CLOSED UPSTREAM | QA Contact: | Meng Bo <bmeng> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.6.0 | CC: | akostadi, aos-bugs, decarr, ichavero |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-11-07 06:37:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Yan Du
2017-07-24 09:30:21 UTC
Invalid values should be caught in validation, not at runtime.

Derek, that makes sense to me. But for now we need to make sure the user receives some feedback. Even admins can have trouble diagnosing such issues when they don't expect where the trouble could be. I don't know whether `ingress-bandwidth` is the only annotation that can have this problem. IMO we need to be sure to send feedback for any post-validation issues, now and in the future.

While I agree that we shouldn't have post-validation issues, we obviously do, and new features can introduce them at any time. Implementing a way for such errors to be propagated back to the user is essential for a reasonable UX.

You can always send back a new event specific to invalid bandwidth settings. Piggybacking on the FailedSync event is not ideal. Honestly, I think the FailedSync event should go away, as it means nothing to a user. An InvalidBandwidth event is much more meaningful.

The current version of OpenShift does not have this problem:

```
[root@localhost origin]# oc get all
NAME       READY     STATUS    RESTARTS   AGE
po/iperf   1/1       Running   0          9m

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)                 AGE
svc/kubernetes   172.30.0.1   <none>        443/TCP,53/UDP,53/TCP   11m

[root@localhost origin]# oc version
oc v3.7.0-alpha.1+994a5a6-244
kubernetes v1.7.0+695f48a16f
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://192.168.1.69:8443
openshift v3.7.0-alpha.0+66c7f6c-430-dirty
kubernetes v1.7.0+695f48a16f
```

Feel free to reopen this bug if the problem persists.

I can still reproduce this issue on the latest OCP 3.7 (openshift v3.7.7, kubernetes v1.7.6+a08f5eeb62):

```
# oc get all
NAME       READY     STATUS              RESTARTS   AGE
po/iperf   0/1       ContainerCreating   0          21m
```

@Ivan, are you using an invalid value for pod bandwidth?
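The thread above argues that bad bandwidth values should be rejected at validation time rather than surfacing as a pod stuck in ContainerCreating. A minimal sketch of such a pre-flight check, in Python; the function names and the accepted suffix table are assumptions for illustration, not Kubernetes' actual `resource.Quantity` parser:

```python
import re

# Decimal SI suffixes accepted here; an assumption for illustration only.
# The real parsing lives in k8s.io/apimachinery's resource.Quantity.
_SUFFIXES = {"": 1, "k": 10**3, "M": 10**6, "G": 10**9, "T": 10**12, "P": 10**15}

def parse_bandwidth(value):
    """Parse a bandwidth annotation like '10M' into bits per second.

    Raises ValueError for malformed or non-positive values, mirroring the
    idea that '-10M' should fail validation instead of hanging the pod.
    """
    m = re.fullmatch(r"(-?\d+)([kMGTP]?)", value)
    if m is None:
        raise ValueError("malformed bandwidth quantity: %r" % value)
    amount = int(m.group(1)) * _SUFFIXES[m.group(2)]
    if amount <= 0:
        raise ValueError("bandwidth must be positive, got %r" % value)
    return amount

def validate_pod_bandwidth(annotations):
    """Return a list of error strings for invalid bandwidth annotations."""
    errors = []
    for key in ("kubernetes.io/ingress-bandwidth",
                "kubernetes.io/egress-bandwidth"):
        if key in annotations:
            try:
                parse_bandwidth(annotations[key])
            except ValueError as exc:
                errors.append("%s: %s" % (key, exc))
    return errors
```

Run against the annotations in the reproducer below, a check like this would flag both `-10M` and `-3M` before the pod ever reaches the kubelet, which is the feedback the commenters are asking for.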
The issue can only be reproduced when using an invalid pod bandwidth, e.g.:

```json
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "iperf",
        "annotations": {
            "kubernetes.io/egress-bandwidth": "-10M",
            "kubernetes.io/ingress-bandwidth": "-3M"
        }
    },
    "spec": {
        "containers": [{
            "name": "iperf",
            "image": "yadu/hello-openshift-iperf"
        }]
    }
}
```

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
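As a self-contained illustration of linting this reproducer before submission (an assumed helper, not `oc` behavior), the following sketch embeds the manifest above and reports the annotations whose values are not a positive integer with an optional k/M/G/T/P suffix:

```python
import json

# The reproducer manifest from above, embedded for a self-contained demo.
MANIFEST = """
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "iperf",
        "annotations": {
            "kubernetes.io/egress-bandwidth": "-10M",
            "kubernetes.io/ingress-bandwidth": "-3M"
        }
    },
    "spec": {"containers": [{"name": "iperf",
                             "image": "yadu/hello-openshift-iperf"}]}
}
"""

def invalid_bandwidth_annotations(manifest_json):
    """Return the bandwidth annotation keys whose values fail a simple
    positivity check (illustrative only, not the real Kubernetes validator)."""
    pod = json.loads(manifest_json)
    annotations = pod.get("metadata", {}).get("annotations", {})
    bad = []
    for key in ("kubernetes.io/ingress-bandwidth",
                "kubernetes.io/egress-bandwidth"):
        value = annotations.get(key)
        if value is None:
            continue
        number = value.rstrip("kMGTP")     # drop an optional SI suffix
        if not number.isdigit():           # rejects "-10M", "", "abc"
            bad.append(key)
    return bad

print(invalid_bandwidth_annotations(MANIFEST))
```

With this manifest, both annotation keys are reported, matching the symptom in the bug: negative quantities slip past pod admission and only fail later inside the kubelet.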