Seeing this https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/1146 in recent runs, which looks like a failure to allocate LBs. This could be a bug, rate limits, the controller being down, etc. Needs triage.

```
May 3 15:34:41.108: Timed out waiting for service "service-test" to have a load balancer
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework.(*ServiceTestJig).waitForConditionOrFail(0xc002ba92c0, 0xc0029edfc0, 0x1f, 0xc0023f01a0, 0xc, 0x1176592e000, 0x4e4686e, 0x14, 0x50911d0, 0x0)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/service_util.go:589 +0x1e9
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework.(*ServiceTestJig).WaitForLoadBalancerOrFail(0xc002ba92c0, 0xc0029edfc0, 0x1f, 0xc0023f01a0, 0xc, 0x1176592e000, 0x25)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/service_util.go:548 +0x15d
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades.(*ServiceUpgradeTest).Setup(0xc001e585d0, 0xc0023a42c0)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/upgrades/services.go:52 +0x195
github.com/openshift/origin/test/e2e/upgrade.(*chaosMonkeyAdapter).Test(0xc002475840, 0xc002142960)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/e2e/upgrade/upgrade.go:165 +0x180
github.com/openshift/origin/test/e2e/upgrade.(*chaosMonkeyAdapter).Test-fm(0xc002142960)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/e2e/upgrade/upgrade.go:245 +0x34
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do.func1(0xc002142960, 0xc0027b5d50)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:89 +0x76
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*chaosmonkey).Do
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:86 +0xa7
```

Spawned from https://bugzilla.redhat.com/show_bug.cgi?id=1703878 because it looks different.
```
I0503 15:55:36.365583 1 aws_loadbalancer.go:956] Creating load balancer for e2e-tests-service-upgrade-5m5v8/service-test with name: a2c8c0f5d6db611e9a71c0eb651e2eef
E0503 15:55:37.380339 1 service_controller.go:219] error processing service e2e-tests-service-upgrade-5m5v8/service-test (will retry): failed to ensure load balancer for service e2e-tests-service-upgrade-5m5v8/service-test: TooManyLoadBalancers: Exceeded quota of account 460538899914
	status code: 400, request id: e47e5fea-6dbb-11e9-8e52-55a18a5938ed
I0503 15:55:37.380396 1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-tests-service-upgrade-5m5v8", Name:"service-test", UID:"2c8c0f5d-6db6-11e9-a71c-0eb651e2eefe", APIVersion:"v1", ResourceVersion:"15493", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service e2e-tests-service-upgrade-5m5v8/service-test: TooManyLoadBalancers: Exceeded quota of account 460538899914
```
Seems the AWS service limit is the root cause. Closing it!

```
1 event.go:221] Event(v1.ObjectReference{Kind:"Service", Namespace:"e2e-tests-service-upgrade-5m5v8", Name:"service-test", UID:"2c8c0f5d-6db6-11e9-a71c-0eb651e2eefe", APIVersion:"v1", ResourceVersion:"15493", FieldPath:""}): type: 'Warning' reason: 'CreatingLoadBalancerFailed' Error creating load balancer (will retry): failed to ensure load balancer for service e2e-tests-service-upgrade-5m5v8/service-test: TooManyLoadBalancers: Exceeded quota of account 460538899914
```
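For anyone hitting this again: a minimal sketch of counting the classic ELBs currently in the account, which is the number the `TooManyLoadBalancers` quota above is comparing against. This is not part of the CI tooling; it assumes aws-sdk-go v1, credentials from the environment, and a hard-coded region you would need to adjust.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	// Region is an assumption; substitute whichever region the CI cluster runs in.
	sess, err := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
	if err != nil {
		log.Fatal(err)
	}

	count := 0
	// Page through every classic ELB in the account/region and total them up.
	err = elb.New(sess).DescribeLoadBalancersPages(&elb.DescribeLoadBalancersInput{},
		func(page *elb.DescribeLoadBalancersOutput, lastPage bool) bool {
			count += len(page.LoadBalancerDescriptions)
			return true
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("classic load balancers in use: %d\n", count)
}
```

If the count is at or near the account's classic load balancer limit, leaked LBs from earlier test runs are the likely culprit rather than a controller bug.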
In some failures:

```
I0506 18:05:50.152247 1 garbagecollector.go:409] processing item [v1/clusteroperator, namespace: , name: storage, uid: bc21f1da-7019-11e9-a9c2-12a502e58680]
W0506 18:05:54.103763 1 retry_handler.go:99] Got RequestLimitExceeded error on AWS request (ec2::DeleteSecurityGroup)
W0506 18:05:54.991005 1 retry_handler.go:55] Inserting delay before AWS request (ec2::DeleteSecurityGroup) to avoid RequestLimitExceeded: 6s
W0506 18:06:01.015799 1 retry_handler.go:99] Got RequestLimitExceeded error on AWS request (ec2::DeleteSecurityGroup)
W0506 18:06:02.016126 1 retry_handler.go:55] Inserting delay before AWS request (ec2::DeleteSecurityGroup) to avoid RequestLimitExceeded: 11s
```
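Those delays are the cloud provider's retry_handler backing off when EC2 starts throttling the account. For illustration only, a rough, hypothetical sketch of that back-off-and-retry shape (not the actual retry_handler code), assuming aws-sdk-go v1 error types; the helper name, starting delay, and attempt count are made up:

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws/awserr"
)

// retryOnThrottle is a hypothetical helper: it retries fn with a growing delay
// whenever AWS reports RequestLimitExceeded, similar in spirit to the
// 6s -> 11s delays in the log above.
func retryOnThrottle(fn func() error, attempts int) error {
	delay := 3 * time.Second
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		aerr, ok := err.(awserr.Error)
		if !ok || aerr.Code() != "RequestLimitExceeded" {
			return err // not a throttling error, so don't retry
		}
		fmt.Printf("throttled, sleeping %s before retrying\n", delay)
		time.Sleep(delay)
		delay *= 2 // back off further on each throttled attempt
	}
	return err
}

func main() {
	calls := 0
	err := retryOnThrottle(func() error {
		calls++
		if calls < 3 {
			// Simulate the throttling error seen in the controller logs.
			return awserr.New("RequestLimitExceeded", "Request limit exceeded", nil)
		}
		return nil
	}, 5)
	fmt.Printf("done after %d calls, err=%v\n", calls, err)
}
```

So the RequestLimitExceeded warnings here are the account being throttled on EC2 API calls during teardown, which slows deletion down but is retried, as opposed to the hard TooManyLoadBalancers quota failure above.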