Bug 1507595
| Summary: | Plan can't restore to the previous good state or update to another acceptable plan | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qixuan Wang <qixuan.wang> |
| Component: | Service Broker | Assignee: | Jeff Peeler <jpeeler> |
| Status: | CLOSED ERRATA | QA Contact: | Zihan Tang <zitang> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.7.0 | CC: | aos-bugs, chezhang, mstaeble, pmorie, smunilla, wsun, zitang |
| Target Milestone: | --- | | |
| Target Release: | 3.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text:
There were several problems related to updates: spec changes for instances were blocked even when no operation was in progress; deleting a service instance that had been updated to an invalid service plan would crash the controller-manager; and instances were not updated properly if a previous update had failed.
| Story Points: | --- | | |
|---|---|---|---|
| Clone Of: | | Environment: | |
| Last Closed: | 2018-12-13 19:26:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (Qixuan Wang, 2017-10-30 16:27:22 UTC)
https://github.com/kubernetes-incubator/service-catalog/issues/1487 tracks the issue with updating a ServiceInstance after a failed update. https://github.com/kubernetes-incubator/service-catalog/issues/1499 tracks the controller-manager crashing when deleting a ServiceInstance whose plan name refers to a non-existent plan.

Upstream PRs:
- https://github.com/kubernetes-incubator/service-catalog/pull/1501
- https://github.com/kubernetes-incubator/service-catalog/pull/1502

Fixed in origin with: https://github.com/openshift/origin/pull/17166

Tested on OCP (openshift v3.7.0-0.196.0, kubernetes v1.7.6+a08f5eeb62, etcd 3.2.8, brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-service-catalog:v3.7.0-0.196.0.0, brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-ansible-service-broker:v3.7.0-0.196.0.0).

Cases 5 and 6 below are correctly prevented from downgrading, so the plan can roll back. However, cases 2 and 4 prevent restoring from an invalid plan; is this expected?

1. [Edit] spec: dev -> dev-123
   [Describe] Message: The instance references a ClusterServicePlan that does not exist. spec: dev-123, status: dev

2. [Edit] spec: dev-123 -> dev/prod

   ```
   # serviceinstances "rh-rhscl-postgresql-apb-mh5s2" was not valid:
   # * spec: Forbidden: Another update for this service instance is in progress
   [root@host-172-16-120-8 ~]# oc edit serviceinstance rh-rhscl-postgresql-apb-mh5s2
   error: serviceinstances "rh-rhscl-postgresql-apb-mh5s2" is invalid
   A copy of your changes has been stored to "/tmp/oc-edit-huczk.yaml"
   error: Edit cancelled, no valid changes were saved.
   ```

   [Describe] Message: The instance references a ClusterServicePlan that does not exist. spec: dev-123, status: dev

3. [Edit] spec: prod -> prod-456
   [Describe] Message: The instance references a ClusterServicePlan that does not exist. spec: prod-456, status: prod

4. [Edit] spec: prod-456 -> dev/prod

   ```
   # serviceinstances "rh-rhscl-postgresql-apb-xq2ns" was not valid:
   # * spec: Forbidden: Another update for this service instance is in progress
   [root@host-172-16-120-8 ~]# oc edit serviceinstance rh-rhscl-postgresql-apb-xq2ns
   error: serviceinstances "rh-rhscl-postgresql-apb-xq2ns" is invalid
   A copy of your changes has been stored to "/tmp/oc-edit-zrq5w.yaml"
   error: Edit cancelled, no valid changes were saved.
   ```

Downgrade and rollback:

5. [Edit] spec: prod -> dev
   [Describe] Message: plan update not possible, spec: dev, status: prod

6. [Edit] spec: dev -> prod
   [Describe] Message: The instance is being updated asynchronously, spec: prod, status: prod

The failure of cases 2 and 4 is not expected.

This bug was unfortunately not addressed completely. The failures are captured upstream in https://github.com/kubernetes-incubator/service-catalog/issues/1533.

Version-Release number of selected component (if applicable):
openshift v3.9.0-0.19.0
kubernetes v1.9.0-beta1
etcd 3.2.8
ose-ansible-service-broker:v3.9
ose-service-catalog:v3.9

Now we support plan rollback from a bad state (dev-123 -> dev, or prod-456 -> prod) and downgrade (prod -> dev).
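The rule the fix aims for in cases 2 and 4 can be sketched as a simple predicate. This is a hypothetical illustration, not the actual service-catalog validation code: a spec edit (such as changing the plan name) should be rejected only while an operation is genuinely in progress, not merely because the previous update failed or the current plan reference is invalid.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstanceStatus:
    # Mirrors status.asyncOpInProgress on a ServiceInstance.
    async_op_in_progress: bool
    # Hypothetical field standing in for "an operation is currently running",
    # e.g. "Update" while the controller is actively reconciling.
    current_operation: Optional[str]

def update_allowed(status: InstanceStatus) -> bool:
    """Return True when a spec edit may proceed.

    The buggy behaviour in cases 2 and 4 was equivalent to also rejecting
    edits after a failed update, which left the instance stuck pointing at
    a non-existent plan with no way to restore a good one.
    """
    return not status.async_op_in_progress and status.current_operation is None

# An instance stuck on a bad plan but with no active operation must stay editable:
stuck = InstanceStatus(async_op_in_progress=False, current_operation=None)
# An instance mid-update is the only case that should be blocked:
busy = InstanceStatus(async_op_in_progress=True, current_operation="Update")
print(update_allowed(stuck), update_allowed(busy))  # True False
```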
I found the plan can't be updated from a nonexistent one to another valid plan, for example:
1) dev-123 -> prod (x) -> dev (x)
2) prod-456 -> dev (x) -> prod (x)

I'm not finding the previous comment to be true with the latest code:

```
$ kubectl get serviceinstances -n test-ns -o yaml
apiVersion: v1
items:
- apiVersion: servicecatalog.k8s.io/v1beta1
  kind: ServiceInstance
  metadata:
    creationTimestamp: 2018-01-22T17:26:27Z
    finalizers:
    - kubernetes-incubator/service-catalog
    generation: 1
    name: ups-instance
    namespace: test-ns
    resourceVersion: "816"
    selfLink: /apis/servicecatalog.k8s.io/v1beta1/namespaces/test-ns/serviceinstances/ups-instance
    uid: 60b76eec-ff99-11e7-9b7f-0242ac110005
  spec:
    clusterServiceClassExternalName: user-provided-service
    clusterServicePlanExternalName: invalid-default
    externalID: 2542f01d-751b-45a5-ba5c-5d0986c42f08
    parameters:
      param-1: value-1
      param-2: value-2
    updateRequests: 0
  status:
    asyncOpInProgress: false
    conditions:
    - lastTransitionTime: 2018-01-22T17:26:27Z
      message: 'The instance references a ClusterServicePlan that does not exist.
        References a non-existent ClusterServicePlan (K8S: "" ExternalName: "invalid-default")
        on ClusterServiceClass (K8S: "4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468" ExternalName:
        "user-provided-service") or there is more than one (found: 0)'
      reason: ReferencesNonexistentServicePlan
      status: "False"
      type: Ready
    deprovisionStatus: NotRequired
    orphanMitigationInProgress: false
    reconciledGeneration: 0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

Next, edit to the "default" plan.
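The "edit to the default plan" step can also be done non-interactively with a merge patch instead of `oc edit` / `kubectl edit`. A minimal sketch, assuming the instance and namespace shown in the listing above; the file name is hypothetical:

```yaml
# plan-rollback-patch.yaml (hypothetical file name)
# Apply with a recent kubectl:
#   kubectl patch serviceinstance ups-instance -n test-ns \
#     --type merge --patch-file plan-rollback-patch.yaml
# The controller re-resolves clusterServicePlanRef once the external name is valid.
spec:
  clusterServicePlanExternalName: default
```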
```
$ kubectl get serviceinstances -n test-ns -o yaml
apiVersion: v1
items:
- apiVersion: servicecatalog.k8s.io/v1beta1
  kind: ServiceInstance
  metadata:
    creationTimestamp: 2018-01-22T17:26:27Z
    finalizers:
    - kubernetes-incubator/service-catalog
    generation: 2
    name: ups-instance
    namespace: test-ns
    resourceVersion: "821"
    selfLink: /apis/servicecatalog.k8s.io/v1beta1/namespaces/test-ns/serviceinstances/ups-instance
    uid: 60b76eec-ff99-11e7-9b7f-0242ac110005
  spec:
    clusterServiceClassExternalName: user-provided-service
    clusterServiceClassRef:
      name: 4f6e6cf6-ffdd-425f-a2c7-3c9258ad2468
    clusterServicePlanExternalName: default
    clusterServicePlanRef:
      name: 86064792-7ea2-467b-af93-ac9694d96d52
    externalID: 2542f01d-751b-45a5-ba5c-5d0986c42f08
    parameters:
      param-1: value-1
      param-2: value-2
    updateRequests: 0
  status:
    asyncOpInProgress: false
    conditions:
    - lastTransitionTime: 2018-01-22T17:27:42Z
      message: The instance was provisioned successfully
      reason: ProvisionedSuccessfully
      status: "True"
      type: Ready
    deprovisionStatus: Required
    externalProperties:
      clusterServicePlanExternalID: 86064792-7ea2-467b-af93-ac9694d96d52
      clusterServicePlanExternalName: default
      parameterChecksum: 4fa544b50ca7a33fe5e8bc0780f1f36aa0c2c7098242db27bc8a3e21f4b4ab55
      parameters:
        param-1: value-1
        param-2: value-2
    orphanMitigationInProgress: false
    reconciledGeneration: 2
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

Will look at confirming with openshift next.

Verified using the latest downstream image:
openshift v3.9.0-0.41.0
kubernetes v1.9.1+a0ce1bc657
ASB: 1.1.9; brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-ansible-service-broker:v3.9
Service-catalog: 0.1.3; brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/ose-service-catalog:v3.9

Update instance: dev -> dev123 -> prod, and prod -> prod123 -> dev. Both succeed.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
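The two listings differ in exactly the fields that signal a completed update: the first has generation 1 but reconciledGeneration 0 with Ready=False, while the second has generation 2 matching reconciledGeneration 2 with Ready=True. A hypothetical helper (not part of service-catalog) that captures this check:

```python
def is_reconciled(instance: dict) -> bool:
    """True when the controller has fully processed the latest spec change:
    the Ready condition is "True" and status.reconciledGeneration has caught
    up with metadata.generation."""
    status = instance["status"]
    ready = any(c["type"] == "Ready" and c["status"] == "True"
                for c in status.get("conditions", []))
    return ready and status["reconciledGeneration"] == instance["metadata"]["generation"]

# First listing: plan "invalid-default", generation 1, reconciledGeneration 0.
before = {"metadata": {"generation": 1},
          "status": {"reconciledGeneration": 0,
                     "conditions": [{"type": "Ready", "status": "False"}]}}
# Second listing: plan "default", generation 2, reconciledGeneration 2.
after = {"metadata": {"generation": 2},
         "status": {"reconciledGeneration": 2,
                    "conditions": [{"type": "Ready", "status": "True"}]}}
print(is_reconciled(before), is_reconciled(after))  # False True
```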
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3748