Description of problem:
Bug 1900989 fixes `oc idle` in 4.6 and 4.7 by annotating a workload's service with the proper idle annotations, in addition to the workload's endpoints, among other things. Clusters that have idled workloads and upgrade to a cluster version containing the fixes for Bug 1900989 will run into issues with unidling: unidling the idled workload will not work without manual user intervention, since the service idle annotations are needed for unidling to work going forward.

Steps to Reproduce:
1. Idle a workload (e.g., run `oc idle` on a service + deployment + route)
2. Upgrade the cluster to a cluster version containing the fixes for Bug 1900989

Actual results:
Curling the idled route does not "wake it up".

Expected results:
Unidling a route after an upgrade should always work without user intervention.

Additional info:
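To check whether a cluster is in the affected state, you can compare the idle annotations on the endpoints and the service. A minimal sketch, assuming the service name `service-unsecure` and namespace `test1` from the reproduction steps (substitute your own resources); `idling.alpha.openshift.io/idled-at` is the annotation key `oc idle` writes:

```shell
# Print the idled-at annotation from the endpoints, then from the service.
# Dots in the annotation key must be escaped in the jsonpath expression.
oc get endpoints service-unsecure -n test1 \
  -o jsonpath='{.metadata.annotations.idling\.alpha\.openshift\.io/idled-at}{"\n"}'
oc get service service-unsecure -n test1 \
  -o jsonpath='{.metadata.annotations.idling\.alpha\.openshift\.io/idled-at}{"\n"}'
# If the endpoints carry the annotation but the service does not, the workload
# was idled before the fix and unidling will need the workaround below.
```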
Note that the fix for this bug should only be available in 4.6 and 4.7, since any clusters upgrading to 4.8 and beyond would already have the idle annotations mirrored over from 4.6.z/4.7.z (we can shave a couple of seconds off of operator start time by not performing the idle annotations check in future releases).
Workaround for customers upgrading with idled workloads to a 4.6.z/4.7.z version containing the new idle changes from Bug 1900989:
0) Wait for the upgrade to complete.
1) Remove the idle annotations from the idled endpoints (oc edit ...), noting the idled scalable resources and their prior replica counts.
2) Manually scale the idled scalable resources back up to the desired number of replicas (oc scale ...).
3) The route should now be unidled.
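The workaround steps above can be sketched as concrete commands. This is an illustrative sequence, not the verified procedure from the report: the resource names (`service-unsecure`, `web-server-rc`, namespace `test1`) and replica count are assumptions taken from the reproduction setup, and it uses `oc annotate` with a trailing `-` to drop the annotation keys rather than interactive `oc edit`:

```shell
# 1) Remove the idle annotations from the idled endpoints
#    (note the idled scalable resources and their prior replica counts first):
oc annotate endpoints service-unsecure -n test1 \
  idling.alpha.openshift.io/idled-at- \
  idling.alpha.openshift.io/unidle-targets-

# 2) Manually scale the idled scalable resources back up:
oc scale replicationcontroller/web-server-rc -n test1 --replicas=1

# 3) The route should now respond again:
curl -I http://<route-host>/
```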
Verified in "4.8.0-0.nightly-2021-02-22-111248" release version. Upgrading a v4.7 cluster to the said payload, the idled route gets woken up and becomes accessible via curl without any manual intervention:

------
$ oc get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-server-rc-tb4d6   1/1     Running   0          10

NAME                                  DESIRED   CURRENT   READY   AGE
replicationcontroller/web-server-rc   1         1         1       10

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/service-secure     ClusterIP   172.30.89.95     <none>        27443/TCP   10m
service/service-unsecure   ClusterIP   172.30.100.229   <none>        27017/TCP   10

NAME                                        HOST/PORT                                                                       PATH   SERVICES           PORT   TERMINATION   WILDCARD
route.route.openshift.io/service-unsecure   service-unsecure-test1.apps.aiyengar-oc47rc3-2502.qe.devcluster.openshift.com          service-unsecure   http                 None

$ oc idle service-unsecure
WARNING: idling when network policies are in place may cause connections to bypass network policy entirely
The service "test1/service-unsecure" has been marked as idled
The service will unidle ReplicationController "test1/web-server-rc" to 1 replicas once it receives traffic
ReplicationController "test1/web-server-rc" has been idled

$ oc get all
NAME                                  DESIRED   CURRENT   READY   AGE
replicationcontroller/web-server-rc   0         0         0       10

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/service-secure     ClusterIP   172.30.89.95     <none>        27443/TCP   10m
service/service-unsecure   ClusterIP   172.30.100.229   <none>        27017/TCP   10

NAME                                        HOST/PORT                                                                       PATH   SERVICES           PORT   TERMINATION   WILDCARD
route.route.openshift.io/service-unsecure   service-unsecure-test1.apps.aiyengar-oc47rc3-2502.qe.devcluster.openshift.com          service-unsecure   http                 none

$ oc get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-rc.3   True        False         25m     Cluster version is 4.7.0-rc.3

$ oc adm upgrade --to=4.8.0-0.nightly-2021-02-22-111248
Updating to 4.8.0-0.nightly-2021-02-22-111248

$ oc get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-rc.3   True        True          44s     Working towards 4.8.0-0.nightly-2021-02-22-111248: 69 of 669 done (10% complete)

...

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-02-22-111248   True        False         6m5s    Cluster version is 4.8.0-0.nightly-2021-02-22-111248

NAME                                  DESIRED   CURRENT   READY   AGE
replicationcontroller/web-server-rc   0         0         0       148

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/service-secure     ClusterIP   172.30.89.95     <none>        27443/TCP   148m
service/service-unsecure   ClusterIP   172.30.100.229   <none>        27017/TCP   148m

NAME                                        HOST/PORT                                                                       PATH   SERVICES           PORT   TERMINATION   WILDCARD
route.route.openshift.io/service-unsecure   service-unsecure-test1.apps.aiyengar-oc47rc3-2502.qe.devcluster.openshift.com          service-unsecure   http                 None

$ curl service-unsecure-test1.apps.aiyengar-oc47rc3-2502.qe.devcluster.openshift.com -I
HTTP/1.1 200 OK
server: nginx/1.18.0
date: Thu, 25 Feb 2021 10:10:31 GMT
content-type: text/html
content-length: 46
last-modified: Thu, 25 Feb 2021 10:10:30 GMT
etag: "60377796-2e"
accept-ranges: bytes
set-cookie: e96c07fa08f2609cadf847f019750244=fec2778a4dd919b178dbceb7a28a464e; path=/; HttpOnly
cache-control: private
connection: close

NAME                      READY   STATUS    RESTARTS   AGE
pod/web-server-rc-b8w9g   1/1     Running   0          40s

NAME                                  DESIRED   CURRENT   READY   AGE
replicationcontroller/web-server-rc   1         1         1       15

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/service-secure     ClusterIP   172.30.89.95     <none>        27443/TCP   152m
service/service-unsecure   ClusterIP   172.30.100.229   <none>        27017/TCP   15

NAME                                        HOST/PORT                                                                       PATH   SERVICES           PORT   TERMINATION   WILDCARD
route.route.openshift.io/service-unsecure   service-unsecure-test1.apps.aiyengar-oc47rc3-2502.qe.devcluster.openshift.com          service-unsecure   http                 None
------
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438