Bug 1942395 - The status is always "Updating" on dc detail page after deployment has failed.
Summary: The status is always "Updating" on dc detail page after deployment has failed.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Jakub Hadvig
QA Contact: Yanping Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-24 10:18 UTC by Yanping Zhang
Modified: 2021-07-27 22:55 UTC (History)
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 22:55:12 UTC
Target Upstream Version:
Embargoed:


Attachments
dc-detail-page (77.05 KB, image/png), 2021-03-24 10:18 UTC, Yanping Zhang
rc-status (35.90 KB, image/png), 2021-03-24 10:23 UTC, Yanping Zhang


Links
GitHub openshift/console pull 8463 (open): Bug 1942395: Display Failed status for DeploymentConfig. Last updated 2021-03-25 16:27:20 UTC
Red Hat Product Errata RHSA-2021:2438. Last updated 2021-07-27 22:55:34 UTC

Description Yanping Zhang 2021-03-24 10:18:22 UTC
Created attachment 1765885 [details]
dc-detail-page

Description of problem:
The "Status" field is always "Updating" on dc detail page after deployment has failed 

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-03-22-104536

How reproducible:
Always

Steps to Reproduce:
1. Create a dc whose deployment will fail. Example yaml:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: faildc
spec:
  strategy:
    type: Rolling
    rollingParams:
      updatePeriodSeconds: 1
      intervalSeconds: 1
      timeoutSeconds: 20
      maxUnavailable: 25%
      maxSurge: 25%
    resources: {}
    activeDeadlineSeconds: 30
  triggers:
    - type: ConfigChange
  selector:
    app: hello-openshiftfail
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-openshiftfail
    spec:
      containers:
        - name: hello-openshift
          image: openshift/hello-openshiftdc
          ports:
            - containerPort: 8080
2. Wait until the deployment has failed; the rc status will be "Failed" (see the condition-check sketch after these steps).
3. Check the dc detail page.
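
For reference, a minimal TypeScript sketch (an illustration only, not the console's actual code; the interface shapes are trimmed-down assumptions) of how the failure in step 2 shows up on the DeploymentConfig object itself, via the standard Progressing status condition:

// Minimal shapes for just the fields used below; the real DeploymentConfig
// object carries many more fields. These interface names are illustrative.
interface K8sCondition {
  type: string;
  status: 'True' | 'False' | 'Unknown';
  reason?: string;
}

interface DeploymentConfig {
  spec: { replicas: number };
  status: {
    updatedReplicas?: number;
    availableReplicas?: number;
    conditions?: K8sCondition[];
  };
}

// A failed rollout surfaces as the Progressing condition going False, typically
// with reason ProgressDeadlineExceeded once timeoutSeconds / activeDeadlineSeconds
// expire, as with the faildc example above.
export const rolloutFailed = (dc: DeploymentConfig): boolean => {
  const progressing = dc.status.conditions?.find((c) => c.type === 'Progressing');
  return progressing?.status === 'False';
};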

Actual results:
3.The "Status" field was always "Updating" for hours.

Expected results:
3. Users would be confused about what is updating all the time. It would be better to show the correct status, e.g. "Up to date" or "Failed" (see the status-mapping sketch below).
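
Building on the sketch above, a hedged example of mapping a DeploymentConfig to the display statuses suggested here; this is an assumption for illustration, not necessarily how PR 8463 implements it:

// Sketch: map the DeploymentConfig to the statuses named in the expected results.
// Falls back to 'Updating' only while replicas are still being rolled out.
export const dcDisplayStatus = (
  dc: DeploymentConfig,
): 'Failed' | 'Updating' | 'Up to date' => {
  if (rolloutFailed(dc)) {
    return 'Failed';
  }
  const desired = dc.spec.replicas;
  const updated = dc.status.updatedReplicas ?? 0;
  const available = dc.status.availableReplicas ?? 0;
  return updated === desired && available === desired ? 'Up to date' : 'Updating';
};

With the faildc example from the reproduction steps, rolloutFailed would return true once the 20-second timeout expires, so the page would show "Failed" instead of "Updating".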

Additional info:
I noticed this status while checking https://issues.redhat.com/browse/ODC-5594, in which I need to deal with dc in "Active" and "Not active" status, but I cannot tell the exact meaning of the "Updating" status on the dc detail page.

Comment 1 Yanping Zhang 2021-03-24 10:23:01 UTC
Created attachment 1765886 [details]
rc-status

Comment 3 Yanping Zhang 2021-03-30 03:16:43 UTC
The fix is not contained in payload 4.8.0-0.nightly-2021-03-26-054333; waiting for a new payload that contains the fix.

Comment 4 Yanping Zhang 2021-03-31 05:46:54 UTC
Checked on an OCP 4.8 cluster with payload 4.8.0-0.nightly-2021-03-30-160509. Created a dc with a wrong image so that the deployment failed, then checked "Status" on the dc detail page; it shows "Failed". The bug is fixed.

Comment 7 errata-xmlrpc 2021-07-27 22:55:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

