Description of problem:

For node debug terminals opened via the web console, the pods and the corresponding projects do not get deleted.

# oc get pods -A | grep debug
openshift-debug-node-wrcww   ip-10-130-82-35.us-west-2.compute.internal-debug   0/1   Completed

Version-Release number of selected component (if applicable):
4.7.40

How reproducible:
For the affected customer the exact sequence to reproduce is not known, as multiple teams use the cluster. I can still reproduce it by opening a debug terminal for a node and then closing the browser tab without navigating to any other page, even on 4.10. However, the exact reproduction steps for the customer are unknown and may be something other than this.

Actual results:
The pod/project does not get deleted.

Expected results:
The pod/project should be deleted.

Additional info:
A similar problem was reported and fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1947430. As per the pull request, the changes appear to have been made to the oc CLI. What about the web console? The errata for that bug was released for 4.7.8.
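Until the console cleans these up itself, the leftover debug projects can be removed manually with oc. A minimal cleanup sketch, assuming cluster-admin access; only the openshift-debug-node-* naming and the example project name come from the output above, while the grep/xargs batch delete is an illustrative assumption rather than a supported procedure:

# List leftover node-debug pods together with their projects.
oc get pods -A | grep openshift-debug

# Delete one leftover project (name taken from the output above);
# removing the project also removes the Completed debug pod inside it.
oc delete project openshift-debug-node-wrcww

# Or batch-delete every leftover node-debug project in one go.
oc get projects -o name | grep 'openshift-debug-node-' | xargs -r oc delete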
Thanks for reporting the issue. Given its severity of `low` and the fact that 4.7 is in maintenance support, I would ask you to reproduce this issue on a supported cluster (4.9+). Otherwise I am closing this bug, as "During the Maintenance Support phase, qualified Critical and Important Security Advisories (RHSAs) and Urgent and Selected High Priority Bug Fix Advisories (RHBAs) may be released as they become available. Other Bug Fix (RHBA) and Enhancement (RHEA) Advisories may be released at Red Hat's discretion, but should not be expected." per https://access.redhat.com/support/policy/updates/openshift.
Reproducible on 4.10:

# oc get pods -n openshift-debug-node-z8fxv
NAME                                               READY   STATUS      RESTARTS   AGE
ip-10-0-140-89.ap-south-1.compute.internal-debug   0/1     Completed   0          22h

To reproduce the above, a terminal session was launched from the console and the browser tab was then closed without navigating anywhere else.
Hi zherman, could you help check whether the reproduce/verification steps below work in your environment? I failed to verify this bug with payload 4.11.0-0.nightly-2022-06-15-222801 (snapshot attached for reference).

Verification steps:

1. Create 2-5 pods that end up in a CrashLoopBackOff error state (sample below; a CLI sketch for creating them follows this comment):

kind: Pod
apiVersion: v1
metadata:
  name: crash-pod
  labels:
    app: test
    deploymentconfig: nodejs-ex-git
spec:
  containers:
    - name: crash-app
      image: quay.io/openshifttest/crashpod
  restartPolicy: Always

2. Without closing the Pods page, open the debug container in another browser tab and wait for a while.
3. Go back to the Workloads - Pods page and close the debug tab.
4. Verify whether the debug container has been deleted.

Result:
4. The debug container can still be found on the pod list page, in the Running state.
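For step 1 above, a minimal CLI sketch for stamping out a few copies of the sample manifest; the crash-pod.yaml file name, the crash-pod-test project, and the sed/for loop are illustrative assumptions, and only the pod manifest itself comes from the comment above:

# Assumes the sample manifest above is saved locally as crash-pod.yaml.
oc new-project crash-pod-test

# Create three copies with distinct metadata names so they can coexist.
for i in 1 2 3; do
  sed "s/name: crash-pod/name: crash-pod-$i/" crash-pod.yaml | oc apply -f -
done

# The pods should settle into CrashLoopBackOff.
oc get pods

# After step 3 (closing the debug tab), check whether a leftover debug pod
# is still listed; per the result above it remains in the Running state.
oc get pods | grep debug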
Hi @gvudinh, the issue mentioned in comment 7 is not entirely fixed; it can still be reproduced on payload 4.12.0-0.nightly-2022-07-20-030220.
OpenShift has moved to Jira for its defect tracking! This bug can now be found in the OCPBUGS project in Jira. https://issues.redhat.com/browse/OCPBUGS-9286