Description of problem (please be as detailed as possible and provide log snippets):

Updating graphDataImage in updateservice.spec does not get the updateservice pod redeployed.

# ./oc get updateservice -ojson|jq .items[].metadata.generation
2
# ./oc get updateservice -ojson|jq .items[].spec
{
  "foo": "bar",
  "graphDataImage": "jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/rh-osbs/cincinnati-graph-data-container:1.1",
  "releases": "jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp-release",
  "replicas": 1
}
# ./oc get deployment test -oyaml|grep image:|grep graph
      - image: jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/rh-osbs/cincinnati-graph-data-container:1.0
# ./oc get po test-65df76bf76-x79t7 -oyaml|grep image:|grep graph
  - image: jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/rh-osbs/cincinnati-graph-data-container:1.0
    image: jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/rh-osbs/cincinnati-graph-data-container:1.0

Version of all relevant components (if applicable):
OSUS operator image: v4.6.0-6
Bundle image: v1.0-21
Operand image: v4.6.0-8

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?
Delete the updateservice instance and re-create a new instance.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
Always.

Can this issue be reproduced from the UI?
Yes.

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Create an updateservice instance with graph-data image v1.0:
...
spec:
  foo: bar
  graphDataImage: >-
    jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/rh-osbs/cincinnati-graph-data-container:1.0
  releases: 'jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp-release'
  replicas: 1
...
2. Push graph-data image v1.1 to the local registry to update channel.yaml.
3. Update the updateservice resource from the web console or CLI to point graphDataImage to v1.1 (a sketch of the CLI step is shown after this report).

Actual results:
The updateservice pod is not re-deployed.

Expected results:
The new OSUS pod should be re-created with the new image automatically.

Additional info:
The same problem also occurs when correcting a wrong repo/image previously specified in the updateservice.
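A minimal sketch of step 3 from the CLI, assuming the UpdateService instance is named test (matching the deployment name in the output above) and is in the current namespace:

# ./oc patch updateservice test --type merge \
    -p '{"spec":{"graphDataImage":"jliu-46.mirror-registry.qe.gcp.devcluster.openshift.com:5000/rh-osbs/cincinnati-graph-data-container:1.1"}}'

Equivalently, edit the resource with "./oc edit updateservice test" and change the graphDataImage field; spec generation increments either way, as shown above.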
@liujia Compared to our dev preview release, is this a regression?
(In reply to Lalatendu Mohanty from comment #1)
> @liujia Compared to our dev preview release, is this a regression?

I'm not sure about the result against our dev preview release; it was tested by the ACM team.
Due to previous bundle image build issues, QE's testing was blocked for a long time. Now that this is fixed in v1.0.21, our testing against the operator and operand actually started only in recent days. We encountered several issues during the past two days, some of them by accident, so I also added the "needtestcases" keyword as a reminder for QE to add test cases later. I don't think all of these issues are new in this GA version; they may also exist in the tech preview version. But since the previous version was tech preview and out of the OTA team's control, QE did not run extra tests against it. This is the first version of OSUS as a product, so QE will focus on the current version to help find and track existing issues before release. That said, I don't think all of these bugs need to be fixed in the first released version.
This is not a blocker for the first release, and we need to add this bug to the known issues. Hence pushing this to 4.8. Also, restarting OSUS will not impact the availability of OpenShift. I am changing the severity to medium.
@jiajliu Are you saying that without a restart of the OSUS pod, the new information in the graph-data image does not get reflected in the graph?
Yes, updating the graph-data image in the updateservice resource does not get the OSUS deployment/pod updated, and the pod is not restarted either, so it still uses the old graph-data image, which does not include the latest nodes/edges. The workaround is to delete the old updateservice instance and re-create a new one with the new graph-data image (a sketch of the workaround is shown below).
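A minimal sketch of the workaround, assuming the instance is named test (the instance name and manifest file name are placeholders):

# ./oc delete updateservice test
# ./oc create -f updateservice.yaml
(updateservice.yaml is the manifest originally used to create the instance, edited so graphDataImage points to the new tag)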
*** Bug 2009651 has been marked as a duplicate of this bug. ***
*** Bug 2030619 has been marked as a duplicate of this bug. ***
Moving back to NEW now that the issue that #133 fixed has been moved over to bug 2009651.
(In reply to Himanshu from comment #15)
> @jack.ottofaro Seeking update on this.

This is in progress. Adding behavior such that updates to graphDataImage are handled the same as updates to the other currently supported attributes was straightforward. The issue is that none of the updates appear to be getting handled correctly: rather than the existing operand pod simply being restarted, updates result in an additional operand being created, which is not correct.
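One way to confirm whether an update was propagated is to compare the UpdateService spec with the graph-data image in the operand Deployment. A sketch, assuming the instance and Deployment are both named test as in the original report (the init container index is an assumption):

# oc get updateservice test -o jsonpath='{.spec.graphDataImage}'
# oc get deployment test -o jsonpath='{.spec.template.spec.initContainers[0].image}'

If the two values still differ after editing the resource, the operator has not reconciled the change into the Deployment.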
(In reply to Himanshu from comment #17)

Nothing is required from the customer, but I do have an update. I believe the general update issue I mentioned is not a real issue but the result of faulty testing: in my test environment the ReplicaSet was never reaching the ready state, so, since the deployment strategy is RollingUpdate, the Deployment creates a new ReplicaSet but never scales down the old one, because the new one also never becomes ready. Bottom line: I have a fix for the original issue described in this bug; I just need to do some more testing (see the rollout checks sketched below).
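The ReplicaSet behavior described above can be checked directly; a sketch, assuming the operand Deployment is named test:

# oc get replicaset
# oc rollout status deployment/test
# oc get deployment test -o jsonpath='{.spec.strategy.type}'

With RollingUpdate, a new ReplicaSet that never becomes ready leaves the old one scaled up, so two ReplicaSets for the same Deployment remain after the update.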
Verifying on cincinnati-container-v5.0.1-3, cincinnati-operator-container-v5.0.1-3 and cincinnati-operator-bundle-container-v5.0.1-1.

1. Install OSUS using quay.io/openshifttest/graph-data:5.0.1 as the graph data image:

# oc get updateservice -ojson|jq .items[].spec
{
  "graphDataImage": "quay.io/openshifttest/graph-data:5.0.1",
  "releases": "quay.io/openshifttest/ocp4/openshift4-release-images",
  "replicas": 1
}
# oc get pod
NAME                                      READY   STATUS    RESTARTS   AGE
sample-856879d565-vrmrl                   2/2     Running   0          27m
updateservice-operator-85758c57bb-977v5   1/1     Running   0          33m

2. Update the graph data image to 5.0.1-1:

# oc edit updateservice
Vim: Warning: Output is not to a terminal
updateservice.updateservice.operator.openshift.io/sample edited
# oc get updateservice -ojson|jq .items[].spec
{
  "graphDataImage": "quay.io/openshifttest/graph-data:5.0.1-1",
  "releases": "quay.io/openshifttest/ocp4/openshift4-release-images",
  "replicas": 1
}
# oc get pod
NAME                                      READY   STATUS    RESTARTS   AGE
sample-547b879c59-5r4zp                   2/2     Running   0          28s
updateservice-operator-85758c57bb-977v5   1/1     Running   0          37m
# oc get pod sample-547b879c59-5r4zp -ojson | jq .status.initContainerStatuses
[
  {
    "containerID": "cri-o://63833255bd03193aafb37918923fc9fde253cf3257b0d7d3c8968d34caa5fa66",
    "image": "quay.io/openshifttest/graph-data:5.0.1-1",
    "imageID": "quay.io/openshifttest/graph-data@sha256:8c17430eddfee52c9f64d9be0b18b46d2b3078038887925a0d2110df3f01a917",
    "lastState": {},
    "name": "graph-data",
    "ready": true,
    "restartCount": 0,
    "state": {
      "terminated": {
        "containerID": "cri-o://63833255bd03193aafb37918923fc9fde253cf3257b0d7d3c8968d34caa5fa66",
        "exitCode": 0,
        "finishedAt": "2023-02-06T07:40:40Z",
        "reason": "Completed",
        "startedAt": "2023-02-06T07:40:40Z"
      }
    }
  }
]

The UpdateService pod is updated with the new graph data image. It looks good to me. Moving it to verified state.
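The same init container image can also be read with a single jsonpath query instead of dumping the full status; a sketch using the pod name from the output above:

# oc get pod sample-547b879c59-5r4zp -o jsonpath='{.status.initContainerStatuses[0].image}'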
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (RHEA: OSUS Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2023:1161