Hi Almen, I checked the cluster you pinged me about and saw the issue, but the cluster is no longer accessible. I tried to reproduce it myself by deploying a bad Helm app subscription and then changing it to a good one, but it seems OK:

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: dev-helmrepo
  namespace: dev
  annotations:
    apps.open-cluster-management.io/reconcile-rate: high
spec:
  type: HelmRepo
  pathname: https://charts.helm.sh/stable/
  insecureSkipVerify: true
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: nginx-pr
spec:
  clusterReplicas: 10
---
apiVersion: apps.open-cluster-management.io/v1
kind: Subscription
metadata:
  name: ingress
spec:
  channel: default/dev-helmrepo
  name: ingress
  placement:
    placementRef:
      kind: PlacementRule
      name: nginx-pr
  packageFilter:
    version: "0.2.0"

This creates a failed Helm deployment with the following appsubstatus:

statuses:
  packages:
  - apiVersion: apps.open-cluster-management.io/v1
    kind: HelmRelease
    lastUpdateTime: "2022-05-09T16:17:04Z"
    message: 'InstallError unable to build kubernetes objects from release manifest:
      unable to recognize "": no matches for kind "Deployment1" in version "apps/v1"'
    name: ingress-8f723
    namespace: default
    phase: Failed

Then I changed the subscription version from 0.2.0 to 0.1.0, waited a while, and the appsubstatus looks fine:

statuses:
  packages:
  - apiVersion: v1
    kind: ServiceAccount
    lastUpdateTime: "2022-05-09T16:18:04Z"
    name: ingress-8f723
    namespace: default
    phase: Deployed
  - apiVersion: v1
    kind: Service
    lastUpdateTime: "2022-05-09T16:18:04Z"
    name: ingress-8f723
    namespace: default
    phase: Deployed
  - apiVersion: apps/v1
    kind: Deployment
    lastUpdateTime: "2022-05-09T16:18:04Z"
    name: ingress-8f723
    namespace: default
    phase: Deployed
  - apiVersion: apps.open-cluster-management.io/v1
    kind: HelmRelease
    lastUpdateTime: "2022-05-09T16:18:04Z"
    message: 0.1.0
    name: ingress-8f723
    namespace: default
    phase: Deployed

Could you provide the Helm app examples you used in your testing?
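For reference, the repro steps above can be run with something like the following sketch (the filename repro.yaml and the default namespace for the appsubstatus are assumptions, not from the original report):

```shell
# Save the Channel, PlacementRule, and Subscription manifests above into one
# file (filename is hypothetical) and apply them on the hub cluster:
kubectl apply -f repro.yaml

# Watch the generated appsubstatus for the failing HelmRelease package
# (expect phase: Failed with the InstallError message while version 0.2.0
# is subscribed, then Deployed after switching to 0.1.0):
kubectl get appsubstatus -n default -o yaml
```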
Thanks.
Almen, is it possible you didn't wait long enough for the reconcile period? In my channel above I used the annotation apps.open-cluster-management.io/reconcile-rate: high, and after waiting about 2 minutes I was able to see that the HelmRelease CR had been updated to pull from the "good" chart. After you update your appsub to pull from the good Helm chart, you can verify by checking the HelmRelease CR to make sure it's actually downloading from the good chart. It might take a while if you don't use the high reconcile-rate.
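One way to do that check is to inspect the HelmRelease CR directly. This is a sketch: the CR name ingress-8f723 and namespace default are taken from the status output above, and the exact field layout of the HelmRelease spec may differ between versions.

```shell
# Dump the HelmRelease CR and confirm it now references the "good" chart:
kubectl get helmrelease ingress-8f723 -n default -o yaml
# In the output, check the chart name/version and the repo source URL in
# the spec; they should match the chart the updated appsub points at.
```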
As discussed with Almen, I was able to reproduce it by going from https://github.com/stolostron/application-lifecycle-samples/tree/main/mortgage-fail-helm to https://github.com/stolostron/application-lifecycle-samples/tree/main/helloworld-helm
Verified the bug fix on quay.io:443/acm-d/acm-custom-registry:2.5.0-DOWNSTREAM-2022-05-10-21-30-52. Switched from mortgage-fail-helm to helloworld-helm for https://raw.githubusercontent.com/stolostron/application-lifecycle-samples/main. All the resources were deployed and there was no error.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:4956
*** Bug 2101934 has been marked as a duplicate of this bug. ***