Description of problem:

Consider a scenario where a bundle is created with multiple service account declarations of the same name: one from a service_account.yaml file and one via the serviceAccountName field in a resource spec in the CSV. The operator fails the upgrade process with the RequirementsNotMet error: "Service account is owned by another ClusterServiceVersion".

Requirement status:

  Requirement Status:
    Group:    operators.coreos.com
    Kind:     ClusterServiceVersion
    Message:  CSV minKubeVersion (1.21.0) less than server version (v1.21.1+f36aa36)
    Name:     windows-machine-config-operator.v3.0.0
    Status:   Present
    Version:  v1alpha1
    Group:
    Kind:     ServiceAccount
    Message:  Service account is owned by another ClusterServiceVersion
    Name:     windows-machine-config-operator
    Status:   PresentNotSatisfied
    Version:  v1

Events:
  Type    Reason               Age                From                        Message
  ----    ------               ---                ----                        -------
  Normal  RequirementsUnknown  33m (x2 over 33m)  operator-lifecycle-manager  requirements not yet checked
  Normal  RequirementsNotMet   33m (x2 over 33m)  operator-lifecycle-manager  one or more requirements couldn't be found

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Same as the description.

Actual results:
Bundle creation/validation does not fail on a bundle re-declaring a service account with the same name.

Expected results:
Bundle creation/validation should fail on a bundle re-declaring a service account with the same name.

Additional info:
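The duplicate declaration described above can be sketched as follows. This is an illustrative reproduction, not taken from the reporter's bundle: the filenames and the heavily abbreviated CSV fragment are assumptions based on the default operator-sdk bundle layout.

```shell
# Sketch of a bundle that declares the same service account twice:
# once as a standalone manifest, once via the CSV's serviceAccountName.
mkdir -p bundle/manifests

# Standalone ServiceAccount manifest (filename is an assumption):
cat > bundle/manifests/windows-machine-config-operator_v1_serviceaccount.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: windows-machine-config-operator
EOF

# CSV fragment (abbreviated; only the relevant field is shown):
cat > bundle/manifests/wmco.clusterserviceversion.yaml <<'EOF'
spec:
  install:
    spec:
      deployments:
      - spec:
          template:
            spec:
              serviceAccountName: windows-machine-config-operator
EOF

# The same name now appears in both places, which is what triggers the
# "Service account is owned by another ClusterServiceVersion" conflict:
grep -h 'name: windows-machine-config-operator' bundle/manifests/*serviceaccount.yaml
grep -h 'serviceAccountName:' bundle/manifests/*.clusterserviceversion.yaml
```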
*** Bug 1989689 has been marked as a duplicate of this bug. ***
If validation of the bundle should fail then would this not mean that the operator-sdk layout of resources becomes problematic? The operator-sdk layout usually has a service-account.yaml for the operator located in either manager or rbac. Unless a specific line is added to the Makefile when running `make bundle` to exclude this file, it would get included. Is this now the preferred methodology?
Yeah, we'll have to update the operator-sdk tooling to avoid generating this bundle during the `operator-sdk generate bundle` command: https://github.com/operator-framework/operator-sdk/pull/5120/.
(In reply to tflannag from comment #4) > Yeah, we'll have to update the operator-sdk tooling to avoid generating this > bundle during the `operator-sdk generate bundle` command: > https://github.com/operator-framework/operator-sdk/pull/5120/. Will it not still break for existing projects using older operator-sdk versions? We've worked around the issue (by editing the Makefile) in syndesis/fuse-online & camel-k as we are only up to operator-sdk 1.5.0 and 1.4.0 respectively.
Reassigning this bz to the operator-sdk, where the validation code is actually called.
This was fixed in operator-sdk v1.11.0 with the PR mentioned in comment #4.
(In reply to phantomjinx from comment #5)
> (In reply to tflannag from comment #4)
> > Yeah, we'll have to update the operator-sdk tooling to avoid generating this
> > bundle during the `operator-sdk generate bundle` command:
> > https://github.com/operator-framework/operator-sdk/pull/5120/.
>
> Will it not still break for existing projects using older operator-sdk
> versions?

It will likely still happen in older releases. We usually don't backport fixes that far back.

> We've worked around the issue (by editing the Makefile) in
> syndesis/fuse-online & camel-k as we are only up to operator-sdk 1.5.0 and
> 1.4.0 respectively.

The workaround you mentioned, editing the Makefile, is the proper workaround for this issue.
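For projects stuck on older operator-sdk releases, the Makefile workaround mentioned above amounts to dropping the generated standalone ServiceAccount manifest after the bundle is generated. A minimal sketch, assuming the default operator-sdk filename pattern for generated ServiceAccount manifests (the exact filename varies by project):

```shell
# Workaround sketch for pre-fix operator-sdk versions: after `make bundle`,
# remove the standalone ServiceAccount manifest so the service account is
# declared only via the CSV's serviceAccountName field.
mkdir -p bundle/manifests
# Stand-in for the file that `operator-sdk generate bundle` would emit
# (name is an assumption based on the default layout):
touch bundle/manifests/controller-manager_v1_serviceaccount.yaml

# The actual workaround step, typically appended to the `bundle` target:
rm -f bundle/manifests/*_v1_serviceaccount.yaml

# With a real bundle you would then re-validate:
#   operator-sdk bundle validate ./bundle
```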
Also fixed in v1.10.1, v1.9.2 & v1.8.2 upstream.
Fixed by v1.10.1-ocp
Verified with downstream v1.10.1-ocp. No "controller-manager_v1_serviceaccount.yaml" showed.

operator-sdk version: "v1.10.1-ocp", commit: "73679fc2fa2b1f8f67c57105d7cc5f16013946c6", kubernetes version: "v1.21", go version: "go1.16.5", GOOS: "linux", GOARCH: "amd64"

$ operator-sdk init --plugins ansible.sdk.operatorframework.io/v1 --domain example.com --group cache --version v1alpha1 --kind Memcached --generate-playbook
Writing kustomize manifests for you to edit...
Creating the API:
$ operator-sdk create api --group cache --version v1alpha1 --kind Memcached --generate-playbook
Writing kustomize manifests for you to edit...

$ operator-sdk generate bundle --deploy-dir=config --crds-dir=config/crds --version=0.0.1
Generating bundle version 0.0.1
Generating bundle manifests
Building a ClusterServiceVersion without an existing base
Bundle manifests generated successfully in bundle
Generating bundle metadata
INFO[0000] Creating bundle.Dockerfile
INFO[0000] Creating bundle/metadata/annotations.yaml
INFO[0000] Bundle metadata generated suceessfully

$ tree bundle
bundle
├── manifests
│   ├── build.clusterserviceversion.yaml
│   ├── cache.example.com_memcacheds.yaml
│   ├── controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
│   ├── controller-manager-metrics-service_v1_service.yaml
│   ├── memcached-editor-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
│   ├── memcached-viewer-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
│   └── metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
├── metadata
│   └── annotations.yaml
└── tests
    └── scorecard
        └── config.yaml

4 directories, 9 files
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:3759