Bug 1990089
| Summary: | Bundle validation does not fail for a bundle having multiple service account declaration with same name | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Mansi Kulkarni <mankulka> |
| Component: | Operator SDK | Assignee: | Jesus M. Rodriguez <jesusr> |
| Status: | CLOSED ERRATA | QA Contact: | Cuiping HUO <chuo> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.9 | CC: | aos-bugs, parichar, tflannag |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | 4.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-10-18 17:44:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1992405 | | |
| Bug Blocks: | | | |
Description
Mansi Kulkarni, 2021-08-04 17:54:12 UTC
*** Bug 1989689 has been marked as a duplicate of this bug. ***

---

If validation of the bundle should fail, would this not mean that the operator-sdk layout of resources becomes problematic? The operator-sdk layout usually has a service-account.yaml for the operator, located in either manager or rbac. Unless a specific line is added to the Makefile when running `make bundle` to exclude this file, it would get included. Is this now the preferred methodology?

---

Yeah, we'll have to update the operator-sdk tooling to avoid generating this bundle during the `operator-sdk generate bundle` command: https://github.com/operator-framework/operator-sdk/pull/5120/.

---

(In reply to tflannag from comment #4)
> Yeah, we'll have to update the operator-sdk tooling to avoid generating this
> bundle during the `operator-sdk generate bundle` command:
> https://github.com/operator-framework/operator-sdk/pull/5120/.

Will it not still break for existing projects using older operator-sdk versions? We've worked around the issue (by editing the Makefile) in syndesis/fuse-online & camel-k, as we are only up to operator-sdk 1.5.0 and 1.4.0 respectively.

---

Reassigning this bz to the operator-sdk, where the validation code is actually called. This was fixed in operator-sdk v1.11.0 with the PR mentioned in comment #4.

---

(In reply to phantomjinx from comment #5)
> (In reply to tflannag from comment #4)
> > Yeah, we'll have to update the operator-sdk tooling to avoid generating this
> > bundle during the `operator-sdk generate bundle` command:
> > https://github.com/operator-framework/operator-sdk/pull/5120/.
>
> Will it not still break for existing projects using older operator-sdk
> versions?

It will likely still happen in older releases. We usually don't backport bugs that far back.

> We've worked around the issue (by editing the Makefile) in
> syndesis/fuse-online & camel-k as we are only up to operator-sdk 1.5.0 and
> 1.4.0 respectively.
The workaround you mentioned, editing the Makefile, is the proper workaround for this issue. Also fixed in v1.10.1, v1.9.2 & v1.8.2 upstream.

---

Fixed by v1.10.1-ocp

---

Verified with downstream v1.10.1-ocp. No "controller-manager_v1_serviceaccount.yaml" file was generated.

```console
operator-sdk version: "v1.10.1-ocp", commit: "73679fc2fa2b1f8f67c57105d7cc5f16013946c6", kubernetes version: "v1.21", go version: "go1.16.5", GOOS: "linux", GOARCH: "amd64"

$ operator-sdk init --plugins ansible.sdk.operatorframework.io/v1 --domain example.com --group cache --version v1alpha1 --kind Memcached --generate-playbook
Writing kustomize manifests for you to edit...

Creating the API:
$ operator-sdk create api --group cache --version v1alpha1 --kind Memcached --generate-playbook
Writing kustomize manifests for you to edit...

$ operator-sdk generate bundle --deploy-dir=config --crds-dir=config/crds --version=0.0.1
Generating bundle version 0.0.1
Generating bundle manifests
Building a ClusterServiceVersion without an existing base
Bundle manifests generated successfully in bundle
Generating bundle metadata
INFO[0000] Creating bundle.Dockerfile
INFO[0000] Creating bundle/metadata/annotations.yaml
INFO[0000] Bundle metadata generated suceessfully

$ tree bundle
bundle
├── manifests
│   ├── build.clusterserviceversion.yaml
│   ├── cache.example.com_memcacheds.yaml
│   ├── controller-manager-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
│   ├── controller-manager-metrics-service_v1_service.yaml
│   ├── memcached-editor-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
│   ├── memcached-viewer-role_rbac.authorization.k8s.io_v1_clusterrole.yaml
│   └── metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
├── metadata
│   └── annotations.yaml
└── tests
    └── scorecard
        └── config.yaml

4 directories, 9 files
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Moderate: OpenShift Container Platform 4.9.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:3759
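As a footnote on the Makefile workaround discussed in the comments above: the sketch below is a hypothetical illustration, not the actual change from the linked PR or the syndesis/camel-k Makefiles. The sample manifests, the awk filter, and the `controller-manager` name are all assumptions; the idea is simply to drop ServiceAccount documents from the manifest stream before it is piped to `operator-sdk generate bundle`, so the bundle does not end up with a second declaration of the operator's service account.

```shell
#!/bin/sh
# Hypothetical stand-in for the output of `kustomize build config/manifests`;
# in a real project this stream comes from kustomize, not a heredoc.
manifests=$(cat <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: controller-manager
---
apiVersion: v1
kind: Service
metadata:
  name: controller-manager-metrics-service
EOF
)

# Buffer each YAML document and only emit it if it is not a ServiceAccount,
# so the generated bundle does not carry a duplicate service account.
filtered=$(printf '%s\n' "$manifests" | awk '
  /^---$/ { if (buf !~ /kind: ServiceAccount/) printf "%s---\n", buf; buf = ""; next }
  { buf = buf $0 "\n" }
  END { if (buf != "" && buf !~ /kind: ServiceAccount/) printf "%s", buf }
')

printf '%s\n' "$filtered"
```

In a project's Makefile, a filter like this would sit in the `bundle` target between the `kustomize build config/manifests` step and the `operator-sdk generate bundle` step. Upgrading to operator-sdk v1.11.0 (or the v1.10.1/v1.9.2/v1.8.2 backports named above) removes the need for any such filtering.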