Bug 1826446 - Admission Webhook Configuration names should be unique
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Alexander Greene
QA Contact: yhui
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-04-21 17:31 UTC by Alexander Greene
Modified: 2020-07-13 17:30 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: CSV authors could specify a fixed name for webhooks shipped with their operators. Consequence: a namespaced operator installed in one namespace would overwrite the webhook configuration shipped by the same operator in another namespace. Fix: webhook configurations are now generated with unique names. Result: each namespaced operator creates its own webhook configuration with a unique name.
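The fix works by switching from a fixed `metadata.name` to server-side name generation via `metadata.generateName` (visible as `generateName: object.auditor.com-` in the verification transcript below), so each installation gets a distinct suffix. As a rough illustration of that behavior, here is a minimal Go sketch of a generateName-style random suffix; the `generateName` helper and its 5-character, reduced-alphabet suffix mimic the API server's behavior but are not OLM's actual code.

```go
package main

import (
	"fmt"
	"math/rand"
)

// alphanums approximates the lowercase alphanumeric alphabet the
// Kubernetes API server draws generateName suffixes from.
const alphanums = "bcdfghjklmnpqrstvwxz2456789"

// generateName appends a random 5-character suffix to the base,
// the same shape the API server produces for metadata.generateName.
func generateName(base string) string {
	suffix := make([]byte, 5)
	for i := range suffix {
		suffix[i] = alphanums[rand.Intn(len(alphanums))]
	}
	return base + string(suffix)
}

func main() {
	// Two installations of the same CSV in different namespaces now get
	// distinct ValidatingWebhookConfiguration names, in the transcript
	// below: object.auditor.com-jt77k and object.auditor.com-m27sk.
	fmt.Println(generateName("object.auditor.com-"))
	fmt.Println(generateName("object.auditor.com-"))
}
```

Because the name is generated per object rather than taken from the CSV, a second installation can no longer collide with (and overwrite) the first.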
Clone Of:
Environment:
Last Closed: 2020-07-13 17:29:56 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github operator-framework operator-lifecycle-manager pull 1489 0 None closed Bug 1826446: (fix) Admission Webhook names must be unique 2020-11-09 09:13:29 UTC
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:30:10 UTC

Description Alexander Greene 2020-04-21 17:31:19 UTC
Description of problem:
If the same CSV is installed in different OperatorGroups, a single webhook configuration is created instead of the two that are expected.

Version-Release number of selected component (if applicable):
4.5 only.

How reproducible:
Always

Steps to Reproduce:
1. Install OpenShift 4.5
2. Install an operator that includes a validating webhook in one OperatorGroup
3. Install the same operator in a second OperatorGroup
4. Check existing validatingwebhookconfigurations.

Actual results:
Only 1 validatingwebhookconfiguration exists.


Expected results:
2 validatingwebhookconfigurations exist.


Additional info:

Comment 3 yhui 2020-05-08 06:36:31 UTC
Version-Release number of selected component (if applicable):
[root@preserve-olm-env bug-1826446]# oc version
Server Version: 4.5.0-0.nightly-2020-05-07-144853
Kubernetes Version: v1.18.0-rc.1
[root@preserve-olm-env bug-1826446]# oc exec catalog-operator-6c5576474-hb276 -n openshift-operator-lifecycle-manager -- olm --version
OLM version: 0.14.2
git commit: 6544650f2bff3d58b60af24e4eab2b9d4cb06b1b


Steps to test:
1. Create the first namespace and OperatorGroup, then install the operator in that namespace

[root@preserve-olm-env bug-1826446]# oc new-project wk-ns1
[root@preserve-olm-env bug-1826446]# cat og-ns1.yaml 
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: wk-ns1-operators1
  namespace: wk-ns1
spec:
  targetNamespaces:
    - wk-ns1
[root@preserve-olm-env bug-1826446]# oc apply -f og-ns1.yaml
operatorgroup.operators.coreos.com/wk-ns1-operators1 created

[root@preserve-olm-env bug-1826446]# cat csv-ns1.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    alm-examples: '[{"apiVersion":"serving.knative.dev/v1alpha1","kind":"KnativeServing","metadata":{"name":"knative-serving"},"spec":{"config":{"autoscaler":{"container-concurrency-target-default":"100","container-concurrency-target-percentage":"1.0","enable-scale-to-zero":"true","max-scale-up-rate":"10","panic-threshold-percentage":"200.0","panic-window":"6s","panic-window-percentage":"10.0","scale-to-zero-grace-period":"30s","stable-window":"60s","tick-interval":"2s"},"defaults":{"revision-cpu-limit":"1000m","revision-cpu-request":"400m","revision-memory-limit":"200M","revision-memory-request":"100M","revision-timeout-seconds":"300"},"deployment":{"registriesSkippingTagResolving":"ko.local,dev.local"},"gc":{"stale-revision-create-delay":"24h","stale-revision-lastpinned-debounce":"5h","stale-revision-minimum-generations":"1","stale-revision-timeout":"15h"},"logging":{"loglevel.activator":"info","loglevel.autoscaler":"info","loglevel.controller":"info","loglevel.queueproxy":"info","loglevel.webhook":"info"},"observability":{"logging.enable-var-log-collection":"false","metrics.backend-destination":"prometheus"},"tracing":{"enable":"false","sample-rate":"0.1"}}}}]'
    capabilities: Seamless Upgrades
    categories: Networking,Integration & Delivery,Cloud Provider,Developer Tools
    certified: "false"
    containerImage: quay.io/openshift-knative/serverless-operator:v1.0.0
...

[root@preserve-olm-env bug-1826446]# oc apply -f csv-ns1.yaml
clusterserviceversion.operators.coreos.com/webhook.v1.0.0 created


2. Verify the validatingwebhookconfiguration has been created.

[root@preserve-olm-env bug-1826446]# oc get validatingwebhookconfiguration
NAME                       WEBHOOKS   AGE
autoscaling.openshift.io   2          4h18m
multus.openshift.io        1          4h26m
object.auditor.com-jt77k   1          29s


3. Create the second namespace and OperatorGroup, then install the same operator in that namespace

[root@preserve-olm-env bug-1826446]# oc new-project wk-ns2
[root@preserve-olm-env bug-1826446]# cat og-ns2.yaml
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: wk-ns2-operators1
  namespace: wk-ns2
spec:
  targetNamespaces:
    - wk-ns2
[root@preserve-olm-env bug-1826446]# oc apply -f og-ns2.yaml
operatorgroup.operators.coreos.com/wk-ns2-operators1 created
[root@preserve-olm-env bug-1826446]# oc apply -f csv-ns2.yaml
clusterserviceversion.operators.coreos.com/webhook.v1.0.0 created

4. Verify the second validatingwebhookconfiguration has been created.
[root@preserve-olm-env bug-1826446]# oc get validatingwebhookconfiguration
NAME                       WEBHOOKS   AGE
autoscaling.openshift.io   2          4h20m
multus.openshift.io        1          4h29m
object.auditor.com-jt77k   1          3m6s
object.auditor.com-m27sk   1          12s
[root@preserve-olm-env bug-1826446]# oc get validatingwebhookconfiguration object.auditor.com-jt77k -o yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  creationTimestamp: "2020-05-08T06:00:08Z"
  generateName: object.auditor.com-
  generation: 1
  labels:
    olm.owner: webhook.v1.0.0
    olm.owner.kind: ClusterServiceVersion
    olm.owner.namespace: wk-ns1
    webhookDescriptionGenerateName: object.auditor.com
  managedFields:
  - apiVersion: admissionregistration.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:olm.owner: {}
          f:olm.owner.kind: {}
          f:olm.owner.namespace: {}
          f:webhookDescriptionGenerateName: {}
      f:webhooks:
        .: {}
        k:{"name":"object.auditor.com"}:
          .: {}
          f:admissionReviewVersions: {}
          f:clientConfig:
            .: {}
            f:caBundle: {}
            f:service:
              .: {}
              f:name: {}
              f:namespace: {}
              f:path: {}
              f:port: {}
          f:failurePolicy: {}
          f:matchPolicy: {}
          f:name: {}
          f:namespaceSelector:
            .: {}
            f:matchLabels:
              .: {}
              f:olm.operatorgroup.uid/395556d5-2ccd-48c2-b9d4-b0e035588d6b: {}
          f:objectSelector: {}
          f:rules: {}
          f:sideEffects: {}
          f:timeoutSeconds: {}
    manager: olm
    operation: Update
    time: "2020-05-08T06:00:08Z"
  name: object.auditor.com-jt77k
  resourceVersion: "99131"
  selfLink: /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/object.auditor.com-jt77k
  uid: 097c9c1f-e631-455f-b325-2e31152d1699
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJhRENDQVE2Z0F3SUJBZ0lJVlFsVzIwQXhIWTh3Q2dZSUtvWkl6ajBFQXdJd0dERVdNQlFHQTFVRUNoTU4KVW1Wa0lFaGhkQ3dnU1c1akxqQWVGdzB5TURBMU1EZ3dOakF3TURoYUZ3MHlNakExTURnd05qQXdNRGhhTUJneApGakFVQmdOVkJBb1REVkpsWkNCSVlYUXNJRWx1WXk0d1dUQVRCZ2NxaGtqT1BRSUJCZ2dxaGtqT1BRTUJCd05DCkFBUWRITk9GM3hCcld2ZVNtYUtLbURmN3U3OC92YnRCRWxlblh1aXRkNGNjTWs2eitJR0Y4dEJqWWNPUzVUL2QKU1p4aVZ6WDVWM1BPM2duL09VL3d0aklMbzBJd1FEQU9CZ05WSFE4QkFmOEVCQU1DQW9Rd0hRWURWUjBsQkJZdwpGQVlJS3dZQkJRVUhBd0lHQ0NzR0FRVUZCd01CTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3Q2dZSUtvWkl6ajBFCkF3SURTQUF3UlFJaEFQSERsNklIVVBwb1RBM0FzUHpkSUdaWm5NcHJWVU9pMWJDbzcrUnNvZldnQWlCYnA5QkUKTE9DK2l5Nk1FRW03a0JGY0hFSjZ5dURDR3ZUQzAwTjRDRTlHL2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    service:
      name: object-auditor-webhook-deployment-service
      namespace: wk-ns1
      path: /mutate
      port: 443
  failurePolicy: Ignore
  matchPolicy: Equivalent
  name: object.auditor.com
  namespaceSelector:
    matchLabels:
      olm.operatorgroup.uid/395556d5-2ccd-48c2-b9d4-b0e035588d6b: ""
  objectSelector: {}
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    resources:
    - configmaps
    scope: '*'
  sideEffects: None
  timeoutSeconds: 10
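Note the `namespaceSelector` in the dump above: each generated configuration only fires for namespaces carrying its own OperatorGroup's `olm.operatorgroup.uid/<uid>` label, so the two copies scope cleanly to their own target namespaces. A minimal Go sketch of the `matchLabels` semantics that makes this work (every selector key/value must be present on the namespace; the second UID and the namespace label sets here are hypothetical):

```go
package main

import "fmt"

// matchesSelector implements matchLabels semantics: every key/value pair
// in the selector must be present, with the same value, on the object.
func matchesSelector(selector, labels map[string]string) bool {
	for k, v := range selector {
		val, ok := labels[k]
		if !ok || val != v {
			return false
		}
	}
	return true
}

func main() {
	// Selector from the wk-ns1 webhook configuration above.
	selector := map[string]string{
		"olm.operatorgroup.uid/395556d5-2ccd-48c2-b9d4-b0e035588d6b": "",
	}

	// Illustrative namespace labels: OLM labels each target namespace
	// with its own OperatorGroup's UID (second UID is made up).
	wkNs1 := map[string]string{
		"olm.operatorgroup.uid/395556d5-2ccd-48c2-b9d4-b0e035588d6b": "",
	}
	wkNs2 := map[string]string{
		"olm.operatorgroup.uid/aaaaaaaa-0000-0000-0000-000000000000": "",
	}

	fmt.Println(matchesSelector(selector, wkNs1)) // true: wk-ns1 is targeted
	fmt.Println(matchesSelector(selector, wkNs2)) // false: wk-ns2 is not
}
```

The empty-string label value is significant: a namespace that lacks the key entirely must not match, which is why the sketch checks key presence rather than comparing `labels[k]` directly.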


Results:
2 validatingwebhookconfigurations exist, each with a unique generated name. The fix is verified.

Comment 4 errata-xmlrpc 2020-07-13 17:29:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

