Bug 2104897 - Failed to create Fileintegrity object in a namespace without openshift prefix
Summary: Failed to create Fileintegrity object in a namespace without openshift prefix
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: File Integrity Operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.12.0
Assignee: Matt Rogers
QA Contact: xiyuan
Docs Contact: Jeana Routh
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-07-07 12:24 UTC by xiyuan
Modified: 2022-12-22 21:46 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
* Previously, the File Integrity Operator's deployed templates hard-coded the `openshift-file-integrity` namespace in the permissions for the Operator. When the Operator attempted to create objects in the install namespace, it would fail due to permission issues. With this release, the deployment resources used by OLM are updated to use the correct namespace, fixing the permission issues so that users can install and use the Operator in non-default namespaces. (link:https://bugzilla.redhat.com/show_bug.cgi?id=2104897[*BZ#2104897*])
Clone Of:
Environment:
Last Closed: 2022-08-02 08:17:03 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift file-integrity-operator pull 259 0 None open Bug 2104897: Use ClusterRole and local RoleBinding for daemon permissions 2022-07-08 15:31:04 UTC
Red Hat Product Errata RHBA-2022:5538 0 None None None 2022-08-02 08:17:09 UTC

Description xiyuan 2022-07-07 12:24:38 UTC
Version of components:
FIO v0.1.26-2

Description of the problem:
When installing FIO in a namespace without the openshift prefix
(a namespace other than openshift-file-integrity), the FIO installation succeeds. However, creating a FileIntegrity object fails:
$ oc describe daemonset aide-example-fileintegrity | tail
    Medium:     
    SizeLimit:  <unset>
   config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      example-fileintegrity
    Optional:  false
Events:
  Type     Reason        Age                   From                  Message
  ----     ------        ----                  ----                  -------
  Warning  FailedCreate  79s (x17 over 4m47s)  daemonset-controller  Error creating: pods "aide-example-fileintegrity-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.containers[0].securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1001150000, 1001159999], spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]
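The SCC rejections in that message line up with what the AIDE daemon pod evidently requests. A minimal reconstruction of the relevant pod fields, inferred only from the error text above (not taken from the operator's actual manifests; container and volume names are placeholders):

```yaml
# Reconstructed from the FailedCreate message. Each commented field is
# one of the values the restricted SCCs rejected; together they mean
# the pod can only be admitted under the "privileged" SCC, which the
# daemon's ServiceAccount was not permitted to use.
apiVersion: v1
kind: Pod
metadata:
  generateName: aide-example-fileintegrity-
spec:
  containers:
  - name: daemon
    securityContext:
      runAsUser: 0       # rejected: must be in range [1001150000, 1001159999]
      privileged: true   # rejected: privileged containers are not allowed
  volumes:
  - name: hostroot
    hostPath:            # rejected: hostPath volumes are not allowed
      path: /
```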

Steps to reproduce the issue:
Install FIO with the command below:
$ oc create -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
  name: fio
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
   name: openshift-file-integrity-qbcd
   namespace: fio
spec:
   targetNamespaces:
   - fio
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
   name: file-integrity-operator
   namespace: fio
spec:
   channel: "release-0.1"
   installPlanApproval: Automatic
   name: file-integrity-operator
   source: qe-app-registry
   sourceNamespace: openshift-marketplace
EOF

Create a FileIntegrity object once the installation is done:
$ oc apply -f -<<EOF
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity
  namespace: fio
spec:
  config:
    gracePeriod: 20
    maxBackups: 5
  debug: true

EOF

Actual result:
The pods for the FileIntegrity object fail to be created, with the same FailedCreate SCC error shown in the Description above.

Expected result:
The pods for the FileIntegrity object are created successfully.

Additional info:
There is no such issue when installing FIO in the recommended namespace, openshift-file-integrity.

Comment 1 xiyuan 2022-07-07 12:49:50 UTC
Not sure whether this is related to the info below:
$ oc describe roles file-integrity-daemon
Name:         file-integrity-daemon
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                                         Non-Resource URLs  Resource Names  Verbs
  ---------                                         -----------------  --------------  -----
  events.events.k8s.io                              []                 []              [create update]
  configmaps                                        []                 []              [create]
  events                                            []                 []              [create]
  fileintegrities.fileintegrity.openshift.io        []                 []              [get watch]
  securitycontextconstraints.security.openshift.io  []                 [privileged]    [use]

$ oc get rolebinding file-integrity-daemon -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2022-07-07T12:25:27Z"
  name: file-integrity-daemon
  namespace: fio
  resourceVersion: "252748"
  uid: d9bbfa74-999e-4e9d-8f05-8d9583f7b374
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: file-integrity-daemon
subjects:
- kind: ServiceAccount
  name: file-integrity-daemon
  namespace: openshift-file-integrity
$ oc get sa
NAME                      SECRETS   AGE
builder                   2         22m
default                   2         22m
deployer                  2         22m
file-integrity-daemon     2         22m
file-integrity-operator   2         22m

Comment 2 Matt Rogers 2022-07-07 17:44:39 UTC
I've been able to reproduce this upstream (using a catalog-deploy set to a different namespace). I got the same incorrect rolebinding:

$ oc get rolebindings/file-integrity-daemon -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2022-07-07T17:20:29Z"
  name: file-integrity-daemon
  namespace: fio
  resourceVersion: "63899"
  uid: 83bc94f8-046d-4ee9-a19a-5ba567becff3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: file-integrity-daemon
subjects:
- kind: ServiceAccount
  name: file-integrity-daemon
  namespace: openshift-file-integrity

I'm working on a fix. Setting blocker- because this doesn't impact a released version yet; let's hold off on 0.1.27 and rebuild as 0.1.28 after I merge a fix.

Comment 3 Jian Zhang 2022-07-08 04:03:02 UTC
> I got the same incorrect rolebinding:

Yes, there is a hard-coded SA namespace here: https://github.com/openshift/file-integrity-operator/blob/master/bundle/manifests/file-integrity-daemon_rbac.authorization.k8s.io_v1_rolebinding.yaml#L13
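Given the PR title ("Use ClusterRole and local RoleBinding for daemon permissions"), the shape of the fix is presumably something like the following. This is a sketch inferred from the PR title and the broken binding above, not the actual patch; the resource names are kept, but the exact manifest layout is an assumption:

```yaml
# Daemon permissions move to a ClusterRole, and the RoleBinding is
# created in whatever namespace the operator is installed in, with the
# subject's namespace matching that install namespace instead of the
# hard-coded openshift-file-integrity.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: file-integrity-daemon
  namespace: fio                 # the actual install namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole              # was a namespaced Role
  name: file-integrity-daemon
subjects:
- kind: ServiceAccount
  name: file-integrity-daemon
  namespace: fio                 # was hard-coded to openshift-file-integrity
```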

Comment 4 xiyuan 2022-07-14 07:24:42 UTC
Verification passed. Details below:
$ git log | grep 2104897 -5
Author: openshift-ci[bot] <75433959+openshift-ci[bot]@users.noreply.github.com>
Date:   Mon Jul 11 19:33:05 2022 +0000

    Merge pull request #259 from mrogers950/catalog_perm_ns
    
    Bug 2104897: Use ClusterRole and local RoleBinding for daemon permissions

commit 10f546c1eed34848e6c64b995c924502808591ec
Author: Matt Rogers <mrogers>
Date:   Thu Jul 7 16:02:07 2022 -0400

$ export IMAGE_REPO=quay.io/xiyuan
$ export TAG=0714
$ export CATALOG_DEPLOY_NS=fio
$ make images && make push
...
STEP 15/15: COPY bundle/tests/scorecard /tests/scorecard/
COMMIT quay.io/xiyuan/file-integrity-operator-bundle:0714
--> b3874968c0a
Successfully tagged quay.io/xiyuan/file-integrity-operator-bundle:0714
b3874968c0a656f371f78536f26497616bfca356532d3b7478e6f7e43aa326a9
make: go: Permission denied
podman push quay.io/xiyuan/file-integrity-operator:0714
Getting image source signatures
Copying blob 435f8fa415c0 done  
Copying blob d673b9f65701 done  
Copying blob 5bb1bc89cb56 done  
Copying blob 8477ab008045 done  
Copying blob b5a1f74db970 done  
Copying config 5e35c6f160 done  
Writing manifest to image destination
Storing signatures
make image-push IMG=quay.io/xiyuan/file-integrity-operator-bundle:0714
make[1]: Entering directory '/home/xiyuan/securityandcompliance/file-integrity-operator'
make[1]: go: Permission denied
podman push quay.io/xiyuan/file-integrity-operator-bundle:0714
Getting image source signatures
Copying blob 992dce2bcc53 done  
Copying blob 88ae328ef4e1 done  
Copying blob b95e25b2b39f done  
Copying config b3874968c0 done  
Writing manifest to image destination
Storing signatures
make[1]: Leaving directory '/home/xiyuan/securityandcompliance/file-integrity-operator'

$ make catalog && make catalog-deploy
make: go: Permission denied
/usr/local/bin/opm index add --container-tool podman --mode semver --tag quay.io/xiyuan/file-integrity-operator-catalog:0714 --bundles quay.io/xiyuan/file-integrity-operator-bundle:0714 
WARN[0000] DEPRECATION NOTICE:
Sqlite-based catalogs and their related subcommands are deprecated. Support for
them will be removed in a future release. Please migrate your catalog workflows
to the new file-based catalog format. 
INFO[0000] building the index                            bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0000] running /usr/bin/podman pull quay.io/xiyuan/file-integrity-operator-bundle:0714  bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0005] running podman create                         bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0005] running podman cp                             bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0006] running podman rm                             bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0006] Could not find optional dependencies file     file=bundle_tmp2147250394/metadata load=annotations with=./bundle_tmp2147250394
INFO[0006] Could not find optional properties file       file=bundle_tmp2147250394/metadata load=annotations with=./bundle_tmp2147250394
INFO[0006] Could not find optional dependencies file     file=bundle_tmp2147250394/metadata load=annotations with=./bundle_tmp2147250394
INFO[0006] Could not find optional properties file       file=bundle_tmp2147250394/metadata load=annotations with=./bundle_tmp2147250394
INFO[0006] Generating dockerfile                         bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0006] writing dockerfile: ./index.Dockerfile167816429  bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0006] running podman build                          bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
INFO[0006] [podman build --format docker -f ./index.Dockerfile167816429 -t quay.io/xiyuan/file-integrity-operator-catalog:0714 .]  bundles="[quay.io/xiyuan/file-integrity-operator-bundle:0714]"
make image-push IMG=quay.io/xiyuan/file-integrity-operator-catalog:0714
make[1]: Entering directory '/home/xiyuan/securityandcompliance/file-integrity-operator'
make[1]: go: Permission denied
podman push quay.io/xiyuan/file-integrity-operator-catalog:0714
Getting image source signatures
Copying blob d7452e510f70 done  
Copying blob 7d883a3eb1c5 skipped: already exists  
Copying blob 7d07f785a1c7 skipped: already exists  
Copying blob f88aedf6d45d skipped: already exists  
Copying blob 22f9fcd581f6 skipped: already exists  
Copying blob 49b687ced7f0 skipped: already exists  
Copying config 5775b92f35 done  
Writing manifest to image destination
Storing signatures
make[1]: Leaving directory '/home/xiyuan/securityandcompliance/file-integrity-operator'
make: go: Permission denied
namespace/openshift-file-integrity unchanged
WARNING: This will temporarily modify files in config/catalog
Replacing image reference in config/catalog/catalog-source.yaml
catalogsource.operators.coreos.com/file-integrity-operator created
Restoring image reference in config/catalog/catalog-source.yaml
Replacing namespace reference in config/catalog/operator-group.yaml
operatorgroup.operators.coreos.com/file-integrity-operator unchanged
Restoring namespace reference in config/catalog/operator-group.yaml
Replacing namespace reference in config/catalog/subscription.yaml
subscription.operators.coreos.com/file-integrity-operator-sub created
Restoring namespace reference in config/catalog/subscription.yaml

$ oc get pod -n openshift-marketplace
NAME                                                              READY   STATUS      RESTARTS   AGE
710019247d24634b1216a9d397c7d781343af8492511dd16f07cb07bec7zgrw   0/1     Completed   0          5h16m
76cc98311dbed880c2bd543b9e9c88448719db1970d3d75592062663f7dwbhf   0/1     Completed   0          92s
ce9fa6bf9dc9b59e2dfc9c05c0a38f2ca0586cd20766904bb0c746100amdsxw   0/1     Completed   0          5h16m
certified-operators-wd7c4                                         1/1     Running     0          120m
community-operators-rmdzn                                         1/1     Running     0          5h39m
file-integrity-operator-9n2hm                                     1/1     Running     0          95s
marketplace-operator-6ff786fc4-5tbpb                              1/1     Running     0          5h44m
qe-app-registry-rm7fk                                             1/1     Running     0          5h17m
redhat-marketplace-qlpjb                                          1/1     Running     0          5h39m
redhat-operators-vmqc6                                            1/1     Running     0          5h39m
$ oc project fio
Now using project "fio" on server "https://api.xiyuan13-2.alicloud-qe.devcluster.openshift.com:6443".
$ oc get ip
NAME            CSV                               APPROVAL    APPROVED
install-pnlrt   file-integrity-operator.v0.1.27   Automatic   true
$ oc get csv
NAME                              DISPLAY                            VERSION   REPLACES   PHASE
elasticsearch-operator.v5.5.0     OpenShift Elasticsearch Operator   5.5.0                Succeeded
file-integrity-operator.v0.1.27   File Integrity Operator            0.1.27               Succeeded
$ oc get pod
NAME                                       READY   STATUS    RESTARTS      AGE
file-integrity-operator-854744c4b8-zxxnf   1/1     Running   1 (73s ago)   116s

$ oc apply -f - <<EOF
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity
  namespace: fio
spec:
  config:
    gracePeriod: 20
    maxBackups: 5
  debug: true
> EOF
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created
$ oc get daemonset -w
NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
aide-example-fileintegrity   6         6         3       6            3           <none>          53s
aide-example-fileintegrity   6         6         4       6            4           <none>          67s
aide-example-fileintegrity   6         6         5       6            5           <none>          84s
aide-example-fileintegrity   6         6         6       6            6           <none>          88s
^C[xiyuan@MiWiFi-RA69-srv file-integrity-operator]$ oc get pod
NAME                                       READY   STATUS    RESTARTS        AGE
aide-example-fileintegrity-7jtrt           1/1     Running   0               93s
aide-example-fileintegrity-7p2cc           1/1     Running   0               92s
aide-example-fileintegrity-h8hg8           1/1     Running   0               93s
aide-example-fileintegrity-pt492           1/1     Running   0               93s
aide-example-fileintegrity-r67wz           1/1     Running   0               92s
aide-example-fileintegrity-st762           1/1     Running   0               92s
file-integrity-operator-854744c4b8-zxxnf   1/1     Running   1 (7m58s ago)   8m41s
$ oc get fileintegrity example-fileintegrity -o=jsonpath={.status}
{"phase":"Active"}
$ oc get fileintegritynodestatus
NAME                                                             NODE                                       STATUS
example-fileintegrity-xiyuan13-2-h48ww-master-0                  xiyuan13-2-h48ww-master-0                  Succeeded
example-fileintegrity-xiyuan13-2-h48ww-master-1                  xiyuan13-2-h48ww-master-1                  Succeeded
example-fileintegrity-xiyuan13-2-h48ww-master-2                  xiyuan13-2-h48ww-master-2                  Succeeded
example-fileintegrity-xiyuan13-2-h48ww-worker-us-east-1a-mmc97   xiyuan13-2-h48ww-worker-us-east-1a-mmc97   Succeeded
example-fileintegrity-xiyuan13-2-h48ww-worker-us-east-1b-2dcpd   xiyuan13-2-h48ww-worker-us-east-1b-2dcpd   Succeeded
example-fileintegrity-xiyuan13-2-h48ww-worker-us-east-1b-v559b   xiyuan13-2-h48ww-worker-us-east-1b-v559b   Succeeded

Comment 7 xiyuan 2022-07-19 07:13:20 UTC
Verification passed with payload 4.12.0-0.nightly-2022-07-17-215842 and file-integrity-operator.v0.1.28.
1. Install file-integrity-operator.v0.1.28 in namespace fio:
$ oc project fio
Now using project "fio" on server "https://api.xiyuan19-2.alicloud-qe.devcluster.openshift.com:6443".
$ oc get ip
NAME            CSV                               APPROVAL    APPROVED
install-9gljh   file-integrity-operator.v0.1.28   Automatic   true
$ oc get csv
NAME                              DISPLAY                            VERSION   REPLACES   PHASE
elasticsearch-operator.v5.5.0     OpenShift Elasticsearch Operator   5.5.0                Succeeded
file-integrity-operator.v0.1.28   File Integrity Operator            0.1.28               Succeeded
$ oc get pod
NAME                                       READY   STATUS    RESTARTS      AGE
file-integrity-operator-68b76f9dbf-pt46j   1/1     Running   1 (98s ago)   104s
2. Create a FileIntegrity object:
$ oc apply -f -<<EOF
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity
spec:
  config:
    gracePeriod: 20
    maxBackups: 5
  debug: true
EOF
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created
$ oc get daemonset -w
NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
aide-example-fileintegrity   6         6         6       6            6           <none>          10s
^C$ oc get pod
NAME                                       READY   STATUS    RESTARTS        AGE
aide-example-fileintegrity-4t5qn           1/1     Running   0               16s
aide-example-fileintegrity-9slls           1/1     Running   0               16s
aide-example-fileintegrity-hj9fc           1/1     Running   0               16s
aide-example-fileintegrity-j8xkn           1/1     Running   0               16s
aide-example-fileintegrity-jtbc8           1/1     Running   0               16s
aide-example-fileintegrity-nws5h           1/1     Running   0               16s
file-integrity-operator-68b76f9dbf-pt46j   1/1     Running   1 (4m30s ago)   4m36s
$ oc get fileintegrity example-fileintegrity -o=jsonpath={.status}
{"phase":"Active"}

Comment 10 errata-xmlrpc 2022-08-02 08:17:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift File Integrity Operator bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5538

