Bug 1861917 - installs lagging due to unable to validate against any security context constraint [NEEDINFO]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Tomáš Nožička
QA Contact: RamaKasturi
URL:
Whiteboard: LifecycleReset
Depends On:
Blocks:
 
Reported: 2020-07-29 21:38 UTC by Abhinav Dahiya
Modified: 2020-10-27 16:21 UTC
CC List: 10 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:21:22 UTC
Target Upstream Version:
Embargoed:
mfojtik: needinfo?




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-kube-controller-manager-operator pull 441 0 None closed Bug 1867494: Explicitly use internal LB for KCM and CPC 2021-02-18 14:14:16 UTC
Github openshift cluster-kube-controller-manager-operator pull 443 0 None closed bug 1861917: add render --cluster-policy-controller-image to add that controller to bootstrap yaml 2021-02-18 14:14:17 UTC
Github openshift cluster-kube-controller-manager-operator pull 446 0 None closed Bug 1861917: Render cluster-policy-controller for bootstrap 2021-02-18 14:14:17 UTC
Github openshift cluster-kube-controller-manager-operator pull 449 0 None closed Bug 1861917: Fix bootstrap cpc 2021-02-18 14:14:17 UTC
Github openshift cluster-kube-controller-manager-operator pull 451 0 None closed Bug 1861917: Enforce cpc config 2021-02-18 14:14:16 UTC
Github openshift cluster-kube-controller-manager-operator pull 453 0 None closed Bug 1861917: Fix bootstrap cpc config file location, certs and RBAC 2021-02-18 14:14:17 UTC
Github openshift installer pull 4131 0 None closed bug 1861917: bootkube: add image for cluster-policy-controller 2021-02-18 14:14:17 UTC
Github openshift installer pull 4178 0 None closed Bug 1861917: Add cpc config to bootstrap 2021-02-18 14:14:17 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 16:21:49 UTC

Description Abhinav Dahiya 2020-07-29 21:38:26 UTC
Description of problem:

During installs I'm seeing operator pods failing to be accepted due to an error:

```
Warning  FailedCreate      6m26s (x17 over 11m)  replicaset-controller  Error creating: pods "machine-api-operator-7787b5cbc6-" is forbidden: unable to validate against any security context constraint: []
```

This delays the creation of all these pods by 5 to 11 minutes.


Version-Release number of selected component (if applicable):

4.5.z and 4.6

How reproducible:

All clusters.

Steps to Reproduce:

If you look at any CI job you can easily see this error.

```
EVENTS_URL=<events.json URL from CI artifacts>
curl -s "$EVENTS_URL" | jq '.items[] | select(.involvedObject.kind == "ReplicaSet") | select (.message | contains("unable to validate against any security context constraint")) | {name: .metadata.name, namespace: .metadata.namespace, count: .count, firstTimestamp: .firstTimestamp, lastTimestamp: .lastTimestamp}' 
```

Some examples are:

> https://storage.googleapis.com/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.5/1285719373872369664/artifacts/e2e-azure/events.json

```
{
  "name": "cloud-credential-operator-74bcd7cf6.1623e9b1b641f5e5",
  "namespace": "openshift-cloud-credential-operator",
  "count": 10,
  "firstTimestamp": "2020-07-21T23:54:02Z",
  "lastTimestamp": "2020-07-21T23:59:29Z"
}
{
  "name": "cluster-storage-operator-8c9bb9d97.1623e9b22a6e69d3",
  "namespace": "openshift-cluster-storage-operator",
  "count": 7,
  "firstTimestamp": "2020-07-21T23:54:04Z",
  "lastTimestamp": "2020-07-21T23:59:27Z"
}
{
  "name": "cluster-image-registry-operator-74cd744888.1623e9b65ff9d56c",
  "namespace": "openshift-image-registry",
  "count": 17,
  "firstTimestamp": "2020-07-21T23:54:22Z",
  "lastTimestamp": "2020-07-21T23:59:50Z"
}
{
  "name": "ingress-operator-5ddffb844c.1623e9b546ea80dd",
  "namespace": "openshift-ingress-operator",
  "count": 17,
  "firstTimestamp": "2020-07-21T23:54:17Z",
  "lastTimestamp": "2020-07-21T23:59:45Z"
}
{
  "name": "insights-operator-85846d6568.1623e9b237f063eb",
  "namespace": "openshift-insights",
  "count": 17,
  "firstTimestamp": "2020-07-21T23:54:04Z",
  "lastTimestamp": "2020-07-21T23:59:32Z"
}
{
  "name": "machine-api-operator-7496cc5cc6.1623e9b967659d24",
  "namespace": "openshift-machine-api",
  "count": 17,
  "firstTimestamp": "2020-07-21T23:54:35Z",
  "lastTimestamp": "2020-07-22T00:00:03Z"
}
{
  "name": "marketplace-operator-bddd5df.1623e9b1aa7b6b3c",
  "namespace": "openshift-marketplace",
  "count": 13,
  "firstTimestamp": "2020-07-21T23:54:02Z",
  "lastTimestamp": "2020-07-21T23:59:29Z"
}
{
  "name": "cluster-monitoring-operator-d95dcd94b.1623e9b2323cc69d",
  "namespace": "openshift-monitoring",
  "count": 7,
  "firstTimestamp": "2020-07-21T23:54:04Z",
  "lastTimestamp": "2020-07-21T23:59:27Z"
}
```

> https://storage.googleapis.com/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-4.5/1285813868605476864/artifacts/e2e-azure/events.json

```
{
  "name": "cloud-credential-operator-58b8bdd8b6.1623fdffb710e70e",
  "namespace": "openshift-cloud-credential-operator",
  "count": 10,
  "firstTimestamp": "2020-07-22T06:06:07Z",
  "lastTimestamp": "2020-07-22T06:11:34Z"
}
{
  "name": "cluster-storage-operator-5c99fd6bcf.1623fe000c5cbc92",
  "namespace": "openshift-cluster-storage-operator",
  "count": 7,
  "firstTimestamp": "2020-07-22T06:06:09Z",
  "lastTimestamp": "2020-07-22T06:11:31Z"
}
{
  "name": "cluster-image-registry-operator-7d9ffffc9d.1623fe047f6b0998",
  "namespace": "openshift-image-registry",
  "count": 17,
  "firstTimestamp": "2020-07-22T06:06:28Z",
  "lastTimestamp": "2020-07-22T06:11:56Z"
}
{
  "name": "ingress-operator-b46767f65.1623fe0366503bea",
  "namespace": "openshift-ingress-operator",
  "count": 17,
  "firstTimestamp": "2020-07-22T06:06:23Z",
  "lastTimestamp": "2020-07-22T06:11:51Z"
}
{
  "name": "insights-operator-7897b9856b.1623fe00486df264",
  "namespace": "openshift-insights",
  "count": 17,
  "firstTimestamp": "2020-07-22T06:06:10Z",
  "lastTimestamp": "2020-07-22T06:11:37Z"
}
{
  "name": "machine-api-operator-9876cc94b.1623fe06d4224053",
  "namespace": "openshift-machine-api",
  "count": 17,
  "firstTimestamp": "2020-07-22T06:06:38Z",
  "lastTimestamp": "2020-07-22T06:12:06Z"
}
{
  "name": "marketplace-operator-8f55fdd4c.1623fdffb8d2eb0b",
  "namespace": "openshift-marketplace",
  "count": 10,
  "firstTimestamp": "2020-07-22T06:06:07Z",
  "lastTimestamp": "2020-07-22T06:11:34Z"
}
{
  "name": "cluster-monitoring-operator-769bff7884.1623fe00307249fd",
  "namespace": "openshift-monitoring",
  "count": 7,
  "firstTimestamp": "2020-07-22T06:06:09Z",
  "lastTimestamp": "2020-07-22T06:11:32Z"
}
```

We should not be seeing these errors because the CVO creates the SCC policies and RBAC for operator pods before everything else.
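
(For reference, a rough, hedged way to sanity-check that ordering against the same events.json; the jq expression just pulls the earliest failing timestamp using the query pattern above:)

```
# When was the restricted SCC created?
oc get scc restricted -o jsonpath='{.metadata.creationTimestamp}{"\n"}'

# Earliest ReplicaSet event complaining about SCC validation in the same run
curl -s "$EVENTS_URL" | jq -r '[.items[]
  | select(.involvedObject.kind == "ReplicaSet")
  | select(.message | contains("unable to validate against any security context constraint"))
  | .firstTimestamp] | sort | first'
```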

Comment 1 Michal Fojtik 2020-08-28 21:59:26 UTC
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.

Comment 2 Abhinav Dahiya 2020-08-28 22:04:50 UTC
The team has not commented on this bug since the initial report, so marking it stale or decreasing the priority is incorrect in my opinion; it looks like the team never actually looked into this.

Comment 3 Abhinav Dahiya 2020-08-28 22:07:34 UTC
This is still reproducible in CI from a run from today https://prow.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-gcp-4.6/1299416934579703808

EVENTS_URL="https://storage.googleapis.com/origin-ci-test/logs/release-openshift-ocp-installer-e2e-gcp-4.6/1299416934579703808/artifacts/e2e-gcp/events.json"
curl -s "$EVENTS_URL" | jq '.items[] | select(.involvedObject.kind == "ReplicaSet") | select (.message | contains("unable to validate against any security context constraint")) | {name: .metadata.name, namespace: .metadata.namespace, count: .count, firstTimestamp: .firstTimestamp, lastTimestamp: .lastTimestamp}' | xclip -sel c -i


{
  "name": "cloud-credential-operator-577ffdc9f8.162f831c7c00ca1b",
  "namespace": "openshift-cloud-credential-operator",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:11Z",
  "lastTimestamp": "2020-08-28T18:54:39Z"
}
{
  "name": "cloud-credential-operator-577ffdc9f8.162f838de9bfd00d",
  "namespace": "openshift-cloud-credential-operator",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "cluster-samples-operator-5b758b645b.162f83a08ec78abb",
  "namespace": "openshift-cluster-samples-operator",
  "count": 10,
  "firstTimestamp": "2020-08-28T18:58:38Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "csi-snapshot-controller-5779f584d5.162f836f9ba3a635",
  "namespace": "openshift-cluster-storage-operator",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:55:08Z",
  "lastTimestamp": "2020-08-28T18:56:30Z"
}
{
  "name": "csi-snapshot-controller-5779f584d5.162f838dea1d4090",
  "namespace": "openshift-cluster-storage-operator",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "cluster-image-registry-operator-795848d64b.162f831e41818a73",
  "namespace": "openshift-image-registry",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:18Z",
  "lastTimestamp": "2020-08-28T18:54:46Z"
}
{
  "name": "cluster-image-registry-operator-795848d64b.162f838de9ee410f",
  "namespace": "openshift-image-registry",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "ingress-operator-577b779c44.162f831dfe85fe1c",
  "namespace": "openshift-ingress-operator",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:17Z",
  "lastTimestamp": "2020-08-28T18:54:45Z"
}
{
  "name": "ingress-operator-577b779c44.162f838de9ddade4",
  "namespace": "openshift-ingress-operator",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "insights-operator-8896789cf.162f831e38bfabfe",
  "namespace": "openshift-insights",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:18Z",
  "lastTimestamp": "2020-08-28T18:54:46Z"
}
{
  "name": "insights-operator-8896789cf.162f838de9dbd32f",
  "namespace": "openshift-insights",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "cluster-autoscaler-operator-65c4959c6f.162f831e2d24414e",
  "namespace": "openshift-machine-api",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:18Z",
  "lastTimestamp": "2020-08-28T18:54:46Z"
}
{
  "name": "cluster-autoscaler-operator-65c4959c6f.162f838dec179200",
  "namespace": "openshift-machine-api",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "machine-api-operator-5cd4899dd7.162f831e92a1135b",
  "namespace": "openshift-machine-api",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:20Z",
  "lastTimestamp": "2020-08-28T18:54:47Z"
}
{
  "name": "machine-api-operator-5cd4899dd7.162f838dec10a363",
  "namespace": "openshift-machine-api",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "marketplace-operator-678cb86df8.162f831ca509a38a",
  "namespace": "openshift-marketplace",
  "count": 7,
  "firstTimestamp": "2020-08-28T18:49:11Z",
  "lastTimestamp": "2020-08-28T18:54:34Z"
}
{
  "name": "marketplace-operator-678cb86df8.162f838ded2cc26d",
  "namespace": "openshift-marketplace",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
{
  "name": "cluster-monitoring-operator-7c486d94fb.162f831c89199357",
  "namespace": "openshift-monitoring",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:11Z",
  "lastTimestamp": "2020-08-28T18:54:39Z"
}
{
  "name": "cluster-monitoring-operator-7c486d94fb.162f838dec19e917",
  "namespace": "openshift-monitoring",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}

Comment 4 Michal Fojtik 2020-08-28 22:59:30 UTC
The LifecycleStale keyword was removed because the bug got commented on recently.
The bug assignee was notified.

Comment 5 Stefan Schimanski 2020-08-31 13:11:57 UTC
I have checked all pods with events in https://bugzilla.redhat.com/show_bug.cgi?id=1861917#c3:

  All of them use the "restricted" SCC.

Doesn't look like a coincidence.

Comment 6 Venkata Siva Teja Areti 2020-09-01 22:08:32 UTC
My initial theory on this..

Specific run that I am debugging

origin-ci-test/logs/release-openshift-ocp-installer-e2e-gcp-4.6/1299416934579703808/artifacts/e2e-gcp

Event that I am tracing

```
❯ cat all/e2e-gcp/events.json | jq '.items[] | select(.involvedObject.kind == "ReplicaSet") | select (.message | contains("unable to validate against any security context constraint")) |  select(.metadata.namespace == "openshift-monitoring") |{name: .metadata.name, namespace: .metadata.namespace, count: .count, firstTimestamp: .firstTimestamp, lastTimestamp: .lastTimestamp}' 
{
  "name": "cluster-monitoring-operator-7c486d94fb.162f831c89199357",
  "namespace": "openshift-monitoring",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:11Z",
  "lastTimestamp": "2020-08-28T18:54:39Z"
}
{
  "name": "cluster-monitoring-operator-7c486d94fb.162f838dec19e917",
  "namespace": "openshift-monitoring",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
```

Restricted SCC is created at `creationTimestamp: "2020-08-28T18:49:11Z"`

What is the user/service account that created the cluster-monitoring operator?
It is created by the cluster-version operator.

What is the service account used by this operator?
default

What are the SCCs that this service account can access?
restricted
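
(A quick, hedged way to confirm this on a live cluster: the restricted SCC is granted through its group binding, which is why it is the only SCC available to the default service account here.)

```
# The restricted SCC lists system:authenticated in .groups, so every service
# account, including "default", falls back to it.
oc get scc restricted -o jsonpath='{.groups}{"\n"}'
```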

The cluster-version-operator uses the default service account to create the cluster-monitoring-operator deployment. The default service account only gets the restricted SCC, and the cluster-monitoring-operator deployment has an empty security context. For an empty security context, the policy, as stated in the documentation, is:

	When a container or pod does not request a user ID under which it should be run, the effective UID depends on the SCC that emits this pod. Because restricted SCC is granted to all authenticated users by default, it will be available to all users and service accounts and used in most cases. The restricted SCC uses MustRunAsRange strategy for constraining and defaulting the possible values of the securityContext.runAsUser field. The admission plug-in will look for the openshift.io/sa.scc.uid-range annotation on the current project to populate range fields, as it does not provide this range. In the end, a container will have runAsUser equal to the first value of the range that is hard to predict because every project has different ranges.


The openshift.io/sa.scc.uid-range annotation is added to the cluster-monitoring namespace at `time: "2020-08-28T18:59:21Z"` by the cluster-policy-controller.

From the above log, the last instance of this failure in the namespace is at `"lastTimestamp": "2020-08-28T18:58:40Z"`. The delay in adding the annotation could be causing this message.
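
(A hedged way to read that annotation back from the namespace, mirroring the jq style used elsewhere in this bug; the namespace name is just the one being traced here:)

```
oc get namespace openshift-monitoring -o json \
  | jq -r '.metadata.annotations["openshift.io/sa.scc.uid-range"]'
```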

To verify this, let's check another component.

```
❯ cat all/e2e-gcp/events.json | jq '.items[] | select(.involvedObject.kind == "ReplicaSet") | select (.message | contains("unable to validate against any security context constraint")) |  select(.metadata.namespace == "openshift-ingress-operator") |{name: .metadata.name, namespace: .metadata.namespace, count: .count, firstTimestamp: .firstTimestamp, lastTimestamp: .lastTimestamp}' 
{
  "name": "ingress-operator-577b779c44.162f831dfe85fe1c",
  "namespace": "openshift-ingress-operator",
  "count": 17,
  "firstTimestamp": "2020-08-28T18:49:17Z",
  "lastTimestamp": "2020-08-28T18:54:45Z"
}
{
  "name": "ingress-operator-577b779c44.162f838de9ddade4",
  "namespace": "openshift-ingress-operator",
  "count": 15,
  "firstTimestamp": "2020-08-28T18:57:18Z",
  "lastTimestamp": "2020-08-28T18:58:40Z"
}
```

The OpenShift ingress operator deployment is created by the cluster-version-operator at `time: "2020-08-28T18:49:17Z"`. There is no security context in the deployment spec, so we must look at the annotations on the namespace.

The ranges are annotated by the cluster-policy-controller at `"2020-08-28T18:59:22Z"`. That timestamp is after the last time the validation error was seen.

From the event filter, I can see that the first time cluster-policy-controller became leader was `18:59:11`.
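
(Roughly how that can be pulled out of the same events.json, a hedged sketch; the exact name of the lock object used for the cluster-policy-controller leader election is an assumption here:)

```
curl -s "$EVENTS_URL" | jq '.items[]
  | select(.reason == "LeaderElection")
  | select(.involvedObject.name | contains("cluster-policy"))
  | {object: .involvedObject.name, message: .message, firstTimestamp: .firstTimestamp}'
```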

Cluster policy controller runs in parallel with kube-controller-manager after bootstrap. 

time="2020-08-28T19:00:39Z" level=debug msg="Bootstrap status: complete"

It looks like cluster-policy-controller was only started when bootstrap was close to complete.

Then why are the clusteroperators created before the bootstrap is complete?

I am guessing it has something to do with this PR? https://github.com/openshift/cluster-version-operator/commit/2a469e37c1c10c7a6cc4dd71ad264eff89913eb5

Comment 7 Maciej Szulik 2020-09-02 14:28:58 UTC
From a slack discussion we've decided to do both of the following:
1. start cluster-policy-controller in bootstrap
2. use internal LB for cluster-policy-controller

1. requires only updating kcm-o to include the cpc container.
2. requires changing cpc to be able to pass --master, and then updating kcm-o accordingly.
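
(Once both land, a hedged way to confirm the result on a bootstrap node; the static-pod manifest path is an assumption, while the crio check mirrors what QE runs later in this bug:)

```
# The rendered bootstrap kube-controller-manager pod should now carry a
# cluster-policy-controller container
grep -l cluster-policy-controller /etc/kubernetes/manifests/*.yaml

# ...and crio should show it being created/started
journalctl -u crio | grep cluster-policy-controller
```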

Comment 8 David Eads 2020-09-04 13:14:00 UTC
We still need the last bit: rendering the container in the bootstrap pod.

Comment 13 RamaKasturi 2020-09-15 10:12:16 UTC
Clearing the needinfo as the information has been provided in Slack; also, based on the above comment, moving the bug back to the assigned state.

Comment 15 RamaKasturi 2020-09-21 08:53:34 UTC
Moving the bug back to assigned because I still see a crash in the cluster-policy-controller.log file, as shown below.

Log file created at: 2020/09/21 05:37:13
Running on machine: ip-10-0-11-229
Binary: Built with gc go1.14.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
F0921 05:37:13.562230       1 cmd.go:55] open /etc/kubernetes/config//assets/kube-controller-manager-bootstrap/cpc-config: no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc0001a1301, 0xc0005a4000, 0x95, 0xeb)
        k8s.io/klog/v2.0/klog.go:996 +0xb8
k8s.io/klog/v2.(*loggingT).output(0x360a420, 0xc000000003, 0x0, 0x0, 0xc0005822a0, 0x353b96e, 0x6, 0x37, 0x17b6d00)
        k8s.io/klog/v2.0/klog.go:945 +0x19d
k8s.io/klog/v2.(*loggingT).printDepth(0x360a420, 0xc000000003, 0x0, 0x0, 0x1, 0xc000697b88, 0x1, 0x1)
        k8s.io/klog/v2.0/klog.go:718 +0x15e
k8s.io/klog/v2.(*loggingT).print(...)
        k8s.io/klog/v2.0/klog.go:703
k8s.io/klog/v2.Fatal(...)
        k8s.io/klog/v2.0/klog.go:1443
github.com/openshift/cluster-policy-controller/pkg/cmd/cluster-policy-controller.NewClusterPolicyControllerCommand.func1(0xc0001482c0, 0xc0000aa0f0, 0x0, 0x5)
        github.com/openshift/cluster-policy-controller/pkg/cmd/cluster-policy-controller/cmd.go:55 +0x3e5
github.com/spf13/cobra.(*Command).execute(0xc0001482c0, 0xc0000aa0a0, 0x5, 0x5, 0xc0001482c0, 0xc0000aa0a0)
        github.com/spf13/cobra.0/command.go:846 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc000148000, 0xc000148000, 0x0, 0x0)
        github.com/spf13/cobra.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra.0/command.go:887
main.main()
        github.com/openshift/cluster-policy-controller/cmd/cluster-policy-controller/main.go:67 +0x2cc
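
(A hedged way to check on the bootstrap node whether the config asset the container expects is actually present; the path is taken verbatim from the fatal error above:)

```
ls -l /etc/kubernetes/config//assets/kube-controller-manager-bootstrap/
```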

Comment 17 RamaKasturi 2020-09-23 12:14:16 UTC
Hi Tomas,

   I tried verifying the bug again, and this time I do not see any crash related to cluster-policy-controller, but I do see errors related to "namespace_scc_allocation_controller.go" and a crash, shown below, in the cluster-policy-controller.log file. Can you please help check this? Thanks!

[core@ip-10-0-23-53 ~]$ vi /var/log/bootstrap-control-plane/cluster-policy-controller.log 
[core@ip-10-0-23-53 ~]$ cat  /var/log/bootstrap-control-plane/cluster-policy-controller.log | grep "E0"
E0923 10:53:48.265108       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
E0923 10:53:48.283894       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:53:48.324639       1 namespace_scc_allocation_controller.go:258] rangeallocations.security.openshift.io "scc-uid" is forbidden: User "system:serviceaccount:openshift-infra:namespace-security-allocation-controller" cannot get resource "rangeallocations" in API group "security.openshift.io" at the cluster scope
E0923 10:53:49.521481       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:53:49.768559       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
E0923 10:53:51.524213       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:53:51.918415       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ClusterResourceQuota: failed to list *v1.ClusterResourceQuota: the server could not find the requested resource (get clusterresourcequotas.quota.openshift.io)
E0923 10:53:56.798612       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:53:58.372575       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:54:08.339319       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:54:09.311320       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:54:18.393299       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:54:28.345494       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:54:32.979585       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:54:38.381531       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:54:48.383548       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:54:58.411687       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:55:05.326135       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:55:08.335222       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:55:18.336587       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:55:28.335302       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:55:38.338179       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:55:48.084496       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:55:48.340012       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:55:58.334185       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:56:08.332142       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:56:18.336497       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:56:28.339688       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:56:38.341022       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:56:45.189966       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:56:48.338290       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:56:58.338209       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:57:08.337998       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:57:18.332250       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:57:20.983052       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:57:28.331742       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:57:38.334026       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:57:48.337327       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:57:58.337233       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:08.337462       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:18.337001       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:18.797095       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:58:28.337135       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:38.337125       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:48.336420       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:48.337833       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:51.013496       1 reconciliation_controller.go:123] initial monitor sync has error: [couldn't start monitor for resource "cloudcredential.openshift.io/v1, Resource=credentialsrequests": unable to monitor quota for resource "cloudcredential.openshift.io/v1, Resource=credentialsrequests", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=probes": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=probes", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=thanosrulers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=thanosrulers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheusrules": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheusrules", couldn't start monitor for resource "controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks": unable to monitor quota for resource "controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=profiles": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=profiles", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "metal3.io/v1alpha1, Resource=baremetalhosts": unable to monitor quota for resource "metal3.io/v1alpha1, Resource=baremetalhosts", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions", couldn't start monitor for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords": unable to monitor quota for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords", couldn't start monitor for resource "network.operator.openshift.io/v1, Resource=operatorpkis": unable to monitor quota for resource "network.operator.openshift.io/v1, Resource=operatorpkis", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=servicemonitors": unable to monitor quota for resource 
"monitoring.coreos.com/v1, Resource=servicemonitors", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=podmonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=podmonitors", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers"]
E0923 10:58:51.091479       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:58:51.127427       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:58:52.244447       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:58:54.142648       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:58:58.782263       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:59:01.143630       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:59:08.599019       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:59:11.143640       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:59:21.143714       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:59:30.200689       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 10:59:31.143713       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:59:41.142843       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 10:59:51.145148       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:00:01.143121       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:00:06.256922       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:00:11.139495       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:00:21.140082       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:00:31.142606       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:00:41.142816       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:00:51.150664       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:01:01.142623       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:01:05.627751       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:01:11.143396       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:01:21.142007       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:01:31.143061       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:01:35.948090       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:01:41.143566       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:01:51.135907       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:02:01.130057       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:02:11.172627       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:02:21.129892       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:02:21.754965       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:02:31.129750       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:02:41.131712       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:02:51.132510       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:03:01.129556       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:03:08.252595       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:03:11.284385       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:03:21.129059       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:03:31.129314       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:03:41.129340       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:03:45.966134       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:03:51.154600       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:03:51.162346       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:04:14.713086       1 reconciliation_controller.go:117] initial discovery check failure, continuing and counting on future sync update: unable to retrieve the complete list of server APIs: packages.operators.coreos.com/v1: the server is currently unable to handle the request
E0923 11:04:14.714426       1 reconciliation_controller.go:123] initial monitor sync has error: [couldn't start monitor for resource "metal3.io/v1alpha1, Resource=baremetalhosts": unable to monitor quota for resource "metal3.io/v1alpha1, Resource=baremetalhosts", couldn't start monitor for resource "whereabouts.cni.cncf.io/v1alpha1, Resource=ippools": unable to monitor quota for resource "whereabouts.cni.cncf.io/v1alpha1, Resource=ippools", couldn't start monitor for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers": unable to monitor quota for resource "autoscaling.openshift.io/v1beta1, Resource=machineautoscalers", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=profiles": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=profiles", couldn't start monitor for resource "whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations": unable to monitor quota for resource "whereabouts.cni.cncf.io/v1alpha1, Resource=overlappingrangeipreservations", couldn't start monitor for resource "operator.openshift.io/v1, Resource=ingresscontrollers": unable to monitor quota for resource "operator.openshift.io/v1, Resource=ingresscontrollers", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=catalogsources": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=catalogsources", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machines": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machines", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=probes": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=probes", couldn't start monitor for resource "operators.coreos.com/v1, Resource=operatorgroups": unable to monitor quota for resource "operators.coreos.com/v1, Resource=operatorgroups", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=clusterserviceversions", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=thanosrulers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=thanosrulers", couldn't start monitor for resource "cloudcredential.openshift.io/v1, Resource=credentialsrequests": unable to monitor quota for resource "cloudcredential.openshift.io/v1, Resource=credentialsrequests", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheuses": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=prometheuses", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=installplans": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=installplans", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinesets": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinesets", couldn't start monitor for resource "network.operator.openshift.io/v1, Resource=operatorpkis": unable to monitor quota for resource "network.operator.openshift.io/v1, Resource=operatorpkis", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=alertmanagers": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=alertmanagers", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=prometheusrules": unable to monitor quota for resource 
"monitoring.coreos.com/v1, Resource=prometheusrules", couldn't start monitor for resource "operators.coreos.com/v1alpha1, Resource=subscriptions": unable to monitor quota for resource "operators.coreos.com/v1alpha1, Resource=subscriptions", couldn't start monitor for resource "tuned.openshift.io/v1, Resource=tuneds": unable to monitor quota for resource "tuned.openshift.io/v1, Resource=tuneds", couldn't start monitor for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords": unable to monitor quota for resource "ingress.operator.openshift.io/v1, Resource=dnsrecords", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=podmonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=podmonitors", couldn't start monitor for resource "network.openshift.io/v1, Resource=egressnetworkpolicies": unable to monitor quota for resource "network.openshift.io/v1, Resource=egressnetworkpolicies", couldn't start monitor for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks": unable to monitor quota for resource "machine.openshift.io/v1beta1, Resource=machinehealthchecks", couldn't start monitor for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions": unable to monitor quota for resource "k8s.cni.cncf.io/v1, Resource=network-attachment-definitions", couldn't start monitor for resource "monitoring.coreos.com/v1, Resource=servicemonitors": unable to monitor quota for resource "monitoring.coreos.com/v1, Resource=servicemonitors", couldn't start monitor for resource "snapshot.storage.k8s.io/v1beta1, Resource=volumesnapshots": unable to monitor quota for resource "snapshot.storage.k8s.io/v1beta1, Resource=volumesnapshots", couldn't start monitor for resource "controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks": unable to monitor quota for resource "controlplane.operator.openshift.io/v1alpha1, Resource=podnetworkconnectivitychecks"]
E0923 11:04:14.758640       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:04:14.918018       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:04:16.016649       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:04:18.200277       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:04:22.072482       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:04:24.920078       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:04:31.132968       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:04:34.920350       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:04:44.942640       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:04:54.398385       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:04:54.920130       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:05:04.931593       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:05:14.919904       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:05:24.921281       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:05:30.603284       1 reflector.go:127] k8s.io/client-go.0-rc.2/tools/cache/reflector.go:156: Failed to watch *v1.ImageStream: failed to list *v1.ImageStream: the server could not find the requested resource (get imagestreams.image.openshift.io)
E0923 11:05:34.919877       1 namespace_scc_allocation_controller.go:258] the server could not find the requested resource (post rangeallocations.security.openshift.io)
E0923 11:06:14.947616       1 namespace_scc_allocation_controller.go:258] the server is currently unable to handle the request (get rangeallocations.security.openshift.io scc-uid)

[core@ip-10-0-23-53 ~]$ cat  /var/log/bootstrap-control-plane/cluster-policy-controller.log | grep "F0"
F0923 10:58:48.337850       1 namespace_scc_allocation_controller.go:116] timed out waiting for the condition
F0923 11:03:51.162459       1 namespace_scc_allocation_controller.go:116] timed out waiting for the condition
goroutine 163 [running]:
k8s.io/klog/v2.stacks(0xc0005a5501, 0xc0005a6ff0, 0x6e, 0xe9)
        k8s.io/klog/v2.0/klog.go:996 +0xb8
k8s.io/klog/v2.(*loggingT).output(0x360a420, 0xc000000003, 0x0, 0x0, 0xc00086ad90, 0x353bd15, 0x26, 0x74, 0x2464900)
        k8s.io/klog/v2.0/klog.go:945 +0x19d
k8s.io/klog/v2.(*loggingT).printDepth(0x360a420, 0xc000000003, 0x0, 0x0, 0x1, 0xc000a41f38, 0x1, 0x1)
        k8s.io/klog/v2.0/klog.go:718 +0x15e
k8s.io/klog/v2.(*loggingT).print(...)
        k8s.io/klog/v2.0/klog.go:703
k8s.io/klog/v2.Fatal(...)
        k8s.io/klog/v2.0/klog.go:1443
github.com/openshift/cluster-policy-controller/pkg/security/controller.(*NamespaceSCCAllocationController).Run(0xc000c60000, 0xc0000b9bc0)
        github.com/openshift/cluster-policy-controller/pkg/security/controller/namespace_scc_allocation_controller.go:116 +0x21b
created by github.com/openshift/cluster-policy-controller/pkg/cmd/controller.RunNamespaceSecurityAllocationController
        github.com/openshift/cluster-policy-controller/pkg/cmd/controller/security.go:46 +0x5b4

Comment 18 Tomáš Nožička 2020-09-24 11:31:41 UTC
These are expected to fail, as there is no openshift-apiserver yet.

Comment 19 Tomáš Nožička 2020-09-24 11:49:38 UTC
The CPC is set up correctly, but David says we need range allocations working there too. I suppose we can add the CRD under this BZ and keep it open.
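
(A hedged way to check whether the RangeAllocation resource is being served and the allocation object exists, using the resource and object names from the errors above:)

```
# Is the security.openshift.io RangeAllocation resource registered?
oc api-resources | grep rangeallocations

# Does the scc-uid allocation object exist?
oc get rangeallocations.security.openshift.io scc-uid -o yaml
```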

Comment 20 RamaKasturi 2020-09-29 10:47:43 UTC
Verified the bug with the payload below; I see that cluster-policy-controller now starts during bootstrap.
[ramakasturinarra@dhcp35-60 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-09-28-212756   True        False         75m     Cluster version is 4.6.0-0.nightly-2020-09-28-212756


Before Fix:
==============
payload used: 4.6.0-0.nightly-2020-09-07-132110
1) Do not see cluster-policy-controller in the crio logs.

[core@ip-10-0-6-63 ~]$ journalctl -u crio | less | grep cluster-policy-controller
[core@ip-10-0-6-63 ~]$ 

2) Do not see any log file called cluster-policy-controller.log under /var/log/bootstrap-control-plane.

[core@ip-10-0-6-63 ~]$ cd /var/log/bootstrap-control-plane/
[core@ip-10-0-6-63 bootstrap-control-plane]$ ls -l
total 5324
-rw-r--r--. 1 root root 4735297 Sep 14 13:08 kube-apiserver.log
-rw-r--r--. 1 root root  480886 Sep 14 13:07 kube-controller-manager.log
-rw-r--r--. 1 root root  229305 Sep 14 13:07 kube-scheduler.log

3) Time taken during install

time="2020-09-14T13:26:49Z" level=debug msg="Time elapsed per stage:"
time="2020-09-14T13:26:49Z" level=debug msg="    Infrastructure: 11m27s"
time="2020-09-14T13:26:49Z" level=debug msg="Bootstrap Complete: 9m11s"
time="2020-09-14T13:26:49Z" level=debug msg=" Cluster Operators: 19m9s"
time="2020-09-14T13:26:49Z" level=info msg="Time elapsed: 40m2s"

After Fix:
====================
payload used: [ramakasturinarra@dhcp35-60 ~]$ oc version
Client Version: 4.6.0-202009281501.p0-61364f0
Server Version: 4.6.0-0.nightly-2020-09-28-212756
Kubernetes Version: v1.19.0+e465e66


1) Logs from journalctl -u crio:

[core@ip-10-0-1-101 ~]$ journalctl -u crio | grep cluster-policy-controller
Sep 29 08:57:32 ip-10-0-1-101 crio[1935]: time="2020-09-29 08:57:32.957367955Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=6b06b175-22fd-4011-98e0-349dac31e0e5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Sep 29 08:57:33 ip-10-0-1-101 crio[1935]: time="2020-09-29 08:57:33.136594200Z" level=info msg="Created container 7949c3417871bab466960507b8854662e910661372e0635d534fdd4a5ac8c35a: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=6b06b175-22fd-4011-98e0-349dac31e0e5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Sep 29 08:57:33 ip-10-0-1-101 crio[1935]: time="2020-09-29 08:57:33.150450402Z" level=info msg="Started container 7949c3417871bab466960507b8854662e910661372e0635d534fdd4a5ac8c35a: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=fd048dae-5c42-452a-bbdc-e51223948a95 name=/runtime.v1alpha2.RuntimeService/StartContainer
Sep 29 09:02:35 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:02:35.778293334Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=96f21f5d-9ce5-4e77-855b-9e02e8c02a1f name=/runtime.v1alpha2.RuntimeService/CreateContainer
Sep 29 09:02:35 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:02:35.934702092Z" level=info msg="Created container 0649a03392f54a7e570efdc0ff2a8f9d6c5803910efcdeaffec7741f9da23031: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=96f21f5d-9ce5-4e77-855b-9e02e8c02a1f name=/runtime.v1alpha2.RuntimeService/CreateContainer
Sep 29 09:02:35 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:02:35.947106673Z" level=info msg="Started container 0649a03392f54a7e570efdc0ff2a8f9d6c5803910efcdeaffec7741f9da23031: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=7dd9f142-3e98-4426-a6d5-659c2f847c4a name=/runtime.v1alpha2.RuntimeService/StartContainer
Sep 29 09:07:39 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:07:39.196201111Z" level=info msg="Removed container 7949c3417871bab466960507b8854662e910661372e0635d534fdd4a5ac8c35a: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=4139481c-2b46-4e22-9780-ece61431661e name=/runtime.v1alpha2.RuntimeService/RemoveContainer
Sep 29 09:07:48 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:07:48.743434301Z" level=info msg="Creating container: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=ea03be62-973d-4649-9419-9c1e74e4d71a name=/runtime.v1alpha2.RuntimeService/CreateContainer
Sep 29 09:07:48 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:07:48.891086646Z" level=info msg="Created container 61e2981fd4bbf6a10a3d75522c0ba0ef87a0224183e483616d05acac110d3e90: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=ea03be62-973d-4649-9419-9c1e74e4d71a name=/runtime.v1alpha2.RuntimeService/CreateContainer
Sep 29 09:07:48 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:07:48.907100074Z" level=info msg="Started container 61e2981fd4bbf6a10a3d75522c0ba0ef87a0224183e483616d05acac110d3e90: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=974623c3-1f3e-4bb0-97a7-a79ee8c0ed4e name=/runtime.v1alpha2.RuntimeService/StartContainer
Sep 29 09:08:30 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:08:30.254110745Z" level=info msg="Stopped container 61e2981fd4bbf6a10a3d75522c0ba0ef87a0224183e483616d05acac110d3e90: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=075812f7-7b1a-43d2-b223-6924f98b0a8e name=/runtime.v1alpha2.RuntimeService/StopContainer
Sep 29 09:08:30 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:08:30.415933234Z" level=info msg="Removed container 0649a03392f54a7e570efdc0ff2a8f9d6c5803910efcdeaffec7741f9da23031: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=e573fc1b-8c1e-416c-ad7d-8bbde072145f name=/runtime.v1alpha2.RuntimeService/RemoveContainer
Sep 29 09:14:32 ip-10-0-1-101 crio[1935]: time="2020-09-29 09:14:32.178703368Z" level=info msg="Removed container 61e2981fd4bbf6a10a3d75522c0ba0ef87a0224183e483616d05acac110d3e90: kube-system/bootstrap-kube-controller-manager-ip-10-0-1-101/cluster-policy-controller" id=d7e13bb4-c5f2-4efe-98b5-03eb16dc2e07 name=/runtime.v1alpha2.RuntimeService/RemoveContainer

2) See a log file called cluster-policy-controller.log:

[core@ip-10-0-1-101 ~]$ ls -l /var/log/bootstrap-control-plane/
total 7660
-rw-r--r--. 1 root root  669033 Sep 29 09:08 cluster-policy-controller.log
-rw-r--r--. 1 root root 6514752 Sep 29 09:09 kube-apiserver.log
-rw-r--r--. 1 root root  508290 Sep 29 09:07 kube-controller-manager.log
-rw-r--r--. 1 root root  140439 Sep 29 09:07 kube-scheduler.log


3) Time taken for the install is shorter:

time="2020-09-29T09:29:41Z" level=debug msg="Time elapsed per stage:"
time="2020-09-29T09:29:41Z" level=debug msg="    Infrastructure: 6m14s"
time="2020-09-29T09:29:41Z" level=debug msg="Bootstrap Complete: 7m52s"
time="2020-09-29T09:29:41Z" level=debug msg=" Cluster Operators: 21m14s"
time="2020-09-29T09:29:41Z" level=info msg="Time elapsed: 35m27s"

For the errors seen in the cluster-policy-controller.log file, a separate bug has been raised: https://bugzilla.redhat.com/show_bug.cgi?id=1883458

Based on the above, moving the bug to verified state.

Comment 22 errata-xmlrpc 2020-10-27 16:21:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

