Description of problem:
If a custom project-request template is defined and a group definition is added to it, invoking the template for a new-project request fails with:

Error from server (InternalError): Internal error occurred: the server could not find the requested resource

Version-Release number of selected component (if applicable):
OCP 3.11 (tested with OCP 3.11.51)

How reproducible:
Always

Steps to Reproduce:

1. Create a new template like this:

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"template.openshift.io/v1","kind":"Template","metadata":{"annotations":{},"creationTimestamp":null,"name":"project-request","namespace":"default"},"objects":[{"apiVersion":"project.openshift.io/v1","kind":"Project","metadata":{"annotations":{"openshift.io/description":"${PROJECT_DESCRIPTION}","openshift.io/display-name":"${PROJECT_DISPLAYNAME}","openshift.io/requester":"${PROJECT_REQUESTING_USER}"},"creationTimestamp":null,"name":"${PROJECT_NAME}"},"spec":{},"status":{}},{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{"openshift.io/description":"Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable."},"creationTimestamp":null,"name":"system:image-pullers","namespace":"${PROJECT_NAME}"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:image-puller"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"Group","name":"system:serviceaccounts:${PROJECT_NAME}"}]},{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{"openshift.io/description":"Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable."},"creationTimestamp":null,"name":"system:image-builders","namespace":"${PROJECT_NAME}"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:image-builder"},"subjects":[{"kind":"ServiceAccount","name":"builder","namespace":"${PROJECT_NAME}"}]},{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{"openshift.io/description":"Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disable."},"creationTimestamp":null,"name":"system:deployers","namespace":"${PROJECT_NAME}"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"system:deployer"},"subjects":[{"kind":"ServiceAccount","name":"deployer","namespace":"${PROJECT_NAME}"}]},{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"creationTimestamp":null,"name":"admin","namespace":"${PROJECT_NAME}"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"admin"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"User","name":"${PROJECT_ADMIN_USER}"}]},{"apiVersion":"v1","kind":"ResourceQuota","metadata":{"name":"standard-quota"},"spec":{"hard":{"cpu":"5","memory":"30Gi"}}},{"apiVersion":"v1","kind":"LimitRange","metadata":{"name":"standard-limits"},"spec":{"limits":[{"min":{"cpu":"10m","memory":"128Mi"},"type":"Pod"},{"default":{"cpu":"50m","memory":"328Mi"},"min":{"cpu":"20m","memory":"256Mi"},"type":"Container"}]}},{"apiVersion":"v1","kind":"ResourceQuota","metadata":{"name":"core-object-counts"},"spec":{"hard":{"configmaps":"20","persistentvolumeclaims":"0","pods":"100","replicationcontrollers":"80","services":"30"}}},{"apiVersion":"extensions/v1beta1","kind":"NetworkPolicy","metadata":{"name":"allow-same-and-default-namespace"},"spec":{"ingress":[{"from":[{"podSelector":{}}]},{"from":[{"namespaceSelector":{"matchLabels":{"name":"default"}}}]}]}}],"parameters":[{"name":"PROJECT_NAME"},{"name":"PROJECT_DISPLAYNAME"},{"name":"PROJECT_DESCRIPTION"},{"name":"PROJECT_ADMIN_USER"},{"name":"PROJECT_REQUESTING_USER"}]}
  creationTimestamp: null
  name: project-request
objects:
- apiVersion: project.openshift.io/v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    creationTimestamp: null
    name: ${PROJECT_NAME}
  spec: {}
  status: {}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    annotations:
      openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable.
    creationTimestamp: null
    name: system:image-pullers
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:image-puller
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts:${PROJECT_NAME}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    annotations:
      openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable.
    creationTimestamp: null
    name: system:image-builders
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:image-builder
  subjects:
  - kind: ServiceAccount
    name: builder
    namespace: ${PROJECT_NAME}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    annotations:
      openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disable.
    creationTimestamp: null
    name: system:deployers
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: system:deployer
  subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: ${PROJECT_NAME}
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    creationTimestamp: null
    name: admin
    namespace: ${PROJECT_NAME}
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: admin
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ${PROJECT_ADMIN_USER}
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: standard-quota
  spec:
    hard:
      cpu: "5"
      memory: 30Gi
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: standard-limits
  spec:
    limits:
    - default:
        cpu: 50m
        memory: 328Mi
      min:
        cpu: 20m
        memory: 256Mi
      type: Container
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: core-object-counts
  spec:
    hard:
      configmaps: "20"
      persistentvolumeclaims: "0"
      pods: "100"
      replicationcontrollers: "80"
      services: "30"
- apiVersion: user.openshift.io/v1
  kind: Group
  metadata:
    name: ${PROJECT_NAME}
  users: null
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
- name: PROJECT_REQUESTING_USER

$ oc project default
$ oc create -f template.yaml

2. Edit projectConfig.projectRequestTemplate in /etc/origin/master/master-config.yaml to point to 'default/project-request'.

3. Restart the master services:

# /usr/local/bin/master-restart api api
# /usr/local/bin/master-restart controllers controllers

4. oc new-project test

Actual results:

Error from server (InternalError): Internal error occurred: the server could not find the requested resource

In the API server logs:

delegated.go:243] error creating items in requested project "test": the server could not find the requested resource

Expected results:
The project is created.

Additional info:
The same template definition was tested successfully on OCP 3.10.45.
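One quick check that can isolate the failing object (a sketch, not from the original report; the group name "test-group" is arbitrary) is to verify that the user.openshift.io API group is served and that a Group can be created on its own, outside the template:

$ oc api-versions | grep user.openshift.io    # expect: user.openshift.io/v1

$ cat <<EOF | oc create -f -
apiVersion: user.openshift.io/v1
kind: Group
metadata:
  name: test-group
users: null
EOF

If the standalone create succeeds, the failure lies in how the project-request path instantiates the template rather than in the Group resource itself.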
Is it the NetworkPolicy? That resource moved to networking.k8s.io/v1.
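For reference, the same policy under the newer group would look roughly like this (a sketch based on the policy embedded in the annotation above; note that networking.k8s.io/v1 requires an explicit podSelector, which may be empty):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-and-default-namespace
spec:
  podSelector: {}   # required in v1; an empty selector matches all pods
  ingress:
  - from:
    - podSelector: {}
  - from:
    - namespaceSelector:
        matchLabels:
          name: default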
Not sure what you meant, but even after removing the NetworkPolicy object from the template, the behavior is the same.
There's no NetworkPolicy in the current template, as far as I can see. Can we get some confirmation on this Bugzilla? It's been a long time since we opened it.
Have you tried creating each of those objects separately, to make sure they are correct and that each apiVersion is available in your cluster? The Template API should give you a better error message, though. See the sketch below for one way to do this.
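Something along these lines should make the failing kind stand out (a sketch; the parameter values and the jq dependency are assumptions, not part of this report):

# Process the stored template and create each resulting object
# individually, so the object that triggers the error is visible.
$ oc process project-request -n default \
    -p PROJECT_NAME=test \
    -p PROJECT_DISPLAYNAME=Test \
    -p PROJECT_DESCRIPTION=Test \
    -p PROJECT_ADMIN_USER=admin \
    -p PROJECT_REQUESTING_USER=admin \
    -o json | jq -c '.items[]' | while read -r obj; do
      echo "$obj" | oc create -f - || echo "FAILED: $obj"
done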
I have tried creating the template manually in a 3.11 cluster and running process/apply; apart from permission errors, I didn't see any error pointing to a non-existent resource. But that was with oc. The error comes from here: https://github.com/openshift/origin/blob/c3f94779c3a0a64daca27851ac5b3d88dee3f757/pkg/project/apiserver/registry/projectrequest/delegated/delegated.go#L243

What I find confusing is the kubectl.kubernetes.io/last-applied-configuration annotation, which doesn't match the template. Could you run:
- `oc get template -n default project-request -o yaml` (the YAML provided above is not a dump, because it is missing e.g. the namespace field, and the last-applied-configuration looks strange / contains a NetworkPolicy that is not in the objects field)
- `oc version`
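If the annotation turns out to be just a leftover from an earlier oc apply, it can also be dropped before re-testing (a sketch, assuming the template lives in the default namespace):

# The trailing dash removes the annotation
$ oc annotate template project-request -n default \
    kubectl.kubernetes.io/last-applied-configuration-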
I've requested the information from the customer and will let you know once we have it. Just to be sure: do you need us to create the objects individually, or can you do it yourself?
This bug hasn't had any engineering activity in the last ~30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale". If you have further information on the current state of the bug, please update it and remove the "LifecycleStale" keyword, otherwise this bug will be automatically closed in 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.
This bug hasn't had any activity in the 7 days since it was marked as LifecycleStale, so we are closing it as WONTFIX. If you consider this bug still valuable, please reopen it or create a new bug.
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 1000 days.