Bug 1584105 - Failed to provision ServiceInstance because group-granted privileges are not honored during provisioning
Summary: Failed to provision ServiceInstance because group-granted privileges are not honored during provisioning
Keywords:
Status: CLOSED DUPLICATE of bug 1610991
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Templates
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.11.0
Assignee: Ben Parees
QA Contact: XiuJuan Wang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-30 10:09 UTC by Daein Park
Modified: 2021-09-09 14:18 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-08-02 17:16:16 UTC
Target Upstream Version:
Embargoed:



Description Daein Park 2018-05-30 10:09:44 UTC
Description of problem:

A member user of a group that is bound to the "cluster-admin" cluster role creates resources from a custom template. The resulting ServiceInstance ends up in a failed status with the error messages below.

There is no problem when a user who is bound directly to cluster-admin creates the same resources from the same template.

* From Web console overview page
~~~
The service failed. Provision call failed: buildconfigs "example" is forbidden: User "member" cannot get buildconfigs in project "example"
~~~

* From CLI
~~~
...
Status:
  Async Op In Progress:	false
  Conditions:
    Last Transition Time:		2018-05-30T09:17:29Z
    Message:				Provision call failed: buildconfigs "example" is forbidden: User "member" cannot get buildconfigs in project "example"
    Reason:				ProvisionCallFailed
    Status:				False
    Type:				Ready
    Last Transition Time:		2018-05-30T09:17:35Z
    Message:				Provision call failed: buildconfigs "example" is forbidden: User "member" cannot get buildconfigs in project "example"
    Reason:				ProvisionCallFailed
    Status:				True
    Type:				Failed
  Deprovision Status:			Required
  Orphan Mitigation In Progress:	false
  Reconciled Generation:		1
Events:
  FirstSeen	LastSeen	Count	From					SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----					-------------	--------	------			-------
  16m		16m		1	service-catalog-controller-manager			Normal		Provisioning		The instance is being provisioned asynchronously
  16m		16m		2	service-catalog-controller-manager			Warning		ProvisionCallFailed	Error provisioning ServiceInstance of ClusterServiceClass (K8S: "57c16f69-63d3-11e8-9a58-123456abcdef" ExternalName: "create-hosting-php") at ClusterServiceBroker "template-service-broker": Status: 409; ErrorMessage: <nil>; Description: <nil>; ResponseError: <nil>
  16m		16m		1	service-catalog-controller-manager			Warning		ProvisionCallFailed	Provision call failed: buildconfigs "example" is forbidden: User "member" cannot get buildconfigs in project "example"
~~~
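
A quick way to see the same authorization gap from the CLI is to check the permission while impersonating the user with and without the group. This is a minimal diagnostic sketch, assuming it is run while logged in as a cluster admin (e.g. system:admin); impersonating only the user does not carry the user's group memberships, which appears to mirror the check the template service broker performs here. Expected output is shown for illustration.
~~~
# oc auth can-i get buildconfigs -n example --as=member
no

# oc auth can-i get buildconfigs -n example --as=member --as-group=example-group
yes
~~~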


Version-Release number of selected component (if applicable):

# oc version
oc v3.7.23
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible:

* In this case, authentication is based on htpasswd.

* The custom template definition (YAML) is as follows.
~~~
apiVersion: v1
kind: Template
labels:
  template: example-template
metadata:
  annotations:
    iconClass: icon-php
    tags: hosting,php
  name: example-template
objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      name: ${EXAMPLE}
    name: ${EXAMPLE}
- apiVersion: v1
  kind: BuildConfig
  metadata:
    labels:
      name: ${EXAMPLE}
    name: ${EXAMPLE}
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: ${EXAMPLE}:latest
    source:
      git:
        ref: master
        uri: https://github.com/openshift/cakephp-ex.git
      type: Git
    strategy:
      sourceStrategy:
        forcePull: true
        from:
          kind: ImageStreamTag
          name: php:latest
          namespace: openshift
      type: Source
    triggers:
    - imageChange: {}
      type: ImageChange
    - type: ConfigChange
    - generic:
        secret: abcdefgh12345678
      type: Generic
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    labels:
      name: ${EXAMPLE}
    name: ${EXAMPLE}
  spec:
    replicas: 2
    selector:
      deploymentConfig: ${EXAMPLE}
    strategy:
      type: Rolling
    template:
      metadata:
        labels:
          deploymentConfig: ${EXAMPLE}
          name: ${EXAMPLE}
        name: ${EXAMPLE}
      spec:
        containers:
        - env:
          - name: TZ
            value: Asia/Tokyo
          image: ${EXAMPLE}
          imagePullPolicy: Always
          name: ${EXAMPLE}
          ports:
          - containerPort: 8080
            name: http
            protocol: TCP
    triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
        - ${EXAMPLE}
        from:
          kind: ImageStreamTag
          name: ${EXAMPLE}:latest
      type: ImageChange
    - type: ConfigChange
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      description: Exposes and load balances the application pods
    labels:
      name: ${EXAMPLE}
    name: ${EXAMPLE}
  spec:
    ports:
    - name: 8080-tcp
      port: 8080
      targetPort: 8080
    selector:
      deploymentConfig: ${EXAMPLE}
parameters:
- description: example
  displayName: example
  name: EXAMPLE
  required: true
  value: example
~~~
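
For comparison, the template itself can be instantiated directly with oc (bypassing the template service broker), which uses the caller's full identity including group memberships. A minimal sketch, assuming the template above is saved as example-template.yaml and the member user is logged in:
~~~
# oc process -f example-template.yaml -p EXAMPLE=example | oc create -n example -f -
~~~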

* The group and user setup is as follows.
~~~
# oc whoami
system:admin

# oc new-project example

# oadm groups new example-group

# oadm policy add-cluster-role-to-group cluster-admin example-group

# htpasswd -b /etc/origin/master/htpasswd member redhat

# oadm groups add-users example-group member

# oc get group example-group
NAME             USERS
example-group    member

# oc login -u member -p redhat

# oc whoami
member

# oc auth can-i '*' '*'
yes
~~~
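
To confirm that the permission comes only through the group, a who-can check can be run as a cluster admin. This is a sketch and the exact output format varies by version, but example-group should be listed among the groups allowed to get buildconfigs, while "member" should not appear as an individual user:
~~~
# oadm policy who-can get buildconfigs -n example
~~~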

#1 Create a new group bound to the cluster-admin cluster role, then add a new member user to the group.

#2 Create a new project using another user account, then create resources from the custom template as the group member account.

#3 Verify the error messages in the web console overview page or in the "oc describe serviceinstance -n your_project_name" CLI output.


Steps to Reproduce:
See steps #1-#3 under "How reproducible" above.

Actual results:
Provisioning the ServiceInstance fails with an authorization (privilege escalation) error, even though the user is a member of a group bound to the cluster-admin role.


Expected results:
The ServiceInstance is created successfully.


Additional info:

Comment 1 Daein Park 2018-06-01 04:23:00 UTC
This issue is also reproducible with the existing Redis (Ephemeral) template from the web console on OCP v3.7 and v3.9.

~~~
The service failed. Provision call failed: deploymentconfigs "redis" is forbidden: User "member" cannot get deploymentconfigs in project "example"
~~~

It looks like group membership is not taken into account when the user's authorization is evaluated while a ServiceInstance is created from a template through the service catalog.
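
As a workaround consistent with the observation above (provisioning succeeds when the role is bound directly to the user), the cluster role can be added to the user itself. A sketch, to be run as a cluster admin:
~~~
# oadm policy add-cluster-role-to-user cluster-admin member
~~~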

Comment 2 Erik Nelson 2018-08-02 17:08:47 UTC
https://bugzilla.redhat.com/show_bug.cgi?id=1584105 is the same bz against 3.9.z; this should get fixed by the PR posted there.

Comment 3 Ben Parees 2018-08-02 17:16:16 UTC

*** This bug has been marked as a duplicate of bug 1610991 ***

