Bug 1807109 - [RFE] Migrating workloads with OCP 3.x SSO integration
Summary: [RFE] Migrating workloads with OCP 3.x SSO integration
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Migration Toolkit for Containers
Classification: Red Hat
Component: General
Version: 1.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 1.5.0
Assignee: John Matthews
QA Contact: Xin jiang
Docs Contact: Avital Pinnick
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-02-25 15:50 UTC by Luis Arizmendi
Modified: 2021-04-08 02:57 UTC
CC: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-08 02:57:00 UTC
Target Upstream Version:
Embargoed:



Description Luis Arizmendi 2020-02-25 15:50:03 UTC
Description of problem:

When you migrate a workload that relies on OpenShift 3.x SSO, it may stop working in OCP 4. I found this while migrating a Prometheus workload (dedicated to application monitoring, not the instance included to monitor the cluster).

It probably fails because of a difference between OCP 3 and 4, as stated in the docs:

https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/migration/index

Unauthenticated access to discovery endpoints
In OpenShift Container Platform 3.11, an unauthenticated user could access the discovery endpoints (for example, /api/* and /apis/*). For security reasons, unauthenticated access to the discovery endpoints is no longer allowed in OpenShift Container Platform 4.3. If you do need to allow unauthenticated access, you can configure the RBAC settings as necessary; however, be sure to consider the security implications as this can expose internal cluster components to the external network.


It would be great if we could provide a documented workaround for this, either the RBAC configuration (with a warning about the security threats) or another solution.
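
For illustration, a minimal sketch of the RBAC change the OCP 4 docs allude to (my assumption: binding the built-in system:discovery cluster role to the system:unauthenticated group restores unauthenticated access to the discovery endpoints; weigh the security implications quoted above before applying it):

# Grant the discovery endpoints to unauthenticated clients cluster-wide (sketch)
oc create clusterrolebinding discovery-unauthenticated --clusterrole=system:discovery --group=system:unauthenticated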


Version-Release number of selected component (if applicable):
OCP 3.11 to OCP 4.3


How reproducible:
Always

Steps to Reproduce:
1. Deploy Prometheus with SSO
2. Migrate
3. Try to access Prometheus in OCP 4.3

Actual results:
SSO does not work; the oauth-proxy gets a 404 and returns an internal error (see the log below).

Expected results:
I am not sure it is a good idea to do anything during the migration itself, but it would be enough to mention this in the docs and offer some options.


Additional info:

This is the error:

2020/02/25 14:53:25 provider.go:524: 404 GET https://oauth-openshift.apps.cluster-03c3.03c3.sandbox518.opentlc.com/apis/user.openshift.io/v1/users/~ {
  "paths": [
    "/apis",
    "/healthz",
    "/healthz/log",
    "/healthz/ping",
    "/livez",
    "/livez/log",
    "/livez/ping",
    "/metrics",
    "/readyz",
    "/readyz/log",
    "/readyz/ping",
    "/readyz/shutdown"
  ]
}
2020/02/25 14:53:25 oauthproxy.go:582: error redeeming code (client:10.131.0.7:37126): unable to retrieve email address for user from token: got 404 {
  "paths": [
    "/apis",
    "/healthz",
    "/healthz/log",
    "/healthz/ping",
    "/livez",
    "/livez/log",
    "/livez/ping",
    "/metrics",
    "/readyz",
    "/readyz/log",
    "/readyz/ping",
    "/readyz/shutdown"
  ]
}
2020/02/25 14:53:25 oauthproxy.go:399: ErrorPage 500 Internal Error Internal Error
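
For reference, the 404 can presumably be reproduced outside the proxy with an unauthenticated request against the same endpoint taken from the log above (the curl invocation is my sketch, not part of the original report):

# -k skips TLS verification against the sandbox cluster's self-signed cert
curl -k https://oauth-openshift.apps.cluster-03c3.03c3.sandbox518.opentlc.com/apis/user.openshift.io/v1/users/~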


These are the steps I followed to deploy the app and Prometheus:

I used this example

https://labs.consol.de/development/2018/01/19/openshift_application_monitoring.html

Create a new project and deploy the app there:

oc new-project demoapplication

oc new-app -f https://raw.githubusercontent.com/ConSol/springboot-monitoring-example/master/templates/restservice_template.yaml -n demoapplication
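
A quick way to confirm the app came up before moving on (a verification sketch, not part of the original steps):

oc get pods -n demoapplication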

Create a new project for Prometheus and deploy it there:

oc new-project prometheus

oc new-app -f https://raw.githubusercontent.com/ConSol/springboot-monitoring-example/master/templates/prometheus3.7_with_clusterrole.yaml -p NAMESPACE=prometheus
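
Assuming the template creates a route for the SSO-protected Prometheus (which is what step 3 of the reproduction accesses), its URL can be looked up with:

oc get route -n prometheus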

In this example I didn’t create any persistence, because what we want to test is that Prometheus continues receiving the metrics (PV migration was already covered in previous tests with other apps).

Edit the prometheus configmap to include the app namespace (in three different places inside the YAML file); here is an example:

$ oc edit cm prometheus -o yaml
…
...
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - prometheus
          - demoapplication
…
…
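
To double-check that the edit landed in all three places, something like this should show each namespaces block (the grep pattern is my sketch):

oc get cm prometheus -o yaml | grep -A 3 'names:'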

Allow the prometheus service account to view the app namespace:

oc policy add-role-to-user view system:serviceaccount:prometheus:prometheus -n demoapplication
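
Whether the grant took effect can be verified before restarting Prometheus (a sketch using impersonation of the service account):

oc auth can-i list pods -n demoapplication --as=system:serviceaccount:prometheus:prometheus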

Force the reconfiguration by deleting the pod:

oc delete pod prometheus-0 --grace-period=0 --force
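
Once the pod is recreated, Prometheus should scrape targets from both namespaces; the restart can be watched with:

oc get pods -n prometheus -w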

Comment 3 Erik Nelson 2021-04-08 02:57:00 UTC
Closing as stale; please re-open if this is determined to be within the scope of MTC.

