
Bug 1852312

Summary: authentication pods panic after customizing oauth templates
Product: OpenShift Container Platform
Reporter: Yadan Pei <yapei>
Component: apiserver-auth
Assignee: Standa Laznicka <slaznick>
Status: CLOSED ERRATA
QA Contact: pmali
Severity: medium
Priority: medium
Version: 4.6
CC: aos-bugs, mfojtik, pasik, slaznick, yapei
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Last Closed: 2020-10-27 16:10:28 UTC
Type: Bug

Description Yadan Pei 2020-06-30 06:17:00 UTC
Description of problem:
After adding customized login, provider-selection, and error templates, the authentication pods fail to start.

Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-06-26-035408

How reproducible:
Always

Steps to Reproduce:
1. Create template files
# oc adm create-login-template > login.html
# oc adm create-provider-selection-template > providers.html
# oc adm create-error-template > errors.html
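
The exported default templates are enough to reproduce the issue; as a quick optional sanity check that all three files were generated:
# ls -l login.html providers.html errors.html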

2. Create secrets from the template files
# oc create secret generic login-template --from-file=/root/yapei/oauth-files/login.html -n openshift-config
secret/login-template created
# oc create secret generic error-template --from-file=/root/yapei/oauth-files/errors.html -n openshift-config
secret/error-template created
# oc create secret generic providers-template --from-file=/root/yapei/oauth-files/providers.html -n openshift-config
secret/providers-template created
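
Optionally, confirm that each secret carries the expected .html key in its Data section (a quick check, not required for the reproduction):
# oc describe secret login-template error-template providers-template -n openshift-config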

3. Edit the oauth/cluster resource to reference the template secrets (the resulting spec is shown below; an equivalent non-interactive command is sketched after it)
# oc get oauth cluster -o json | jq '.spec'
{
  "identityProviders": [
    {
      "htpasswd": {
        "fileData": {
          "name": "htpass-secret"
        }
      },
      "mappingMethod": "claim",
      "name": "flexy-htpasswd-provider",
      "type": "HTPasswd"
    },
    {
      "htpasswd": {
        "fileData": {
          "name": "htpasswd-rjbmr"
        }
      },
      "mappingMethod": "claim",
      "name": "qehtpasswd",
      "type": "HTPasswd"
    }
  ],
  "templates": {
    "error": {
      "name": "error-template"
    },
    "login": {
      "name": "login-template"
    },
    "providerSelection": {
      "name": "providers-template"
    }
  }
}
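
For reference, the same spec can be applied non-interactively; a minimal sketch, assuming the three secrets from step 2 already exist in openshift-config:
# oc patch oauth cluster --type=merge -p '{"spec":{"templates":{"login":{"name":"login-template"},"providerSelection":{"name":"providers-template"},"error":{"name":"error-template"}}}}'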

Actual results:
3. The new authentication pod goes into CrashLoopBackOff status, and its log shows a runtime panic:
# oc get pods -n openshift-authentication
NAME                               READY   STATUS             RESTARTS   AGE
oauth-openshift-77f7975575-wlzfk   0/1     CrashLoopBackOff   2          52s
oauth-openshift-7f4cdc57cb-7qtvf   1/1     Running            0          7m41s
oauth-openshift-7f4cdc57cb-lcr54   1/1     Running            0          7m54s

# oc logs -f oauth-openshift-77f7975575-wlzfk -n openshift-authentication
Copying system trust bundle
I0630 05:50:13.521096       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key"
I0630 05:50:13.521435       1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "sni-serving-cert::/var/config/system/secrets/v4-0-config-system-router-certs/apps.qe-ui46-0630.qe.devcluster.openshift.com::/var/config/system/secrets/v4-0-config-system-router-certs/apps.qe-ui46-0630.qe.devcluster.openshift.com"
panic: open /var/config/user/template/secret/v4-0-config-user-template-error/errors.html: no such file or directory

goroutine 1 [running]:
github.com/openshift/oauth-server/pkg/oauthserver.(*OAuthServerConfig).buildHandlerChainForOAuth(0xc000463c80, 0x1f67660, 0xc00040ca00, 0xc000445d40, 0x1977480, 0xc00095b4f8)
	github.com/openshift/oauth-server@/pkg/oauthserver/oauth_apiserver.go:307 +0xee
k8s.io/apiserver/pkg/server.completedConfig.New.func1(0x1f67660, 0xc00040ca00, 0x1f67660, 0xc00040ca00)
	k8s.io/apiserver.2/pkg/server/config.go:535 +0x45
k8s.io/apiserver/pkg/server.NewAPIServerHandler(0x1c1a852, 0xf, 0x1fab520, 0xc00067fc20, 0xc00095b710, 0x0, 0x0, 0xc00040c980)
	k8s.io/apiserver.2/pkg/server/handler.go:96 +0x2cc
k8s.io/apiserver/pkg/server.completedConfig.New(0xc000445d40, 0x0, 0x0, 0x1c1a852, 0xf, 0x1fc7d20, 0x2dd89a8, 0xc000445d40, 0x0, 0x0)
	k8s.io/apiserver.2/pkg/server/config.go:537 +0x124
github.com/openshift/oauth-server/pkg/oauthserver.completedOAuthConfig.New(0xc00040c980, 0xc000463c88, 0x1fc7d20, 0x2dd89a8, 0x4, 0x1fa9320, 0xc000580190)
	github.com/openshift/oauth-server@/pkg/oauthserver/oauth_apiserver.go:290 +0x70
github.com/openshift/oauth-server/pkg/cmd/oauth-server.RunOsinServer(0xc000496300, 0xc000236360, 0xc1c, 0xe1c)
	github.com/openshift/oauth-server@/pkg/cmd/oauth-server/server.go:41 +0x89
github.com/openshift/oauth-server/pkg/cmd/oauth-server.(*OsinServer).RunOsinServer(0xc0003aa890, 0xc000236360, 0xc0006cbaa0, 0x5eb040)
	github.com/openshift/oauth-server@/pkg/cmd/oauth-server/cmd.go:91 +0x286
github.com/openshift/oauth-server/pkg/cmd/oauth-server.NewOsinServer.func1(0xc0000cd900, 0xc0003049e0, 0x0, 0x2)
	github.com/openshift/oauth-server@/pkg/cmd/oauth-server/cmd.go:39 +0xf2
github.com/spf13/cobra.(*Command).execute(0xc0000cd900, 0xc000304980, 0x2, 0x2, 0xc0000cd900, 0xc000304980)
	github.com/spf13/cobra.5/command.go:830 +0x2aa
github.com/spf13/cobra.(*Command).ExecuteC(0xc0000cd680, 0xc0000cd680, 0x0, 0x0)
	github.com/spf13/cobra.5/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra.5/command.go:864
main.main()
	github.com/openshift/oauth-server@/cmd/oauth-server/main.go:41 +0x302
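
The panic is a plain file-not-found on the user error-template path, suggesting the rendered config and the mounted template secrets are out of sync. One way to list which template paths the crashing pod actually mounts (a diagnostic sketch, using the pod name from above):
# oc get pod oauth-openshift-77f7975575-wlzfk -n openshift-authentication \
    -o jsonpath='{range .spec.containers[0].volumeMounts[*]}{.mountPath}{"\n"}{end}' | grep template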

Expected results:
3. The new authentication pods should start successfully.

Additional info:

Comment 1 Yadan Pei 2020-06-30 06:52:33 UTC
In the logs of the operator pod authentication-operator-7f6fb6765c-fpqk7, the following diff of the oauthConfig templates was found:

                "oauthConfig": map[string]interface{}{
                        "templates": map[string]interface{}{
                                "error":             string("/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html"),
-                               "login":             string("/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html"),
+                               "login":             string("/var/config/user/template/secret/v4-0-config-user-template-login/login.html"),
-                               "providerSelection": string("/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"),
+                               "providerSelection": string("/var/config/user/template/secret/v4-0-config-user-template-provider-selection/providers.html"),
                        }
$ oc rsh -n openshift-authentication oauth-openshift-557f8fd675-jbn6z
sh-4.2# ls /var/config/system/secrets/v4-0-config-system-ocp-branding-template/
errors.html  login.html  providers.html


The template files are present under /var/config/system/secrets/v4-0-config-system-ocp-branding-template/, but the pod tries to open /var/config/user/template/secret/v4-0-config-user-template-error/errors.html, which does not exist (hence the "no such file or directory" panic).
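
For completeness, the user-template side of the mount can be inspected the same way (a sketch, reusing the pod from the oc rsh session above) to see which of the three user-template directories, if any, are actually present:
# oc rsh -n openshift-authentication oauth-openshift-557f8fd675-jbn6z ls /var/config/user/template/secret/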

Comment 6 errata-xmlrpc 2020-10-27 16:10:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196