Bug 1404561

Summary: Problems finishing installation of OSE 3.3
Product: OpenShift Container Platform
Reporter: Pavel Zagalsky <pzagalsk>
Component: Master
Assignee: Maciej Szulik <maszulik>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Chuan Yu <chuyu>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.3.0
CC: aos-bugs, jforrest, jokerman, mmccomas, pzagalsk, sdodson
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-16 16:28:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Attachments: OSE 3.3 Installation log (flags: none)

Description Pavel Zagalsky 2016-12-14 06:18:55 UTC
Created attachment 1231447 [details]
OSE 3.3 Installation log

Description of problem:
I attempted to install OSE 3.3 on RHEL 7.2 following this Mojo document:
https://mojo.redhat.com/docs/DOC-1060819
I was not able to finish the installation; the errors are in the attached log file.



How reproducible:
Always

Steps to Reproduce:
1. Go through steps in https://mojo.redhat.com/docs/DOC-1060819

Actual results:
Errors at the end of the Ansible installation; the OpenShift service starts and then fails.


Additional info:
The installed version is:
atomic-openshift-utils-3.3.54-1.git.0.61a1dee.el7.noarch
Please check the attached logs.
Please ping me for the address and credentials of the machine if needed

Comment 1 Scott Dodson 2016-12-14 13:13:12 UTC
I asked Pavel to open this bug. His setup is pretty simple, but for some reason the API server appears to be failing to bootstrap. This happens after clearing the contents of the etcd data store and restarting.

Comment 2 Scott Dodson 2016-12-14 13:14:41 UTC
Dec 12 11:07:02 cmTeamMaster-01.rhq.lab.eng.bos.redhat.com atomic-openshift-master[12223]: F1212 11:07:02.601291   12223 master.go:156] Failed to get supported resources from server: [User "system:serviceaccount:openshift-infra:namespace-controller" cannot "get" on "/apis/apps/v1alpha1", User "system:serviceaccount:openshift-infra:namespace-controller" cannot "get" on "/apis/authentication.k8s.io/v1beta1", User "system:serviceaccount:openshift-infra:namespace-controller" cannot "get" on "/apis/autoscaling/v1", User "system:serviceaccount:openshift-infra:namespace-controller" cannot "get" on "/apis/batch/v1", User "system:serviceaccount:openshift-infra:namespace-controller" cannot "get" on "/apis/batch/v2alpha1"]

This is the fatal error from the logs.
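As an editorial aside: the F-prefixed entry is the one to look for. A hedged sketch of filtering just the fatal lines, assuming the glog format shown above (the sample file stands in for output captured with `journalctl -u atomic-openshift-master`; the path is made up):

```shell
# The master logs in glog format: each message starts with a severity
# letter (I/W/E/F) followed by the date as MMDD. The heredoc below is a
# stand-in for a saved journal dump, e.g.:
#   journalctl -u atomic-openshift-master > /tmp/master.log
cat > /tmp/master.log <<'EOF'
I1212 10:44:15.245517 30125 ensure.go:224] No cluster policy found.
E1212 10:44:16.613085 30125 ensure.go:261] Could not auto reconcile roles
F1212 11:07:02.601291 12223 master.go:156] Failed to get supported resources from server
EOF
# Keep only the fatal (F-level) entries:
grep -E '(^| )F[0-9]{4} ' /tmp/master.log
```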

Comment 3 Jordan Liggitt 2016-12-19 15:50:19 UTC
Are the controllers starting before the API server has started?

Comment 4 Scott Dodson 2016-12-19 15:52:18 UTC
In this environment they're not running as separate services, so they're operating in whatever order `openshift start master` starts them.

Comment 5 Maciej Szulik 2017-02-07 11:46:10 UTC
It looks like there is a problem with the initial roles, which should not happen. See the following log entries:

Dec 12 10:44:16 cmTeamMaster-01.rhq.lab.eng.bos.redhat.com atomic-openshift-master[30125]: E1212 10:44:16.613085   30125 ensure.go:261] Could not auto reconcile roles: role "system:discovery" is forbidden: user "system:openshift-master" cannot grant extra privileges:

Dec 12 10:44:16 cmTeamMaster-01.rhq.lab.eng.bos.redhat.com atomic-openshift-master[30125]: E1212 10:44:16.625796   30125 ensure.go:274] Could not auto reconcile role bindings: role "system:discovery" not found

The default policy is created here:

Dec 12 10:44:15 cmTeamMaster-01.rhq.lab.eng.bos.redhat.com atomic-openshift-master[30125]: I1212 10:44:15.245517   30125 ensure.go:224] No cluster policy found.  Creating bootstrap policy based on: /etc/origin/master/policy.json

Please verify that the contents of this file are correct; from the errors above, it does not look like they are.
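A first sanity check on that file is whether it parses at all. A hedged sketch, using a throwaway sample so the commands run anywhere (on a real master the path would be /etc/origin/master/policy.json, and its actual contents would be the full bootstrap policy, not this minimal stand-in):

```shell
# Hypothetical sample standing in for /etc/origin/master/policy.json:
POLICY=/tmp/policy.json
printf '{"kind": "ClusterPolicy", "roles": []}\n' > "$POLICY"
# json.tool exits non-zero and prints a parse error if the JSON is malformed:
python3 -m json.tool "$POLICY" > /dev/null && echo "policy.json parses as valid JSON"
```

A parse failure here would explain the "No cluster policy found" / failed role reconciliation sequence in the log.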

Comment 6 Maciej Szulik 2017-02-09 11:21:02 UTC
Pavel any news on this one?

Comment 7 Pavel Zagalsky 2017-02-16 16:18:24 UTC
We were able to finish the installation in the end.
Not much to add from my side at the moment.