Bug 1323683 - LDAP setup using Ansible
Summary: LDAP setup using Ansible
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Scott Dodson
QA Contact: Xiaoli Tian
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-04 12:30 UTC by Alexander Koksharov
Modified: 2019-10-10 11:46 UTC
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-31 15:31:33 UTC
Target Upstream Version:


Attachments

Description Alexander Koksharov 2016-04-04 12:30:43 UTC
Description of problem:
When we reinstalled our cluster, we tried to have Ansible automatically configure the LDAP integration by adding a line like the following to our /etc/ansible/hosts file:
openshift_master_identity_providers=[{'name': 'Active_Directory', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'CN=******', 'bindPassword': '**********', 'ca': 'certificate.cer', 'insecure': 'false', 'url': 'ldaps://*******'}]

As you can see above, we have specified a certificate, certificate.cer. The problem, of course, is that this certificate is not present under /etc/origin/master/ at the time Ansible runs the playbook, so the playbook run fails. We also cannot put the certificate in place before running the Ansible playbook, because at that point /etc/origin does not yet exist on the file system.

Any suggestions on how to resolve this issue?

Some additional information: we are setting up a fully HA system (I will attach the /etc/ansible/hosts to this issue):
- 2 masters
- 3 etcd nodes
- 1 node

Additionally, we noticed that the installation is left broken when we put the certificate in place on both masters after the initial failed Ansible run: when we run the playbook again, the atomic-openshift-master-api.service is unable to start. When we instead uninstall OpenShift using the provided playbook, remove the LDAP configuration line, and run the installation playbook again, everything works (though we then need to configure the LDAP integration manually afterwards). Of course, this is no longer an issue once we find a way to resolve the missing-certificate problem described above.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
It looks like the certificate file is not copied onto the master nodes; the values defined in the hosts file go into master-config.yaml unchanged.

Test 1:
- Stored the certificate in the root directory as /rh-ldap-ca.crt
- hosts file has: 'ca': '/rh-ldap-ca.crt'
- Ran the Ansible playbook; it finished without problems.
- The resulting master-config.yaml had "ca: /rh-ldap-ca.crt"

Test 2:
- Certificate stored in the root directory
- hosts file has: 'ca': 'rh-ldap-ca.crt'
- Ansible fails.
- The generated master-config.yaml had "ca: rh-ldap-ca.crt"

So, before running the Ansible playbook, the LDAP certificate must already be stored on all masters in a location accessible to the OpenShift master process, and it must be referenced by absolute path.
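The working setup from test 1 can be automated with a short pre-play that distributes the certificate before the installer runs. This is only a sketch: the group name `masters` and the local file name `rh-ldap-ca.crt` are assumptions about this particular inventory.

```yaml
# Hypothetical pre-play, run before the OpenShift installer playbook.
# Assumes an inventory group "masters" and a local copy of rh-ldap-ca.crt.
- hosts: masters
  become: true
  tasks:
    - name: Copy the LDAP CA certificate to every master
      copy:
        src: rh-ldap-ca.crt        # path on the Ansible control host
        dest: /rh-ldap-ca.crt      # absolute path referenced by the 'ca' setting
        mode: '0644'
```

With the certificate in place on every master, the inventory's 'ca' value can point at the absolute path /rh-ldap-ca.crt, matching test 1.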

Comment 1 Brenton Leanhardt 2016-04-04 15:29:36 UTC
There's no reason the certificate has to be under /etc/origin if you are running an RPM installation, so in that case I would simply place the file somewhere else. Somewhere under /etc/pki/CA/ might be appropriate.
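Following that suggestion, the inventory entry would reference the certificate by its absolute path under /etc/pki/CA/. This is a sketch based on the original inventory line; the file name is a placeholder, and the certificate must already exist at that path on every master:

```ini
openshift_master_identity_providers=[{'name': 'Active_Directory', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'CN=******', 'bindPassword': '**********', 'ca': '/etc/pki/CA/certificate.cer', 'insecure': 'false', 'url': 'ldaps://*******'}]
```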

Containerized installs are a little different, though. The only reason the running Master container can read a certificate is that we mount /etc/origin into it; Docker creates that directory when the container runs for the first time. There's no harm in creating /etc/origin/master/certificate.cer prior to running the installation.
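For a containerized install, the same idea can be expressed as a pre-play that creates the mount point and drops the certificate in before the container first runs. Again a hedged sketch: the group name and local file name are assumptions, while the destination path is the one from the original inventory:

```yaml
# Hypothetical pre-play for containerized installs.
- hosts: masters
  become: true
  tasks:
    - name: Ensure /etc/origin/master exists before the container first runs
      file:
        path: /etc/origin/master
        state: directory
        mode: '0755'
    - name: Place the LDAP CA certificate where the master container will see it
      copy:
        src: certificate.cer       # local copy on the control host (assumed)
        dest: /etc/origin/master/certificate.cer
        mode: '0644'
```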

Comment 3 Scott Dodson 2019-01-31 15:31:33 UTC
There appear to be no active cases related to this bug. As such we're closing this bug in order to focus on bugs that are still tied to active customer cases. Please re-open this bug if you feel it was closed in error or a new active case is attached.

