Bug 1290312 - Password for 'hacluster' user not set during installation with ansible
Status: CLOSED DUPLICATE of bug 1288481
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.1.0
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: medium
Assigned To: Andrew Butcher
QA Contact: Ma xiaoqiang
Reported: 2015-12-10 02:47 EST by Andrej Golis
Modified: 2016-07-03 20:45 EDT
CC List: 5 users

Doc Type: Bug Fix
Last Closed: 2016-01-05 09:34:55 EST
Type: Bug

Attachments: None
Description Andrej Golis 2015-12-10 02:47:54 EST
Description of problem:

TASK: [openshift_master_cluster | Authenticate to the cluster] **************** 
Thursday 10 December 2015  08:22:18 +0100 (0:00:00.023)       0:07:59.037 ***** 
failed: [host1.example.com] => {"changed": true, "cmd": ["pcs", "cluster", "auth", "-u", "hacluster", "-p", "Start123", "host1.example.com", "host2.example.com", "host3.example.com"], "delta": "0:00:02.255881", "end": "2015-12-10 07:22:21.394870", "rc": 1, "start": "2015-12-10 07:22:19.138989", "warnings": []}
stderr: Error: host3.example.com: Username and/or password is incorrect
Error: host2.example.com: Username and/or password is incorrect
Error: host1.example.com: Username and/or password is incorrect


How reproducible:

Run the openshift-ansible playbook to install OSE 3.1, following the official docs and using pacemaker clustering.

vars from inventory:

openshift_master_cluster_method: pacemaker
openshift_master_cluster_password: Start123
openshift_master_cluster_vip: 10.10.10.100
openshift_master_cluster_public_vip: 10.10.10.100
openshift_master_cluster_hostname: host-vip.example.com
openshift_master_cluster_public_hostname: host-vip.example.com
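
For reference, a minimal sketch of how such an install is typically kicked off, assuming the openshift-ansible RPM layout under /usr/share/ansible/openshift-ansible and an inventory at /etc/ansible/hosts (both paths, and the "admin" host, are assumptions; adjust to the actual environment):

[root@admin ~]# cd /usr/share/ansible/openshift-ansible        # path assumed (RPM install location)
[root@admin ~]# ansible-playbook -i /etc/ansible/hosts playbooks/byo/config.yml   # inventory path assumed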

Actual results:

TASK: [openshift_master | Set the cluster user password] ********************** 
Thursday 10 December 2015  08:20:04 +0100 (0:00:00.782)       0:05:44.497 ***** 
skipping: [host1.example.com]

TASK: [openshift_master | Set the cluster user password] ********************** 
Thursday 10 December 2015  08:20:54 +0100 (0:00:00.769)       0:06:34.953 ***** 
skipping: [host2.example.com]

TASK: [openshift_master | Set the cluster user password] ********************** 
Thursday 10 December 2015  08:21:51 +0100 (0:00:00.799)       0:07:31.816 ***** 
skipping: [host3.example.com]

TASK: [openshift_master_cluster | Authenticate to the cluster] **************** 
Thursday 10 December 2015  08:22:18 +0100 (0:00:00.023)       0:07:59.037 ***** 
failed: [host1.example.com] => {"changed": true, "cmd": ["pcs", "cluster", "auth", "-u", "hacluster", "-p", "Start123", "host1.example.com", "host2.example.com", "host3.example.com"], "delta": "0:00:02.255881", "end": "2015-12-10 07:22:21.394870", "rc": 1, "start": "2015-12-10 07:22:19.138989", "warnings": []}
stderr: Error: host3.example.com: Username and/or password is incorrect
Error: host2.example.com: Username and/or password is incorrect
Error: host1.example.com: Username and/or password is incorrect

FATAL: all hosts have already failed -- aborting

[root@host2 ~]# grep hacluster /etc/{passwd,shadow*}
/etc/passwd:hacluster:x:189:189:cluster user:/home/hacluster:/sbin/nologin
/etc/shadow:hacluster:!!:16778::::::

Expected results:

Installation finishes successfully.

[root@host2 ~]# grep hacluster /etc/{passwd,shadow*}
/etc/passwd:hacluster:x:189:189:cluster user:/home/hacluster:/sbin/nologin
/etc/shadow:hacluster:$6$CVkfYhp/$Zmz8qWdaaPfNKHsfZYcdJrS1YVsVW1dhl.xNXC2jnVlsKhda3J6KfneHVaaZ55Yb4rgxPjJ9mOukud3GJzNUQ0:16779::::::
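
A quick way to check on each master whether the hacluster password actually took is passwd -S, which reports whether the account's password is locked/empty or set (host shown is just one of the masters from this report):

[root@host2 ~]# passwd -S hacluster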

Additional info:

Installation continues after manually setting the password for the hacluster user and re-running the playbook.
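
A sketch of that manual workaround, run on each master before re-running the playbook; the password must match openshift_master_cluster_password (hosts and password below are the ones from this report):

[root@host1 ~]# echo 'Start123' | passwd --stdin hacluster     # set the cluster user password

The failing step can then also be repeated by hand to confirm authentication works:

[root@host1 ~]# pcs cluster auth -u hacluster -p Start123 host1.example.com host2.example.com host3.example.com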
