Red Hat Bugzilla – Attachment 1450585 Details for Bug 1590507: Install fails - Control plane pods didn't come up: applying cgroup configuration for process caused "No such device or address"
ansible log with -vvv
ansible.log (text/plain), 1.35 MB, created by Vikas Laad on 2018-06-12 17:39:12 UTC
Description: ansible log with -vvv
Filename: ansible.log
MIME Type: text/plain
Creator: Vikas Laad
Created: 2018-06-12 17:39:12 UTC
Size: 1.35 MB
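For reference, a verbose log like this attachment is typically produced by running the deploy playbook with triple verbosity and capturing stdout and stderr to a file. A minimal sketch, assuming the inventory path (/root/inv) and openshift-ansible checkout location that appear in the attached log; adjust both for your environment:

```shell
# Sketch: capture a -vvv install log similar to this attachment.
# /root/inv and /root/openshift-ansible are taken from the attached log
# and are assumptions about the reporter's layout, not requirements.
cd /root
ansible-playbook -vvv -i /root/inv \
    openshift-ansible/playbooks/deploy_cluster.yml 2>&1 | tee ansible.log
```

Piping through `tee` keeps the output visible on the terminal while also writing the full log to ansible.log for attaching to a bug report.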
>2018-06-12 17:06:38,864 p=5860 u=root | ansible-playbook 2.4.4.0 > config file = /etc/ansible/ansible.cfg > configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] > ansible python module location = /usr/lib/python2.7/site-packages/ansible > executable location = /usr/bin/ansible-playbook > python version = 2.7.5 (default, May 30 2018, 12:39:41) [GCC 4.8.5 20150623 (Red Hat 4.8.5-34)] >2018-06-12 17:06:38,865 p=5860 u=root | Using /etc/ansible/ansible.cfg as config file >2018-06-12 17:06:38,878 p=5860 u=root | Parsed /root/inv inventory source with ini plugin >2018-06-12 17:06:39,373 p=5860 u=root | statically imported: /root/openshift-ansible/roles/rhel_subscribe/tasks/satellite.yml >2018-06-12 17:06:39,504 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq_install.yml >2018-06-12 17:06:39,508 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq/no-network-manager.yml >2018-06-12 17:06:39,510 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq/network-manager.yml >2018-06-12 17:06:39,514 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq.yml >2018-06-12 17:06:39,517 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/dnsmasq/network-manager.yml >2018-06-12 17:06:39,522 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/firewall.yml >2018-06-12 17:06:39,527 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/journald.yml >2018-06-12 17:06:39,531 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/install.yml >2018-06-12 17:06:39,536 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/registry_auth.yml >2018-06-12 17:06:39,542 p=5860 u=root | statically imported: 
/root/openshift-ansible/roles/openshift_node/tasks/config.yml >2018-06-12 17:06:39,545 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/systemd_units.yml >2018-06-12 17:06:39,550 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/node_system_container.yml >2018-06-12 17:06:39,556 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/config/configure-node-settings.yml >2018-06-12 17:06:39,560 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/config/configure-proxy-settings.yml >2018-06-12 17:06:39,565 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/aws.yml >2018-06-12 17:06:39,570 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/nfs.yml >2018-06-12 17:06:39,575 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/glusterfs.yml >2018-06-12 17:06:39,579 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/ceph.yml >2018-06-12 17:06:39,583 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/storage_plugins/iscsi.yml >2018-06-12 17:06:39,624 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/install_rpms.yml >2018-06-12 17:06:39,627 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_node/tasks/aws.yml >2018-06-12 17:06:39,741 p=5860 u=root | statically imported: /root/openshift-ansible/roles/etcd/tasks/set_facts.yml >2018-06-12 17:06:39,746 p=5860 u=root | statically imported: /root/openshift-ansible/roles/etcd/tasks/firewall.yml >2018-06-12 17:06:39,765 p=5860 u=root | statically imported: /root/openshift-ansible/roles/etcd/tasks/set_facts.yml >2018-06-12 17:06:39,768 p=5860 u=root | statically imported: 
/root/openshift-ansible/roles/etcd/tasks/firewall.yml >2018-06-12 17:06:39,804 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_storage_nfs/tasks/firewall.yml >2018-06-12 17:06:39,831 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_loadbalancer/tasks/firewall.yml >2018-06-12 17:06:40,037 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_control_plane/tasks/firewall.yml >2018-06-12 17:06:40,042 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_control_plane/tasks/static_shim.yml >2018-06-12 17:06:40,047 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_control_plane/tasks/htpass_provider.yml >2018-06-12 17:06:40,053 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml >2018-06-12 17:06:40,060 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml >2018-06-12 17:06:40,121 p=5860 u=root | statically imported: /root/openshift-ansible/roles/nuage_master/tasks/firewall.yml >2018-06-12 17:06:40,191 p=5860 u=root | statically imported: /root/openshift-ansible/playbooks/openshift-master/private/tasks/enable_bootstrap.yml >2018-06-12 17:06:40,267 p=5860 u=root | statically imported: /root/openshift-ansible/playbooks/openshift-master/private/tasks/enable_bootstrap_config.yml >2018-06-12 17:06:40,398 p=5860 u=root | statically imported: /root/openshift-ansible/roles/cockpit/tasks/firewall.yml >2018-06-12 17:06:40,652 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_hosted/tasks/firewall.yml >2018-06-12 17:06:40,680 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_hosted/tasks/firewall.yml >2018-06-12 17:06:41,195 p=5860 u=root | PLAYBOOK: deploy_cluster.yml 
**************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:41,196 p=5860 u=root | 107 plays in openshift-ansible/playbooks/deploy_cluster.yml >2018-06-12 17:06:41,490 p=5860 u=root | PLAY [Initialization Checkpoint Start] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:06:41,810 p=5860 u=root | META: ran handlers >2018-06-12 17:06:41,814 p=5860 u=root | TASK [Set install initialization 'In Progress'] ********************************************************************************************************************************************************************************************* >2018-06-12 17:06:41,815 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/main.yml:11 >2018-06-12 17:06:41,852 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_initialize": { > "playbook": "", > "start": "20180612170641Z", > "status": "In Progress", > "title": "Initialization" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:41,853 p=5860 u=root | META: ran handlers >2018-06-12 17:06:41,853 p=5860 u=root | META: ran handlers >2018-06-12 17:06:41,857 p=5860 u=root | PLAY [Populate config host groups] ********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:41,859 p=5860 u=root | META: ran handlers >2018-06-12 17:06:41,862 p=5860 u=root | TASK [Load group name mapping variables] 
**************************************************************************************************************************************************************************************************** >2018-06-12 17:06:41,862 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:7 >2018-06-12 17:06:41,880 p=5860 u=root | ok: [localhost] => { > "ansible_facts": { > "g_all_hosts": "{{ g_master_hosts | union(g_node_hosts) | union(g_etcd_hosts) | union(g_new_etcd_hosts) | union(g_lb_hosts) | union(g_nfs_hosts) | union(g_new_node_hosts)| union(g_new_master_hosts) | union(g_glusterfs_hosts) | union(g_glusterfs_registry_hosts) | default([]) }}", > "g_etcd_hosts": "{{ groups.etcd | default([]) }}", > "g_glusterfs_hosts": "{{ groups.glusterfs | default([]) }}", > "g_glusterfs_registry_hosts": "{{ groups.glusterfs_registry | default(g_glusterfs_hosts) }}", > "g_lb_hosts": "{{ groups.lb | default([]) }}", > "g_master_hosts": "{{ groups.masters | default([]) }}", > "g_new_etcd_hosts": "{{ groups.new_etcd | default([]) }}", > "g_new_master_hosts": "{{ groups.new_masters | default([]) }}", > "g_new_node_hosts": "{{ groups.new_nodes | default([]) }}", > "g_nfs_hosts": "{{ groups.nfs | default([]) }}", > "g_node_hosts": "{{ groups.nodes | default([]) }}" > }, > "ansible_included_var_files": [ > "/root/openshift-ansible/playbooks/init/vars/cluster_hosts.yml" > ], > "changed": false, > "failed": false >} >2018-06-12 17:06:41,885 p=5860 u=root | TASK [Evaluate groups - g_etcd_hosts or g_new_etcd_hosts required] ************************************************************************************************************************************************************************** >2018-06-12 17:06:41,885 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:10 >2018-06-12 17:06:41,901 p=5860 u=root | skipping: [localhost] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} 
>2018-06-12 17:06:41,904 p=5860 u=root | TASK [Evaluate groups - g_master_hosts or g_new_master_hosts required] ********************************************************************************************************************************************************************** >2018-06-12 17:06:41,904 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:15 >2018-06-12 17:06:41,920 p=5860 u=root | skipping: [localhost] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:41,923 p=5860 u=root | TASK [Evaluate groups - g_node_hosts or g_new_node_hosts required] ************************************************************************************************************************************************************************** >2018-06-12 17:06:41,923 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:20 >2018-06-12 17:06:41,938 p=5860 u=root | skipping: [localhost] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:41,941 p=5860 u=root | TASK [Evaluate groups - g_lb_hosts required] ************************************************************************************************************************************************************************************************ >2018-06-12 17:06:41,941 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:25 >2018-06-12 17:06:41,956 p=5860 u=root | skipping: [localhost] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:41,959 p=5860 u=root | TASK [Evaluate groups - g_nfs_hosts required] *********************************************************************************************************************************************************************************************** >2018-06-12 17:06:41,959 p=5860 u=root | task path: 
/root/openshift-ansible/playbooks/init/evaluate_groups.yml:30 >2018-06-12 17:06:41,975 p=5860 u=root | skipping: [localhost] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:41,978 p=5860 u=root | TASK [Evaluate groups - g_nfs_hosts is single host] ***************************************************************************************************************************************************************************************** >2018-06-12 17:06:41,978 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:35 >2018-06-12 17:06:41,993 p=5860 u=root | skipping: [localhost] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:41,997 p=5860 u=root | TASK [Evaluate groups - g_glusterfs_hosts required] ***************************************************************************************************************************************************************************************** >2018-06-12 17:06:41,997 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:40 >2018-06-12 17:06:42,010 p=5860 u=root | skipping: [localhost] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:42,014 p=5860 u=root | TASK [Evaluate oo_all_hosts] **************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,014 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:45 >2018-06-12 17:06:42,050 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,054 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ 
> "oo_all_hosts" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,057 p=5860 u=root | creating host via 'add_host': hostname=ec2-34-210-25-239.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,059 p=5860 u=root | ok: [localhost] => (item=ec2-34-210-25-239.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_all_hosts" > ], > "host_name": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,063 p=5860 u=root | creating host via 'add_host': hostname=ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,065 p=5860 u=root | ok: [localhost] => (item=ec2-34-220-195-16.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_all_hosts" > ], > "host_name": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,070 p=5860 u=root | TASK [Evaluate oo_masters] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,070 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:54 >2018-06-12 17:06:42,092 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,095 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_masters" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": 
{} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,099 p=5860 u=root | TASK [Evaluate oo_first_master] ************************************************************************************************************************************************************************************************************* >2018-06-12 17:06:42,099 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:63 >2018-06-12 17:06:42,119 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,122 p=5860 u=root | ok: [localhost] => { > "add_host": { > "groups": [ > "oo_first_master" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:42,125 p=5860 u=root | TASK [Evaluate oo_new_etcd_to_config] ******************************************************************************************************************************************************************************************************* >2018-06-12 17:06:42,125 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:72 >2018-06-12 17:06:42,140 p=5860 u=root | TASK [Evaluate oo_masters_to_config] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,141 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:81 >2018-06-12 17:06:42,163 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,165 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_masters_to_config" > ], > 
"host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,169 p=5860 u=root | TASK [Evaluate oo_etcd_to_config] *********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,169 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:90 >2018-06-12 17:06:42,189 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,192 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_etcd_to_config" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,196 p=5860 u=root | TASK [Evaluate oo_first_etcd] *************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,196 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:99 >2018-06-12 17:06:42,216 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,219 p=5860 u=root | ok: [localhost] => { > "add_host": { > "groups": [ > "oo_first_etcd" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:42,222 p=5860 u=root | TASK [Evaluate oo_etcd_hosts_to_upgrade] 
**************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,222 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:111 >2018-06-12 17:06:42,240 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,242 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_etcd_hosts_to_upgrade" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,246 p=5860 u=root | TASK [Evaluate oo_etcd_hosts_to_backup] ***************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,247 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:118 >2018-06-12 17:06:42,265 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,267 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_etcd_hosts_to_backup" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,271 p=5860 u=root | TASK [Evaluate oo_nodes_to_config] 
********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,271 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:125 >2018-06-12 17:06:42,293 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,296 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_nodes_to_config" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,300 p=5860 u=root | creating host via 'add_host': hostname=ec2-34-210-25-239.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,301 p=5860 u=root | ok: [localhost] => (item=ec2-34-210-25-239.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_nodes_to_config" > ], > "host_name": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,306 p=5860 u=root | creating host via 'add_host': hostname=ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,307 p=5860 u=root | ok: [localhost] => (item=ec2-34-220-195-16.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_nodes_to_config" > ], > "host_name": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,312 p=5860 u=root | TASK [Evaluate oo_nodes_to_bootstrap] 
******************************************************************************************************************************************************************************************************* >2018-06-12 17:06:42,312 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:134 >2018-06-12 17:06:42,334 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,338 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_nodes_to_bootstrap" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,341 p=5860 u=root | creating host via 'add_host': hostname=ec2-34-210-25-239.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,343 p=5860 u=root | ok: [localhost] => (item=ec2-34-210-25-239.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_nodes_to_bootstrap" > ], > "host_name": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,347 p=5860 u=root | creating host via 'add_host': hostname=ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,349 p=5860 u=root | ok: [localhost] => (item=ec2-34-220-195-16.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_nodes_to_bootstrap" > ], > "host_name": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,353 p=5860 u=root | TASK [Add masters to oo_nodes_to_bootstrap] 
************************************************************************************************************************************************************************************************* >2018-06-12 17:06:42,354 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:143 >2018-06-12 17:06:42,373 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,375 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_nodes_to_bootstrap" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,379 p=5860 u=root | TASK [Evaluate oo_lb_to_config] ************************************************************************************************************************************************************************************************************* >2018-06-12 17:06:42,379 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:152 >2018-06-12 17:06:42,395 p=5860 u=root | TASK [Evaluate oo_nfs_to_config] ************************************************************************************************************************************************************************************************************ >2018-06-12 17:06:42,395 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:161 >2018-06-12 17:06:42,410 p=5860 u=root | TASK [Evaluate oo_glusterfs_to_config] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,411 p=5860 u=root | task path: 
/root/openshift-ansible/playbooks/init/evaluate_groups.yml:170 >2018-06-12 17:06:42,429 p=5860 u=root | TASK [Evaluate oo_etcd_to_migrate] ********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,429 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/evaluate_groups.yml:179 >2018-06-12 17:06:42,449 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:42,451 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_etcd_to_migrate" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:42,453 p=5860 u=root | META: ran handlers >2018-06-12 17:06:42,453 p=5860 u=root | META: ran handlers >2018-06-12 17:06:42,456 p=5860 u=root | [WARNING]: Could not match supplied host pattern, ignoring: oo_lb_to_config > >2018-06-12 17:06:42,457 p=5860 u=root | [WARNING]: Could not match supplied host pattern, ignoring: oo_nfs_to_config > >2018-06-12 17:06:42,458 p=5860 u=root | PLAY [Ensure that all non-node hosts are accessible] **************************************************************************************************************************************************************************************** >2018-06-12 17:06:42,463 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:43,297 p=5860 u=root | Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:43,790 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:43,808 p=5860 u=root | META: ran handlers >2018-06-12 17:06:43,808 p=5860 u=root | META: ran handlers >2018-06-12 17:06:43,809 p=5860 u=root | META: ran handlers >2018-06-12 17:06:43,816 p=5860 u=root | PLAY [Initialize basic host facts] ********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:43,824 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:43,852 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:43,858 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:43,873 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:44,255 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:44,480 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:44,502 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:44,519 p=5860 u=root | META: ran handlers >2018-06-12 17:06:44,525 p=5860 u=root | TASK [openshift_sanitize_inventory : include_tasks] ***************************************************************************************************************************************************************************************** >2018-06-12 17:06:44,525 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:4 >2018-06-12 17:06:44,582 p=5860 u=root | included: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com, ec2-34-210-25-239.us-west-2.compute.amazonaws.com, ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:44,598 p=5860 u=root | TASK [openshift_sanitize_inventory : Check for usage of deprecated variables] *************************************************************************************************************************************************************** >2018-06-12 17:06:44,598 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:4 >2018-06-12 17:06:44,657 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", > "changed": false, > "failed": false >} >2018-06-12 17:06:44,664 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", > "changed": false, > "failed": false >} >2018-06-12 17:06:44,682 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", > "changed": false, > "failed": false >} >2018-06-12 17:06:44,689 p=5860 u=root | TASK [openshift_sanitize_inventory : debug] ************************************************************************************************************************************************************************************************* >2018-06-12 17:06:44,689 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:13 >2018-06-12 17:06:44,712 p=5860 u=root | 
skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:06:44,713 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:06:44,721 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:06:44,727 p=5860 u=root | TASK [openshift_sanitize_inventory : set_stats] ********************************************************************************************************************************************************************************************* >2018-06-12 17:06:44,728 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:14 >2018-06-12 17:06:44,750 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:44,752 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:44,759 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:44,765 p=5860 u=root | TASK [openshift_sanitize_inventory : Assign deprecated variables to correct counterparts] *************************************************************************************************************************************************** >2018-06-12 17:06:44,766 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/deprecations.yml:22 >2018-06-12 17:06:44,837 p=5860 u=root | included: 
/root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com, ec2-34-210-25-239.us-west-2.compute.amazonaws.com, ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:44,841 p=5860 u=root | included: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_metrics.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com, ec2-34-210-25-239.us-west-2.compute.amazonaws.com, ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:44,850 p=5860 u=root | TASK [openshift_sanitize_inventory : conditional_set_fact] ********************************************************************************************************************************************************************************** >2018-06-12 17:06:44,850 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml:5 >2018-06-12 17:06:44,909 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": {}, > "changed": false, > "failed": false >} >2018-06-12 17:06:44,913 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": {}, > "changed": false, > "failed": false >} >2018-06-12 17:06:44,931 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": {}, > "changed": false, > "failed": false >} >2018-06-12 17:06:44,938 p=5860 u=root | TASK [openshift_sanitize_inventory : set_fact] ********************************************************************************************************************************************************************************************** >2018-06-12 17:06:44,938 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_logging.yml:42 >2018-06-12 17:06:44,979 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => 
{ > "ansible_facts": { > "openshift_logging_elasticsearch_ops_pvc_dynamic": "", > "openshift_logging_elasticsearch_ops_pvc_prefix": "", > "openshift_logging_elasticsearch_ops_pvc_size": "", > "openshift_logging_elasticsearch_pvc_dynamic": true, > "openshift_logging_elasticsearch_pvc_prefix": "logging-es", > "openshift_logging_elasticsearch_pvc_size": "10Gi" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:44,986 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_logging_elasticsearch_ops_pvc_dynamic": "", > "openshift_logging_elasticsearch_ops_pvc_prefix": "", > "openshift_logging_elasticsearch_ops_pvc_size": "", > "openshift_logging_elasticsearch_pvc_dynamic": true, > "openshift_logging_elasticsearch_pvc_prefix": "logging-es", > "openshift_logging_elasticsearch_pvc_size": "10Gi" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,001 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_logging_elasticsearch_ops_pvc_dynamic": "", > "openshift_logging_elasticsearch_ops_pvc_prefix": "", > "openshift_logging_elasticsearch_ops_pvc_size": "", > "openshift_logging_elasticsearch_pvc_dynamic": true, > "openshift_logging_elasticsearch_pvc_prefix": "logging-es", > "openshift_logging_elasticsearch_pvc_size": "10Gi" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,008 p=5860 u=root | TASK [openshift_sanitize_inventory : conditional_set_fact] ********************************************************************************************************************************************************************************** >2018-06-12 17:06:45,008 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/__deprecations_metrics.yml:5 >2018-06-12 17:06:45,071 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": {}, > "changed": false, > "failed": 
false >} >2018-06-12 17:06:45,072 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": {}, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,089 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": {}, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,095 p=5860 u=root | TASK [openshift_sanitize_inventory : Standardize on latest variable names] ****************************************************************************************************************************************************************** >2018-06-12 17:06:45,095 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:7 >2018-06-12 17:06:45,125 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "deployment_subtype": "basic", > "openshift_deployment_subtype": "basic" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,135 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "deployment_subtype": "basic", > "openshift_deployment_subtype": "basic" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,145 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "deployment_subtype": "basic", > "openshift_deployment_subtype": "basic" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,152 p=5860 u=root | TASK [openshift_sanitize_inventory : Normalize openshift_release] *************************************************************************************************************************************************************************** >2018-06-12 17:06:45,152 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:12 >2018-06-12 17:06:45,186 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > 
"ansible_facts": { > "openshift_release": "3.10" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,195 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_release": "3.10" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,205 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_release": "3.10" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:45,211 p=5860 u=root | TASK [openshift_sanitize_inventory : Abort when openshift_release is invalid] *************************************************************************************************************************************************************** >2018-06-12 17:06:45,212 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:22 >2018-06-12 17:06:45,235 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,238 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,246 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,253 p=5860 u=root | TASK [openshift_sanitize_inventory : include_tasks] ***************************************************************************************************************************************************************************************** >2018-06-12 17:06:45,253 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:31 >2018-06-12 17:06:45,308 p=5860 u=root | included: 
/root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com, ec2-34-210-25-239.us-west-2.compute.amazonaws.com, ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:45,325 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure that openshift_use_dnsmasq is true] ************************************************************************************************************************************************************* >2018-06-12 17:06:45,326 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:5 >2018-06-12 17:06:45,350 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,350 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,358 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,365 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure that openshift_node_dnsmasq_install_network_manager_hook is true] ******************************************************************************************************************************* >2018-06-12 17:06:45,365 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:14 >2018-06-12 17:06:45,388 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,389 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > 
"skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,397 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,404 p=5860 u=root | TASK [openshift_sanitize_inventory : set_fact] ********************************************************************************************************************************************************************************************** >2018-06-12 17:06:45,404 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:22 >2018-06-12 17:06:45,454 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=openshift_hosted_registry_storage_kind) => { > "changed": false, > "item": "openshift_hosted_registry_storage_kind", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,470 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=openshift_hosted_registry_storage_kind) => { > "changed": false, > "item": "openshift_hosted_registry_storage_kind", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,482 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=openshift_logging_storage_kind) => { > "ansible_facts": { > "__using_dynamic": true > }, > "ansible_facts_cacheable": false, > "changed": false, > "failed": false, > "item": "openshift_logging_storage_kind" >} >2018-06-12 17:06:45,485 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=openshift_hosted_registry_storage_kind) => { > "changed": false, > "item": "openshift_hosted_registry_storage_kind", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,501 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => 
(item=openshift_logging_storage_kind) => { > "ansible_facts": { > "__using_dynamic": true > }, > "ansible_facts_cacheable": false, > "changed": false, > "failed": false, > "item": "openshift_logging_storage_kind" >} >2018-06-12 17:06:45,507 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=openshift_logging_storage_kind) => { > "ansible_facts": { > "__using_dynamic": true > }, > "ansible_facts_cacheable": false, > "changed": false, > "failed": false, > "item": "openshift_logging_storage_kind" >} >2018-06-12 17:06:45,515 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure that dynamic provisioning is set if using dynamic storage] ************************************************************************************************************************************** >2018-06-12 17:06:45,515 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:29 >2018-06-12 17:06:45,539 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,543 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,551 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,557 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ******************************************************************************************************************************** >2018-06-12 17:06:45,557 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:45 >2018-06-12 17:06:45,581 p=5860 u=root | 
skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,583 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,591 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,598 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure the hosted registry's GlusterFS storage is configured correctly] ******************************************************************************************************************************** >2018-06-12 17:06:45,599 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/unsupported.yml:58 >2018-06-12 17:06:45,622 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,623 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,633 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,640 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure clusterid is set along with the cloudprovider] ************************************************************************************************************************************************** >2018-06-12 17:06:45,640 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:35 >2018-06-12 17:06:45,664 
p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,666 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,678 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,684 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure ansible_service_broker_remove and ansible_service_broker_install are mutually exclusive] ******************************************************************************************************** >2018-06-12 17:06:45,684 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:48 >2018-06-12 17:06:45,707 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,708 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,716 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,723 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure template_service_broker_remove and template_service_broker_install are mutually exclusive] ****************************************************************************************************** >2018-06-12 17:06:45,723 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:57 >2018-06-12 
17:06:45,746 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,749 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,760 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,767 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure that all requires vsphere configuration variables are set] ************************************************************************************************************************************** >2018-06-12 17:06:45,767 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:66 >2018-06-12 17:06:45,792 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,800 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,806 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,813 p=5860 u=root | TASK [openshift_sanitize_inventory : ensure provider configuration variables are defined] *************************************************************************************************************************************************** >2018-06-12 17:06:45,813 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:83 
>2018-06-12 17:06:45,840 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,843 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,848 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,855 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure removed web console extension variables are not set] ******************************************************************************************************************************************** >2018-06-12 17:06:45,855 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:98 >2018-06-12 17:06:45,881 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,884 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,894 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,901 p=5860 u=root | TASK [openshift_sanitize_inventory : Ensure that web console port matches API server port] ************************************************************************************************************************************************** >2018-06-12 17:06:45,901 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:109 >2018-06-12 17:06:45,922 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,923 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,934 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:45,943 p=5860 u=root | TASK [openshift_sanitize_inventory : At least one master is schedulable] ******************************************************************************************************************************************************************** >2018-06-12 17:06:45,944 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_sanitize_inventory/tasks/main.yml:119 >2018-06-12 17:06:45,997 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,006 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,008 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,015 p=5860 u=root | TASK [Detecting Operating System from ostree_booted] **************************************************************************************************************************************************************************************** >2018-06-12 17:06:46,015 p=5860 u=root | 
task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:19 >2018-06-12 17:06:46,203 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:46,203 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:46,203 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:46,412 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/run/ostree-booted" > } > }, > "stat": { > "exists": false > } >} >2018-06-12 17:06:46,418 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/run/ostree-booted" > } > }, > "stat": { > "exists": false > } >} >2018-06-12 17:06:46,423 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/run/ostree-booted" > } > }, > "stat": { > "exists": false > } >} >2018-06-12 17:06:46,430 p=5860 u=root | TASK [set openshift_deployment_type if unset] *********************************************************************************************************************************************************************************************** >2018-06-12 17:06:46,430 p=5860 u=root | task path: 
/root/openshift-ansible/playbooks/init/basic_facts.yml:25 >2018-06-12 17:06:46,453 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,454 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,463 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,469 p=5860 u=root | TASK [check for node already bootstrapped] ************************************************************************************************************************************************************************************************** >2018-06-12 17:06:46,469 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:32 >2018-06-12 17:06:46,493 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:46,508 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:46,522 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:46,726 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/node/bootstrap-node-config.yaml" > } > }, > "stat": { > "atime": 1528820492.9835591, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "b74c893023c875b58c48dc45cfd1207218537a6c", > 
"ctime": 1528820492.9865592, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 83886614, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "63f1b709ed1aabcd879f2dc356e44df1", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820492.235541, > "nlink": 1, > "path": "/etc/origin/node/bootstrap-node-config.yaml", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 1524, > "uid": 0, > "version": "18446744072647733506", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:06:46,735 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/node/bootstrap-node-config.yaml" > } > }, > "stat": { > "atime": 1528820614.426227, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "b74c893023c875b58c48dc45cfd1207218537a6c", > "ctime": 1528820492.9662478, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 125829341, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "63f1b709ed1aabcd879f2dc356e44df1", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820492.2122483, > "nlink": 1, > "path": "/etc/origin/node/bootstrap-node-config.yaml", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > 
"size": 1524, > "uid": 0, > "version": "1260188634", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:06:46,738 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/node/bootstrap-node-config.yaml" > } > }, > "stat": { > "atime": 1528820493.078, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "b74c893023c875b58c48dc45cfd1207218537a6c", > "ctime": 1528820493.081, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 83886614, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "63f1b709ed1aabcd879f2dc356e44df1", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820492.3289933, > "nlink": 1, > "path": "/etc/origin/node/bootstrap-node-config.yaml", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 1524, > "uid": 0, > "version": "553225561", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:06:46,745 p=5860 u=root | TASK [initialize_facts set fact openshift_is_bootstrapped] ********************************************************************************************************************************************************************************** >2018-06-12 17:06:46,745 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:36 >2018-06-12 17:06:46,776 p=5860 u=root | ok: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_is_bootstrapped": true > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:46,784 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_is_bootstrapped": true > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:46,794 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_is_bootstrapped": true > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:46,800 p=5860 u=root | TASK [initialize_facts set fact openshift_is_atomic and openshift_is_containerized] ********************************************************************************************************************************************************* >2018-06-12 17:06:46,801 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:40 >2018-06-12 17:06:46,832 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_is_atomic": false, > "openshift_is_containerized": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:46,840 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_is_atomic": false, > "openshift_is_containerized": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:46,854 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_is_atomic": false, > "openshift_is_containerized": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:46,860 p=5860 u=root | TASK [Determine Atomic Host Docker Version] ************************************************************************************************************************************************************************************************* >2018-06-12 17:06:46,860 p=5860 u=root | 
task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:57 >2018-06-12 17:06:46,884 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,885 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,893 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,900 p=5860 u=root | TASK [assert atomic host docker version is 1.12 or later] *********************************************************************************************************************************************************************************** >2018-06-12 17:06:46,900 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:61 >2018-06-12 17:06:46,923 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,924 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,932 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:46,933 p=5860 u=root | META: ran handlers >2018-06-12 17:06:46,933 p=5860 u=root | META: ran handlers >2018-06-12 17:06:46,938 p=5860 u=root | PLAY [Retrieve existing master configs and validate] 
**************************************************************************************************************************************************************************************** >2018-06-12 17:06:46,939 p=5860 u=root | META: ran handlers >2018-06-12 17:06:46,945 p=5860 u=root | TASK [openshift_control_plane : stat] ******************************************************************************************************************************************************************************************************* >2018-06-12 17:06:46,945 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:3 >2018-06-12 17:06:46,971 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:47,183 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/master-config.yaml" > } > }, > "stat": { > "atime": 1528820650.8972127, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 16, > "charset": "us-ascii", > "checksum": "f0b9bfb7ff0583c7acb5cd7bbb4c22f8b3568c1f", > "ctime": 1528820650.8972127, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383772, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "d0bba32d29ec1b4c3c110b683d4b0ad0", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820650.8972127, > "nlink": 1, > "path": "/etc/origin/master/master-config.yaml", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 6018, > "uid": 0, > 
"version": "172906050", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:06:47,190 p=5860 u=root | TASK [openshift_control_plane : slurp] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,191 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:7 >2018-06-12 17:06:47,366 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py >2018-06-12 17:06:47,561 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "content": "admissionConfig:
  pluginConfig:
    BuildDefaults:
      configuration:
        apiVersion: v1
        env: []
        kind: BuildDefaultsConfig
        resources:
          limits: {}
          requests: {}
    BuildOverrides:
      configuration:
        apiVersion: v1
        kind: BuildOverridesConfig
    openshift.io/ImagePolicy:
      configuration:
        apiVersion: v1
        executionRules:
        - matchImageAnnotations:
          - key: images.openshift.io/deny-execution
            value: 'true'
          name: execution-denied
          onResources:
          - resource: pods
          - resource: builds
          reject: true
          skipOnResolutionFailure: true
        kind: ImagePolicyConfig
aggregatorConfig:
  proxyClientInfo:
    certFile: aggregator-front-proxy.crt
    keyFile: aggregator-front-proxy.key
apiLevels:
- v1
apiVersion: v1
authConfig:
  requestHeader:
    clientCA: front-proxy-ca.crt
    clientCommonNames:
    - aggregator-front-proxy
    extraHeaderPrefixes:
    - X-Remote-Extra-
    groupHeaders:
    - X-Remote-Group
    usernameHeaders:
    - X-Remote-User
controllerConfig:
  election:
    lockName: openshift-master-controllers
  serviceServingCert:
    signer:
      certFile: service-signer.crt
      keyFile: service-signer.key
controllers: '*'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
- (?i)//172\.31\.50\.118(:|\z)
- (?i)//54\.186\.168\.249(:|\z)
- (?i)//kubernetes\.default(:|\z)
- (?i)//ec2\-54\-186\-168\-249\.us\-west\-2\.compute\.amazonaws\.com(:|\z)
- (?i)//kubernetes\.default\.svc\.cluster\.local(:|\z)
- (?i)//kubernetes(:|\z)
- (?i)//openshift\.default(:|\z)
- (?i)//openshift\.default\.svc(:|\z)
- (?i)//openshift\.default\.svc\.cluster\.local(:|\z)
- (?i)//ip\-172\-31\-50\-118\.us\-west\-2\.compute\.internal(:|\z)
- (?i)//kubernetes\.default\.svc(:|\z)
- (?i)//openshift(:|\z)
- (?i)//172\.24\.0\.1(:|\z)
dnsConfig:
  bindAddress: 0.0.0.0:8053
  bindNetwork: tcp4
etcdClientInfo:
  ca: master.etcd-ca.crt
  certFile: master.etcd-client.crt
  keyFile: master.etcd-client.key
  urls:
  - https://ip-172-31-50-118.us-west-2.compute.internal:2379
etcdStorageConfig:
  kubernetesStoragePrefix: kubernetes.io
  kubernetesStorageVersion: v1
  openShiftStoragePrefix: openshift.io
  openShiftStorageVersion: v1
imageConfig:
  format: registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}
  latest: false
imagePolicyConfig:
  internalRegistryHostname: docker-registry.default.svc:5000
kind: MasterConfig
kubeletClientInfo:
  ca: ca-bundle.crt
  certFile: master.kubelet-client.crt
  keyFile: master.kubelet-client.key
  port: 10250
kubernetesMasterConfig:
  apiServerArguments:
    cloud-config:
    - /etc/origin/cloudprovider/aws.conf
    cloud-provider:
    - aws
    storage-backend:
    - etcd3
    storage-media-type:
    - application/vnd.kubernetes.protobuf
  controllerArguments:
    cloud-config:
    - /etc/origin/cloudprovider/aws.conf
    cloud-provider:
    - aws
    cluster-signing-cert-file:
    - /etc/origin/master/ca.crt
    cluster-signing-key-file:
    - /etc/origin/master/ca.key
    disable-attach-detach-reconcile-sync:
    - 'true'
  masterCount: 1
  masterIP: 172.31.50.118
  podEvictionTimeout: null
  proxyClientInfo:
    certFile: master.proxy-client.crt
    keyFile: master.proxy-client.key
  schedulerArguments: null
  schedulerConfigFile: /etc/origin/master/scheduler.json
  servicesNodePortRange: ''
  servicesSubnet: 172.24.0.0/14
  staticNodeNames: []
masterClients:
  externalKubernetesClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 400
    contentType: application/vnd.kubernetes.protobuf
    qps: 200
  externalKubernetesKubeConfig: ''
  openshiftLoopbackClientConnectionOverrides:
    acceptContentTypes: application/vnd.kubernetes.protobuf,application/json
    burst: 600
    contentType: application/vnd.kubernetes.protobuf
    qps: 300
  openshiftLoopbackKubeConfig: openshift-master.kubeconfig
masterPublicURL: https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443
networkConfig:
  clusterNetworks:
  - cidr: 172.20.0.0/14
    hostSubnetLength: 9
  externalIPNetworkCIDRs:
  - 0.0.0.0/0
  networkPluginName: redhat/openshift-ovs-networkpolicy
  serviceNetworkCIDR: 172.24.0.0/14
oauthConfig:
  assetPublicURL: https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console/
  grantConfig:
    method: auto
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: allow_all
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider
  masterCA: ca-bundle.crt
  masterPublicURL: https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443
  masterURL: https://ip-172-31-50-118.us-west-2.compute.internal:8443
  sessionConfig:
    sessionMaxAgeSeconds: 3600
    sessionName: ssn
    sessionSecretsFile: /etc/origin/master/session-secrets.yaml
  tokenConfig:
    accessTokenMaxAgeSeconds: 86400
    authorizeTokenMaxAgeSeconds: 500
pauseControllers: false
policyConfig:
  bootstrapPolicyFile: /etc/origin/master/policy.json
  openshiftInfrastructureNamespace: openshift-infra
  openshiftSharedResourcesNamespace: openshift
projectConfig:
  defaultNodeSelector: node-role.kubernetes.io/compute=true
  projectRequestMessage: ''
  projectRequestTemplate: ''
  securityAllocator:
    mcsAllocatorRange: s0:/2
    mcsLabelsPerProject: 5
    uidAllocatorRange: 1000000000-1999999999/10000
routingConfig:
  subdomain: apps.0612-g-9.qe.rhcloud.com
serviceAccountConfig:
  limitSecretReferences: false
  managedNames:
  - default
  - builder
  - deployer
  masterCA: ca-bundle.crt
  privateKeyFile: serviceaccounts.private.key
  publicKeyFiles:
  - serviceaccounts.public.key
servingInfo:
  bindAddress: 0.0.0.0:8443
  bindNetwork: tcp4
  certFile: master.server.crt
  clientCA: ca.crt
  keyFile: master.server.key
  maxRequestsInFlight: 500
  requestTimeoutSeconds: 3600
volumeConfig:
  dynamicProvisioningEnabled: true
", > "encoding": "base64", > "failed": false, > "invocation": { > "module_args": { > "src": "/etc/origin/master/master-config.yaml" > } > }, > "source": "/etc/origin/master/master-config.yaml" >} >2018-06-12 17:06:47,569 p=5860 u=root | TASK [openshift_control_plane : set_fact] *************************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,569 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:14 >2018-06-12 17:06:47,636 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "l_existing_config_master_config": { > "admissionConfig": { > "pluginConfig": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > } > }, > "aggregatorConfig": { > "proxyClientInfo": { > "certFile": "aggregator-front-proxy.crt", > "keyFile": "aggregator-front-proxy.key" > } > }, > "apiLevels": [ > "v1" > ], > "apiVersion": "v1", > "authConfig": { > "requestHeader": { > "clientCA": "front-proxy-ca.crt", > "clientCommonNames": [ > "aggregator-front-proxy" > ], > "extraHeaderPrefixes": [ > "X-Remote-Extra-" > ], > "groupHeaders": [ > "X-Remote-Group" > ], > "usernameHeaders": [ > "X-Remote-User" > ] 
> } > }, > "controllerConfig": { > "election": { > "lockName": "openshift-master-controllers" > }, > "serviceServingCert": { > "signer": { > "certFile": "service-signer.crt", > "keyFile": "service-signer.key" > } > } > }, > "controllers": "*", > "corsAllowedOrigins": [ > "(?i)//127\\.0\\.0\\.1(:|\\z)", > "(?i)//localhost(:|\\z)", > "(?i)//172\\.31\\.50\\.118(:|\\z)", > "(?i)//54\\.186\\.168\\.249(:|\\z)", > "(?i)//kubernetes\\.default(:|\\z)", > "(?i)//ec2\\-54\\-186\\-168\\-249\\.us\\-west\\-2\\.compute\\.amazonaws\\.com(:|\\z)", > "(?i)//kubernetes\\.default\\.svc\\.cluster\\.local(:|\\z)", > "(?i)//kubernetes(:|\\z)", > "(?i)//openshift\\.default(:|\\z)", > "(?i)//openshift\\.default\\.svc(:|\\z)", > "(?i)//openshift\\.default\\.svc\\.cluster\\.local(:|\\z)", > "(?i)//ip\\-172\\-31\\-50\\-118\\.us\\-west\\-2\\.compute\\.internal(:|\\z)", > "(?i)//kubernetes\\.default\\.svc(:|\\z)", > "(?i)//openshift(:|\\z)", > "(?i)//172\\.24\\.0\\.1(:|\\z)" > ], > "dnsConfig": { > "bindAddress": "0.0.0.0:8053", > "bindNetwork": "tcp4" > }, > "etcdClientInfo": { > "ca": "master.etcd-ca.crt", > "certFile": "master.etcd-client.crt", > "keyFile": "master.etcd-client.key", > "urls": [ > "https://ip-172-31-50-118.us-west-2.compute.internal:2379" > ] > }, > "etcdStorageConfig": { > "kubernetesStoragePrefix": "kubernetes.io", > "kubernetesStorageVersion": "v1", > "openShiftStoragePrefix": "openshift.io", > "openShiftStorageVersion": "v1" > }, > "imageConfig": { > "format": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "latest": false > }, > "imagePolicyConfig": { > "internalRegistryHostname": "docker-registry.default.svc:5000" > }, > "kind": "MasterConfig", > "kubeletClientInfo": { > "ca": "ca-bundle.crt", > "certFile": "master.kubelet-client.crt", > "keyFile": "master.kubelet-client.key", > "port": 10250 > }, > "kubernetesMasterConfig": { > "apiServerArguments": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > 
"aws" > ], > "storage-backend": [ > "etcd3" > ], > "storage-media-type": [ > "application/vnd.kubernetes.protobuf" > ] > }, > "controllerArguments": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "cluster-signing-cert-file": [ > "/etc/origin/master/ca.crt" > ], > "cluster-signing-key-file": [ > "/etc/origin/master/ca.key" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "masterCount": 1, > "masterIP": "172.31.50.118", > "podEvictionTimeout": null, > "proxyClientInfo": { > "certFile": "master.proxy-client.crt", > "keyFile": "master.proxy-client.key" > }, > "schedulerArguments": null, > "schedulerConfigFile": "/etc/origin/master/scheduler.json", > "servicesNodePortRange": "", > "servicesSubnet": "172.24.0.0/14", > "staticNodeNames": [] > }, > "masterClients": { > "externalKubernetesClientConnectionOverrides": { > "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", > "burst": 400, > "contentType": "application/vnd.kubernetes.protobuf", > "qps": 200 > }, > "externalKubernetesKubeConfig": "", > "openshiftLoopbackClientConnectionOverrides": { > "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", > "burst": 600, > "contentType": "application/vnd.kubernetes.protobuf", > "qps": 300 > }, > "openshiftLoopbackKubeConfig": "openshift-master.kubeconfig" > }, > "masterPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "networkConfig": { > "clusterNetworks": [ > { > "cidr": "172.20.0.0/14", > "hostSubnetLength": 9 > } > ], > "externalIPNetworkCIDRs": [ > "0.0.0.0/0" > ], > "networkPluginName": "redhat/openshift-ovs-networkpolicy", > "serviceNetworkCIDR": "172.24.0.0/14" > }, > "oauthConfig": { > "assetPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console/", > "grantConfig": { > "method": "auto" > }, > "identityProviders": [ > { > "challenge": true, > "login": true, > "mappingMethod": "claim", > 
"name": "allow_all", > "provider": { > "apiVersion": "v1", > "kind": "AllowAllPasswordIdentityProvider" > } > } > ], > "masterCA": "ca-bundle.crt", > "masterPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "masterURL": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "sessionConfig": { > "sessionMaxAgeSeconds": 3600, > "sessionName": "ssn", > "sessionSecretsFile": "/etc/origin/master/session-secrets.yaml" > }, > "tokenConfig": { > "accessTokenMaxAgeSeconds": 86400, > "authorizeTokenMaxAgeSeconds": 500 > } > }, > "pauseControllers": false, > "policyConfig": { > "bootstrapPolicyFile": "/etc/origin/master/policy.json", > "openshiftInfrastructureNamespace": "openshift-infra", > "openshiftSharedResourcesNamespace": "openshift" > }, > "projectConfig": { > "defaultNodeSelector": "node-role.kubernetes.io/compute=true", > "projectRequestMessage": "", > "projectRequestTemplate": "", > "securityAllocator": { > "mcsAllocatorRange": "s0:/2", > "mcsLabelsPerProject": 5, > "uidAllocatorRange": "1000000000-1999999999/10000" > } > }, > "routingConfig": { > "subdomain": "apps.0612-g-9.qe.rhcloud.com" > }, > "serviceAccountConfig": { > "limitSecretReferences": false, > "managedNames": [ > "default", > "builder", > "deployer" > ], > "masterCA": "ca-bundle.crt", > "privateKeyFile": "serviceaccounts.private.key", > "publicKeyFiles": [ > "serviceaccounts.public.key" > ] > }, > "servingInfo": { > "bindAddress": "0.0.0.0:8443", > "bindNetwork": "tcp4", > "certFile": "master.server.crt", > "clientCA": "ca.crt", > "keyFile": "master.server.key", > "maxRequestsInFlight": 500, > "requestTimeoutSeconds": 3600 > }, > "volumeConfig": { > "dynamicProvisioningEnabled": true > } > } > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:47,649 p=5860 u=root | TASK [openshift_control_plane : Check for file paths outside of /etc/origin/master in master's config] 
************************************************************************************************************************************** >2018-06-12 17:06:47,649 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:20 >2018-06-12 17:06:47,684 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "msg": "Aight, configs looking good" >} >2018-06-12 17:06:47,690 p=5860 u=root | TASK [openshift_control_plane : set_fact] *************************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,690 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/check_existing_config.yml:25 >2018-06-12 17:06:47,725 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_existing_idproviders": [ > { > "challenge": true, > "login": true, > "mappingMethod": "claim", > "name": "allow_all", > "provider": { > "apiVersion": "v1", > "kind": "AllowAllPasswordIdentityProvider" > } > } > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:47,732 p=5860 u=root | TASK [set_fact] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,732 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:82 >2018-06-12 17:06:47,763 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_portal_net": "172.24.0.0/14" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:47,769 p=5860 u=root | TASK [set_fact] 
***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,769 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:85 >2018-06-12 17:06:47,803 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "osm_cluster_network_cidr": "172.20.0.0/14", > "osm_host_subnet_length": "9" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:47,809 p=5860 u=root | TASK [set_fact] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,809 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:93 >2018-06-12 17:06:47,827 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:47,827 p=5860 u=root | META: ran handlers >2018-06-12 17:06:47,827 p=5860 u=root | META: ran handlers >2018-06-12 17:06:47,832 p=5860 u=root | PLAY [Initialize special first-master variables] ******************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,839 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:47,863 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:48,224 
p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:48,242 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,248 p=5860 u=root | TASK [set_fact] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,248 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:107 >2018-06-12 17:06:48,261 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,267 p=5860 u=root | TASK [set_fact] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,267 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:116 >2018-06-12 17:06:48,296 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "first_master_client_binary": "oc", > "l_osm_default_node_selector": "node-role.kubernetes.io/compute=true", > "openshift_client_binary": "oc" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:48,297 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,297 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,301 p=5860 u=root | PLAY [Disable web console if required] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,303 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,308 p=5860 u=root | TASK [set_fact] 
***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,308 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/basic_facts.yml:129 >2018-06-12 17:06:48,323 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,324 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,324 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,328 p=5860 u=root | PLAY [Setup yum repositories for all hosts] ************************************************************************************************************************************************************************************************* >2018-06-12 17:06:48,328 p=5860 u=root | skipping: no hosts matched >2018-06-12 17:06:48,334 p=5860 u=root | PLAY [Install packages necessary for installer] ********************************************************************************************************************************************************************************************* >2018-06-12 17:06:48,342 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,364 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,365 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,373 p=5860 u=root | 
skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,373 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,379 p=5860 u=root | TASK [Determine if chrony is installed] ***************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,379 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/base_packages.yml:9 >2018-06-12 17:06:48,402 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,403 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,411 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,418 p=5860 u=root | TASK [Install ntp package] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,418 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/base_packages.yml:16 >2018-06-12 17:06:48,440 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,441 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} 
>2018-06-12 17:06:48,450 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,455 p=5860 u=root | TASK [Start and enable ntpd/chronyd] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,456 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/base_packages.yml:24 >2018-06-12 17:06:48,479 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,480 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,487 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,492 p=5860 u=root | TASK [Ensure openshift-ansible installer package deps are installed] ************************************************************************************************************************************************************************ >2018-06-12 17:06:48,493 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/base_packages.yml:31 >2018-06-12 17:06:48,515 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=iproute) => { > "changed": false, > "item": "iproute", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,517 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=dbus-python) => { > "changed": false, > "item": "dbus-python", > 
"skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,521 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=PyYAML) => { > "changed": false, > "item": "PyYAML", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,524 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=iproute) => { > "changed": false, > "item": "iproute", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,529 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=python-ipaddress) => { > "changed": false, > "item": "python-ipaddress", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,530 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=dbus-python) => { > "changed": false, > "item": "dbus-python", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,535 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=libsemanage-python) => { > "changed": false, > "item": "libsemanage-python", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,536 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=PyYAML) => { > "changed": false, > "item": "PyYAML", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,539 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=python-ipaddress) => { > "changed": false, > "item": "python-ipaddress", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,542 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=yum-utils) => { > "changed": false, > "item": "yum-utils", > "skip_reason": "Conditional 
result was False", > "skipped": true >} >2018-06-12 17:06:48,545 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=python-docker) => { > "changed": false, > "item": "python-docker", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,548 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=libsemanage-python) => { > "changed": false, > "item": "libsemanage-python", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,551 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=yum-utils) => { > "changed": false, > "item": "yum-utils", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,553 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=iproute) => { > "changed": false, > "item": "iproute", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,554 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=dbus-python) => { > "changed": false, > "item": "dbus-python", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,556 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=PyYAML) => { > "changed": false, > "item": "PyYAML", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,557 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => (item=python-docker) => { > "changed": false, > "item": "python-docker", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,560 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=python-ipaddress) => { > "changed": false, > "item": "python-ipaddress", > "skip_reason": "Conditional result was False", > 
"skipped": true >} >2018-06-12 17:06:48,565 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=libsemanage-python) => { > "changed": false, > "item": "libsemanage-python", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,570 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=yum-utils) => { > "changed": false, > "item": "yum-utils", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,573 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => (item=python-docker) => { > "changed": false, > "item": "python-docker", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:48,575 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,576 p=5860 u=root | META: ran handlers >2018-06-12 17:06:48,582 p=5860 u=root | PLAY [Initialize cluster facts] ************************************************************************************************************************************************************************************************************* >2018-06-12 17:06:48,589 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:48,705 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:48,729 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:48,752 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:49,118 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:49,146 p=5860 u=root | ok: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:49,176 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:49,193 p=5860 u=root | META: ran handlers >2018-06-12 17:06:49,198 p=5860 u=root | TASK [get openshift_current_version] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:06:49,199 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/cluster_facts.yml:10 >2018-06-12 17:06:49,403 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py >2018-06-12 17:06:49,404 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py >2018-06-12 17:06:49,404 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/get_current_openshift_version.py >2018-06-12 17:06:49,771 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_current_version": "3.10.0" > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "deployment_type": "openshift-enterprise" > } > } >} >2018-06-12 17:06:49,790 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_current_version": "3.10.0" > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "deployment_type": "openshift-enterprise" > } > } >} >2018-06-12 17:06:49,808 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_current_version": "3.10.0" > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "deployment_type": "openshift-enterprise" > } > } >} >2018-06-12 17:06:49,814 p=5860 u=root | TASK [set_fact openshift_portal_net if 
present on masters] ********************************************************************************************************************************************************************************** >2018-06-12 17:06:49,814 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/cluster_facts.yml:19 >2018-06-12 17:06:49,954 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_portal_net": "172.24.0.0/14" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:49,981 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_portal_net": "172.24.0.0/14" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:50,005 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_portal_net": "172.24.0.0/14" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:50,012 p=5860 u=root | TASK [Gather Cluster facts] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:50,012 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/cluster_facts.yml:27 >2018-06-12 17:06:50,402 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:06:50,403 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:06:50,404 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:06:50,962 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": 
"BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > 
"configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > 
"provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": 
false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "generate_no_proxy_hosts": true, > "hostname": "", > "http_proxy": "", > "https_proxy": "", > "ip": "", > "no_proxy": "", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "common", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:06:50,974 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "34.220.195.16", > "ip-172-31-39-8.us-west-2.compute.internal", > "172.31.39.8" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "internal_hostnames": [ > "ip-172-31-39-8.us-west-2.compute.internal", > "172.31.39.8" > ], > "ip": "172.31.39.8", > "kube_svc_ip": "172.24.0.1", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public_ip": "34.220.195.16", > "raw_hostname": "ip-172-31-39-8.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "cloudprovider" > ] > }, > "node": { > 
"bootstrapped": false, > "nodename": "ip-172-31-39-8.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0dc5f13c05d01d15d", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "local-ipv4": "172.31.39.8", > "mac": "02:de:c2:e9:c9:b6", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:de:c2:e9:c9:b6": { > "device-number": "0", > "interface-id": "eni-c5ab6c2f", > "ipv4-associations": { > "34.220.195.16": "172.31.39.8" > }, > "local-hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "local-ipv4s": "172.31.39.8", > "mac": "02:de:c2:e9:c9:b6", > "owner-id": "925374498059", > "public-hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public-ipv4s": "34.220.195.16", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public-ipv4": "34.220.195.16", > "public-keys/": "0=libra", > "reservation-id": "r-045f013dad2cbc489", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.39.8" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > 
"public_ips": [ > "34.220.195.16" > ] > } > ], > "ip": "172.31.39.8", > "ipv6_enabled": false, > "public_hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public_ip": "34.220.195.16" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "generate_no_proxy_hosts": true, > "hostname": "", > "http_proxy": "", > "https_proxy": "", > "ip": "", > "no_proxy": "", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public_ip": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "common", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:06:50,980 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "34.210.25.239", > "ip-172-31-30-198.us-west-2.compute.internal", > "172.31.30.198" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "internal_hostnames": [ > "ip-172-31-30-198.us-west-2.compute.internal", > "172.31.30.198" > ], > "ip": "172.31.30.198", > "kube_svc_ip": "172.24.0.1", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public_ip": "34.210.25.239", > "raw_hostname": 
"ip-172-31-30-198.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "cloudprovider" > ] > }, > "node": { > "bootstrapped": false, > "nodename": "ip-172-31-30-198.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0813cb146f7cd8fea", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "local-ipv4": "172.31.30.198", > "mac": "02:83:84:34:36:a8", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:83:84:34:36:a8": { > "device-number": "0", > "interface-id": "eni-63a86f89", > "ipv4-associations": { > "34.210.25.239": "172.31.30.198" > }, > "local-hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "local-ipv4s": "172.31.30.198", > "mac": "02:83:84:34:36:a8", > "owner-id": "925374498059", > "public-hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public-ipv4s": "34.210.25.239", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public-ipv4": "34.210.25.239", > "public-keys/": "0=libra", > "reservation-id": "r-08c652df39e030832", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": 
"ip-172-31-30-198.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.30.198" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "34.210.25.239" > ] > } > ], > "ip": "172.31.30.198", > "ipv6_enabled": false, > "public_hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public_ip": "34.210.25.239" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "generate_no_proxy_hosts": true, > "hostname": "", > "http_proxy": "", > "https_proxy": "", > "ip": "", > "no_proxy": "", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public_ip": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "common", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:06:50,989 p=5860 u=root | TASK [Set fact of no_proxy_internal_hostnames] ********************************************************************************************************************************************************************************************** >2018-06-12 17:06:50,990 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/cluster_facts.yml:41 >2018-06-12 17:06:51,015 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:51,016 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": 
false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:51,091 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:51,156 p=5860 u=root | TASK [Initialize openshift.node.sdn_mtu] **************************************************************************************************************************************************************************************************** >2018-06-12 17:06:51,156 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/cluster_facts.yml:59 >2018-06-12 17:06:51,188 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:06:51,198 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:06:51,216 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:06:51,710 p=5860 u=root | changed: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "34.210.25.239", > "ip-172-31-30-198.us-west-2.compute.internal", > "172.31.30.198" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "internal_hostnames": [ > "ip-172-31-30-198.us-west-2.compute.internal", > "172.31.30.198" > ], > "ip": "172.31.30.198", > "kube_svc_ip": "172.24.0.1", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public_ip": "34.210.25.239", > "raw_hostname": "ip-172-31-30-198.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > 
"cloudprovider" > ] > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-30-198.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0813cb146f7cd8fea", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "local-ipv4": "172.31.30.198", > "mac": "02:83:84:34:36:a8", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:83:84:34:36:a8": { > "device-number": "0", > "interface-id": "eni-63a86f89", > "ipv4-associations": { > "34.210.25.239": "172.31.30.198" > }, > "local-hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "local-ipv4s": "172.31.30.198", > "mac": "02:83:84:34:36:a8", > "owner-id": "925374498059", > "public-hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public-ipv4s": "34.210.25.239", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public-ipv4": "34.210.25.239", > "public-keys/": "0=libra", > "reservation-id": "r-08c652df39e030832", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-30-198.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.30.198" > ], > 
"network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "34.210.25.239" > ] > } > ], > "ip": "172.31.30.198", > "ipv6_enabled": false, > "public_hostname": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "public_ip": "34.210.25.239" > }, > "zone": "us-west-2b" > } > } > }, > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "bootstrapped": true, > "sdn_mtu": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "node", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:06:51,737 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", 
> "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": 
"https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > 
"54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "bootstrapped": true, > "sdn_mtu": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "node", > "selevel": null, > "serole": null, > 
"setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:06:51,746 p=5860 u=root | changed: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "34.220.195.16", > "ip-172-31-39-8.us-west-2.compute.internal", > "172.31.39.8" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "internal_hostnames": [ > "ip-172-31-39-8.us-west-2.compute.internal", > "172.31.39.8" > ], > "ip": "172.31.39.8", > "kube_svc_ip": "172.24.0.1", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public_ip": "34.220.195.16", > "raw_hostname": "ip-172-31-39-8.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "cloudprovider" > ] > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-39-8.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0dc5f13c05d01d15d", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "local-ipv4": "172.31.39.8", > "mac": "02:de:c2:e9:c9:b6", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:de:c2:e9:c9:b6": { > "device-number": "0", > "interface-id": "eni-c5ab6c2f", > "ipv4-associations": { > "34.220.195.16": "172.31.39.8" > }, > "local-hostname": "ip-172-31-39-8.us-west-2.compute.internal", > 
"local-ipv4s": "172.31.39.8", > "mac": "02:de:c2:e9:c9:b6", > "owner-id": "925374498059", > "public-hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public-ipv4s": "34.220.195.16", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public-ipv4": "34.220.195.16", > "public-keys/": "0=libra", > "reservation-id": "r-045f013dad2cbc489", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-39-8.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.39.8" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "34.220.195.16" > ] > } > ], > "ip": "172.31.39.8", > "ipv6_enabled": false, > "public_hostname": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "public_ip": "34.220.195.16" > }, > "zone": "us-west-2b" > } > } > }, > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "bootstrapped": true, > "sdn_mtu": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "node", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:06:51,749 p=5860 u=root | 
META: ran handlers >2018-06-12 17:06:51,750 p=5860 u=root | META: ran handlers >2018-06-12 17:06:51,754 p=5860 u=root | PLAY [Initialize etcd host variables] ******************************************************************************************************************************************************************************************************* >2018-06-12 17:06:51,761 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:51,788 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:52,264 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:52,282 p=5860 u=root | META: ran handlers >2018-06-12 17:06:52,288 p=5860 u=root | TASK [set_fact] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:52,288 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/cluster_facts.yml:75 >2018-06-12 17:06:52,336 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_etcd_hosts": [ > "ip-172-31-50-118.us-west-2.compute.internal" > ], > "openshift_master_etcd_port": "2379", > "openshift_no_proxy_etcd_host_ips": "172.31.50.118" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:52,343 p=5860 u=root | TASK [set_fact] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:52,343 p=5860 u=root | task path: 
/root/openshift-ansible/playbooks/init/cluster_facts.yml:86 >2018-06-12 17:06:52,374 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_etcd_urls": [ > "https://ip-172-31-50-118.us-west-2.compute.internal:2379" > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:52,375 p=5860 u=root | META: ran handlers >2018-06-12 17:06:52,375 p=5860 u=root | META: ran handlers >2018-06-12 17:06:52,383 p=5860 u=root | PLAY [Determine openshift_version to configure on first master] ***************************************************************************************************************************************************************************** >2018-06-12 17:06:52,390 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:52,417 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:52,791 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:52,809 p=5860 u=root | META: ran handlers >2018-06-12 17:06:52,815 p=5860 u=root | TASK [include_role] ************************************************************************************************************************************************************************************************************************* >2018-06-12 17:06:52,815 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/version.yml:5 >2018-06-12 17:06:52,863 p=5860 u=root | TASK [openshift_version : Use openshift_current_version fact as version to configure if already installed] ********************************************************************************************************************************** >2018-06-12 17:06:52,863 p=5860 u=root | task 
path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:6 >2018-06-12 17:06:52,897 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_version": "3.10.0" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:52,904 p=5860 u=root | TASK [openshift_version : Set openshift_version to openshift_release if undefined] ********************************************************************************************************************************************************** >2018-06-12 17:06:52,904 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:14 >2018-06-12 17:06:52,917 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:52,923 p=5860 u=root | TASK [openshift_version : debug] ************************************************************************************************************************************************************************************************************ >2018-06-12 17:06:52,924 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:21 >2018-06-12 17:06:52,953 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "msg": "openshift_pkg_version was not defined. 
Falling back to -3.10.0" >} >2018-06-12 17:06:52,959 p=5860 u=root | TASK [openshift_version : set_fact] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:06:52,960 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:23 >2018-06-12 17:06:52,989 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_pkg_version": "-3.10.0*" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:52,996 p=5860 u=root | TASK [openshift_version : debug] ************************************************************************************************************************************************************************************************************ >2018-06-12 17:06:52,996 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:30 >2018-06-12 17:06:53,026 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "msg": "openshift_image_tag was not defined. 
Falling back to v3.10.0" >} >2018-06-12 17:06:53,032 p=5860 u=root | TASK [openshift_version : set_fact] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:06:53,033 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:32 >2018-06-12 17:06:53,062 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_image_tag": "v3.10.0" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:53,069 p=5860 u=root | TASK [openshift_version : assert openshift_release in openshift_image_tag] ****************************************************************************************************************************************************************** >2018-06-12 17:06:53,069 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:36 >2018-06-12 17:06:53,099 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "msg": "All assertions passed" >} >2018-06-12 17:06:53,107 p=5860 u=root | TASK [openshift_version : assert openshift_release in openshift_pkg_version] **************************************************************************************************************************************************************** >2018-06-12 17:06:53,107 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:43 >2018-06-12 17:06:53,137 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "msg": "All assertions passed" >} >2018-06-12 17:06:53,144 p=5860 u=root | TASK [openshift_version : debug] 
************************************************************************************************************************************************************************************************************ >2018-06-12 17:06:53,144 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:51 >2018-06-12 17:06:53,171 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "openshift_release": "3.10" >} >2018-06-12 17:06:53,178 p=5860 u=root | TASK [openshift_version : debug] ************************************************************************************************************************************************************************************************************ >2018-06-12 17:06:53,178 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:53 >2018-06-12 17:06:53,205 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "openshift_image_tag": "v3.10.0" >} >2018-06-12 17:06:53,212 p=5860 u=root | TASK [openshift_version : debug] ************************************************************************************************************************************************************************************************************ >2018-06-12 17:06:53,212 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:55 >2018-06-12 17:06:53,240 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "openshift_pkg_version": "-3.10.0*" >} >2018-06-12 17:06:53,247 p=5860 u=root | TASK [openshift_version : debug] ************************************************************************************************************************************************************************************************************ >2018-06-12 17:06:53,247 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_version/tasks/first_master.yml:57 >2018-06-12 17:06:53,274 p=5860 
u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "openshift_version": "3.10.0" >} >2018-06-12 17:06:53,281 p=5860 u=root | TASK [set openshift_version booleans (first master)] **************************************************************************************************************************************************************************************** >2018-06-12 17:06:53,281 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/version.yml:11 >2018-06-12 17:06:53,309 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_examples_content_version": "v3.10", > "openshift_version_gte_3_10": true, > "openshift_version_gte_3_11": false > }, > "changed": false, > "failed": false, > "msg": "Version facts set" >} >2018-06-12 17:06:53,309 p=5860 u=root | META: ran handlers >2018-06-12 17:06:53,310 p=5860 u=root | META: ran handlers >2018-06-12 17:06:53,319 p=5860 u=root | PLAY [Set openshift_version for etcd, node, and master hosts] ******************************************************************************************************************************************************************************* >2018-06-12 17:06:53,325 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:53,351 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:53,362 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:53,740 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:53,772 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:53,788 p=5860 u=root | META: ran handlers 
>2018-06-12 17:06:53,793 p=5860 u=root | TASK [set_fact] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:53,793 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/version.yml:26 >2018-06-12 17:06:53,845 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_image_tag": "v3.10.0", > "openshift_pkg_version": "-3.10.0*", > "openshift_version": "3.10.0" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:53,855 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_image_tag": "v3.10.0", > "openshift_pkg_version": "-3.10.0*", > "openshift_version": "3.10.0" > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:53,861 p=5860 u=root | TASK [set openshift_version booleans (masters and nodes)] *********************************************************************************************************************************************************************************** >2018-06-12 17:06:53,861 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/version.yml:33 >2018-06-12 17:06:53,888 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_examples_content_version": "v3.10", > "openshift_version_gte_3_10": true, > "openshift_version_gte_3_11": false > }, > "changed": false, > "failed": false, > "msg": "Version facts set" >} >2018-06-12 17:06:53,898 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_examples_content_version": "v3.10", > "openshift_version_gte_3_10": true, > "openshift_version_gte_3_11": false > }, > "changed": false, > "failed": false, > "msg": "Version facts set" >} >2018-06-12 17:06:53,899 
p=5860 u=root | META: ran handlers >2018-06-12 17:06:53,899 p=5860 u=root | META: ran handlers >2018-06-12 17:06:53,903 p=5860 u=root | PLAY [Verify Requirements] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:53,910 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:53,940 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:54,315 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:54,333 p=5860 u=root | META: ran handlers >2018-06-12 17:06:54,341 p=5860 u=root | TASK [Run variable sanity checks] *********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:54,341 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/sanity_checks.yml:14 >2018-06-12 17:06:55,177 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "msg": "Sanity Checks passed" >} >2018-06-12 17:06:55,185 p=5860 u=root | TASK [Validate openshift_node_groups and openshift_node_group_name] ************************************************************************************************************************************************************************* >2018-06-12 17:06:55,185 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/sanity_checks.yml:18 >2018-06-12 17:06:55,239 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > 
"changed": false, > "failed": false, > "msg": "Node group checks passed" >} >2018-06-12 17:06:55,239 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,240 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,243 p=5860 u=root | PLAY [Initialization Checkpoint End] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:06:55,245 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,251 p=5860 u=root | TASK [Set install initialization 'Complete'] ************************************************************************************************************************************************************************************************ >2018-06-12 17:06:55,251 p=5860 u=root | task path: /root/openshift-ansible/playbooks/init/main.yml:44 >2018-06-12 17:06:55,284 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_initialize": { > "end": "20180612170655Z", > "status": "Complete" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:55,285 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,286 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,289 p=5860 u=root | PLAY [Health Check Checkpoint Start] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:06:55,291 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,297 p=5860 u=root | TASK [Set Health Check 'In Progress'] ******************************************************************************************************************************************************************************************************* >2018-06-12 17:06:55,297 
p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-checks/private/install.yml:6 >2018-06-12 17:06:55,330 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_health": { > "playbook": "playbooks/openshift-checks/pre-install.yml", > "start": "20180612170655Z", > "status": "In Progress", > "title": "Health Check" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:55,332 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,332 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,336 p=5860 u=root | PLAY [OpenShift Health Checks] ************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:55,343 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:55,370 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:55,382 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:55,400 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:55,818 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:55,843 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:55,868 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:55,886 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,886 p=5860 u=root | META: ran handlers >2018-06-12 17:06:55,899 p=5860 
u=root | TASK [Run health checks (install) - EL] ***************************************************************************************************************************************************************************************************** >2018-06-12 17:06:55,899 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-checks/private/install.yml:24 >2018-06-12 17:06:55,971 p=5860 u=root | CHECK [docker_storage : ec2-34-220-195-16.us-west-2.compute.amazonaws.com] ****************************************************************************************************************************************************************** >2018-06-12 17:06:55,971 p=5860 u=root | CHECK [disk_availability : ec2-34-220-195-16.us-west-2.compute.amazonaws.com] *************************************************************************************************************************************************************** >2018-06-12 17:06:55,972 p=5860 u=root | CHECK [package_availability : ec2-34-220-195-16.us-west-2.compute.amazonaws.com] ************************************************************************************************************************************************************ >2018-06-12 17:06:55,972 p=5860 u=root | CHECK [package_version : ec2-34-220-195-16.us-west-2.compute.amazonaws.com] ***************************************************************************************************************************************************************** >2018-06-12 17:06:55,972 p=5860 u=root | CHECK [docker_image_availability : ec2-34-220-195-16.us-west-2.compute.amazonaws.com] ******************************************************************************************************************************************************* >2018-06-12 17:06:55,972 p=5860 u=root | CHECK [memory_availability : ec2-34-220-195-16.us-west-2.compute.amazonaws.com] 
************************************************************************************************************************************************************* >2018-06-12 17:06:55,972 p=5860 u=root | CHECK [docker_storage : ec2-54-186-168-249.us-west-2.compute.amazonaws.com] ***************************************************************************************************************************************************************** >2018-06-12 17:06:55,972 p=5860 u=root | CHECK [disk_availability : ec2-54-186-168-249.us-west-2.compute.amazonaws.com] ************************************************************************************************************************************************************** >2018-06-12 17:06:55,972 p=5860 u=root | CHECK [package_availability : ec2-54-186-168-249.us-west-2.compute.amazonaws.com] *********************************************************************************************************************************************************** >2018-06-12 17:06:55,973 p=5860 u=root | CHECK [package_version : ec2-54-186-168-249.us-west-2.compute.amazonaws.com] **************************************************************************************************************************************************************** >2018-06-12 17:06:55,973 p=5860 u=root | CHECK [docker_image_availability : ec2-54-186-168-249.us-west-2.compute.amazonaws.com] ****************************************************************************************************************************************************** >2018-06-12 17:06:55,973 p=5860 u=root | CHECK [memory_availability : ec2-54-186-168-249.us-west-2.compute.amazonaws.com] ************************************************************************************************************************************************************ >2018-06-12 17:06:55,975 p=5860 u=root | ok: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checks": { > "disk_availability": { > 
"skipped": true, > "skipped_reason": "Disabled by user request" > }, > "docker_image_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "docker_storage": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "memory_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "package_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "package_version": { > "skipped": true, > "skipped_reason": "Disabled by user request" > } > }, > "failed": false, > "playbook_context": "install" >} >2018-06-12 17:06:55,977 p=5860 u=root | CHECK [docker_storage : ec2-34-210-25-239.us-west-2.compute.amazonaws.com] ****************************************************************************************************************************************************************** >2018-06-12 17:06:55,978 p=5860 u=root | CHECK [disk_availability : ec2-34-210-25-239.us-west-2.compute.amazonaws.com] *************************************************************************************************************************************************************** >2018-06-12 17:06:55,978 p=5860 u=root | CHECK [package_availability : ec2-34-210-25-239.us-west-2.compute.amazonaws.com] ************************************************************************************************************************************************************ >2018-06-12 17:06:55,978 p=5860 u=root | CHECK [package_version : ec2-34-210-25-239.us-west-2.compute.amazonaws.com] ***************************************************************************************************************************************************************** >2018-06-12 17:06:55,978 p=5860 u=root | CHECK [docker_image_availability : ec2-34-210-25-239.us-west-2.compute.amazonaws.com] 
******************************************************************************************************************************************************* >2018-06-12 17:06:55,978 p=5860 u=root | CHECK [memory_availability : ec2-34-210-25-239.us-west-2.compute.amazonaws.com] ************************************************************************************************************************************************************* >2018-06-12 17:06:55,980 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checks": { > "disk_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "docker_image_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "docker_storage": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "memory_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "package_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "package_version": { > "skipped": true, > "skipped_reason": "Disabled by user request" > } > }, > "failed": false, > "playbook_context": "install" >} >2018-06-12 17:06:55,982 p=5860 u=root | ok: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checks": { > "disk_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "docker_image_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "docker_storage": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "memory_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "package_availability": { > "skipped": true, > "skipped_reason": "Disabled by user request" > }, > "package_version": { > "skipped": true, > "skipped_reason": "Disabled by user request" > } > }, > "failed": false, > "playbook_context": "install" >} >2018-06-12 
17:06:55,990 p=5860 u=root | TASK [Run health checks (install) - Fedora] ************************************************************************************************************************************************************************************************* >2018-06-12 17:06:55,990 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-checks/private/install.yml:36 >2018-06-12 17:06:56,017 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:56,020 p=5860 u=root | skipping: [ec2-34-210-25-239.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:56,030 p=5860 u=root | skipping: [ec2-34-220-195-16.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:06:56,031 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,035 p=5860 u=root | PLAY [Health Check Checkpoint End] ********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,037 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,043 p=5860 u=root | TASK [Set Health Check 'Complete'] ********************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,043 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-checks/private/install.yml:47 >2018-06-12 17:06:56,077 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_health": { > "end": 
"20180612170656Z", > "status": "Complete" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:56,078 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,078 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,082 p=5860 u=root | PLAY [Node Preparation Checkpoint Start] **************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,084 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,091 p=5860 u=root | TASK [Set Node preparation 'In Progress'] *************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,091 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-node/private/bootstrap.yml:6 >2018-06-12 17:06:56,125 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_node": { > "playbook": "(no entry point playbook)", > "start": "20180612170656Z", > "status": "In Progress", > "title": "Node Preparation" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:56,126 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,126 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,130 p=5860 u=root | PLAY [Only target nodes that have not yet been bootstrapped] ******************************************************************************************************************************************************************************** >2018-06-12 17:06:56,134 p=5860 u=root | TASK [Gathering Facts] 
********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,151 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:56,421 p=5860 u=root | ok: [localhost] >2018-06-12 17:06:56,439 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,444 p=5860 u=root | TASK [add_host] ***************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,444 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-node/private/bootstrap.yml:19 >2018-06-12 17:06:56,478 p=5860 u=root | creating host via 'add_host': hostname=ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:56,487 p=5860 u=root | ok: [localhost] => (item=ec2-54-186-168-249.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_exclude_bootstrapped_nodes" > ], > "host_name": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:56,501 p=5860 u=root | creating host via 'add_host': hostname=ec2-34-210-25-239.us-west-2.compute.amazonaws.com >2018-06-12 17:06:56,507 p=5860 u=root | ok: [localhost] => (item=ec2-34-210-25-239.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_exclude_bootstrapped_nodes" > ], > "host_name": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-210-25-239.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:56,517 p=5860 u=root | creating host via 'add_host': 
hostname=ec2-34-220-195-16.us-west-2.compute.amazonaws.com >2018-06-12 17:06:56,520 p=5860 u=root | ok: [localhost] => (item=ec2-34-220-195-16.us-west-2.compute.amazonaws.com) => { > "add_host": { > "groups": [ > "oo_exclude_bootstrapped_nodes" > ], > "host_name": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com", > "host_vars": {} > }, > "changed": false, > "failed": false, > "item": "ec2-34-220-195-16.us-west-2.compute.amazonaws.com" >} >2018-06-12 17:06:56,522 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,522 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,527 p=5860 u=root | PLAY [Disable excluders] ******************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,527 p=5860 u=root | skipping: no hosts matched >2018-06-12 17:06:56,530 p=5860 u=root | PLAY [Configure nodes] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,530 p=5860 u=root | skipping: no hosts matched >2018-06-12 17:06:56,532 p=5860 u=root | PLAY [node bootstrap config] **************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,532 p=5860 u=root | skipping: no hosts matched >2018-06-12 17:06:56,535 p=5860 u=root | PLAY [Re-enable excluder if it was previously enabled] ************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,535 p=5860 u=root | skipping: no hosts matched >2018-06-12 17:06:56,537 p=5860 u=root | PLAY 
[Node Preparation Checkpoint End] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,540 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,546 p=5860 u=root | TASK [Set Node preparation 'Complete'] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,546 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-node/private/bootstrap.yml:46 >2018-06-12 17:06:56,581 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_node": { > "end": "20180612170656Z", > "status": "Complete" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:56,582 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,582 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,586 p=5860 u=root | PLAY [etcd Install Checkpoint Start] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,588 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,595 p=5860 u=root | TASK [Set etcd install 'In Progress'] ******************************************************************************************************************************************************************************************************* >2018-06-12 17:06:56,595 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-etcd/private/config.yml:6 >2018-06-12 17:06:56,629 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > 
"aggregate": true, > "data": { > "installer_phase_etcd": { > "playbook": "playbooks/openshift-etcd/config.yml", > "start": "20180612170656Z", > "status": "In Progress", > "title": "etcd Install" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:06:56,630 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,630 p=5860 u=root | META: ran handlers >2018-06-12 17:06:56,634 p=5860 u=root | PLAY [Generate new etcd CA] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,642 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:56,668 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:06:57,052 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:06:57,070 p=5860 u=root | META: ran handlers >2018-06-12 17:06:57,077 p=5860 u=root | TASK [etcd : include_tasks] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:57,077 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/ca.yml:2 >2018-06-12 17:06:57,107 p=5860 u=root | included: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:06:57,123 p=5860 u=root | TASK [etcd : Install openssl] 
*************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:57,123 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:2 >2018-06-12 17:06:57,384 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:06:57,748 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "openssl" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "1:openssl-1.0.2k-12.el7.x86_64 providing openssl is already installed" > ] >} >2018-06-12 17:06:57,769 p=5860 u=root | TASK [etcd : file] ************************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:57,769 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:12 >2018-06-12 17:06:57,892 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:06:58,090 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca/certs) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca/certs" > }, > "before": { > 
"path": "/etc/etcd/ca/certs" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/etc/etcd/ca/certs", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca/certs", > "mode": "0700", > "owner": "root", > "path": "/etc/etcd/ca/certs", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 48, > "state": "directory", > "uid": 0 >} >2018-06-12 17:06:58,170 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:06:58,375 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca/crl) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca/crl" > }, > "before": { > "path": "/etc/etcd/ca/crl" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/etc/etcd/ca/crl", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca/crl", > "mode": "0700", > "owner": "root", > "path": "/etc/etcd/ca/crl", > "secontext": 
"unconfined_u:object_r:etc_t:s0", > "size": 6, > "state": "directory", > "uid": 0 >} >2018-06-12 17:06:58,390 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:06:58,585 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca/fragments) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca/fragments" > }, > "before": { > "path": "/etc/etcd/ca/fragments" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/etc/etcd/ca/fragments", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca/fragments", > "mode": "0700", > "owner": "root", > "path": "/etc/etcd/ca/fragments", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 51, > "state": "directory", > "uid": 0 >} >2018-06-12 17:06:58,602 p=5860 u=root | TASK [etcd : command] *********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:58,602 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:25 >2018-06-12 17:06:58,923 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:06:59,117 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> 
ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": "cp /etc/pki/tls/openssl.cnf ./", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "cp /etc/pki/tls/openssl.cnf ./", > "_uses_shell": false, > "chdir": "/etc/etcd/ca/fragments", > "creates": "/etc/etcd/ca/fragments/openssl.cnf", > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "stdout": "skipped, since /etc/etcd/ca/fragments/openssl.cnf exists", > "stdout_lines": [ > "skipped, since /etc/etcd/ca/fragments/openssl.cnf exists" > ] >} >2018-06-12 17:06:59,133 p=5860 u=root | TASK [etcd : template] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:59,133 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:32 >2018-06-12 17:06:59,218 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:06:59,377 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:06:59,540 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "89d806189e2eeb08170b2b3ad59048fba712608d", > "dest": "/etc/etcd/ca/fragments/openssl_append.cnf", > "diff": { > "after": { > "path": "/etc/etcd/ca/fragments/openssl_append.cnf" > }, > "before": { > "path": "/etc/etcd/ca/fragments/openssl_append.cnf" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": "True", > "content": null, > "delimiter": null, > "dest": "/etc/etcd/ca/fragments/openssl_append.cnf", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > 
"group": null, > "mode": null, > "original_basename": "openssl_append.j2", > "owner": null, > "path": "/etc/etcd/ca/fragments/openssl_append.cnf", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "openssl_append.j2", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0644", > "owner": "root", > "path": "/etc/etcd/ca/fragments/openssl_append.cnf", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1624, > "state": "file", > "uid": 0 >} >2018-06-12 17:06:59,556 p=5860 u=root | TASK [etcd : assemble] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:06:59,556 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:39 >2018-06-12 17:06:59,882 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/assemble.py >2018-06-12 17:07:00,081 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "0cfa42aed961a813b9d5a7bd78d2b3a030c2a3b7", > "dest": "/etc/etcd/ca/openssl.cnf", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/etcd/ca/openssl.cnf", > "directory_mode": null, > "follow": false, > "force": null, > "group": null, > "ignore_hidden": false, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": false, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/etc/etcd/ca/fragments", > "unsafe_writes": null, > "validate": null > } > }, > "md5sum": "2ff7c09c59d7481b970ad5010da1143f", > "mode": "0644", > "msg": "OK", > 
"owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 12547, > "src": "/etc/etcd/ca/fragments", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:00,097 p=5860 u=root | TASK [etcd : Check etcd_ca_db exist] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:07:00,097 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:45 >2018-06-12 17:07:00,127 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:00,333 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/etcd/ca/index.txt" > } > }, > "stat": { > "atime": 1528820540.3622081, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "2d56bf193547df6b43dea7c339229fe921bde82c", > "ctime": 1528820540.3632083, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 29360495, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "9c5d39f67512104abe9e0f7088b5900b", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820540.3622081, > "nlink": 1, > "path": "/etc/etcd/ca/index.txt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 228, > "uid": 0, > "version": "18446744071772216807", > "wgrp": false, > "woth": false, > "writeable": true, > 
"wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:00,349 p=5860 u=root | TASK [etcd : Touch etcd_ca_db file] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:00,349 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:52 >2018-06-12 17:07:00,367 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:00,381 p=5860 u=root | TASK [etcd : copy] ************************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:00,381 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:60 >2018-06-12 17:07:00,453 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:00,611 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "dest": "/etc/etcd/ca/serial", > "failed": false, > "src": "/tmp/tmpgHwtJ9" >} >2018-06-12 17:07:00,626 p=5860 u=root | TASK [etcd : Create etcd CA certificate] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:00,627 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:67 >2018-06-12 17:07:00,660 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 
17:07:00,850 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": "openssl req -config /etc/etcd/ca/openssl.cnf -newkey rsa:4096 -keyout /etc/etcd/ca/ca.key -new -out /etc/etcd/ca/ca.crt -x509 -extensions etcd_v3_ca_self -batch -nodes -days 1825 -subj /CN=etcd-signer@1528823216", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "openssl req -config /etc/etcd/ca/openssl.cnf -newkey rsa:4096 -keyout /etc/etcd/ca/ca.key -new -out /etc/etcd/ca/ca.crt -x509 -extensions etcd_v3_ca_self -batch -nodes -days 1825 -subj /CN=etcd-signer@1528823216", > "_uses_shell": false, > "chdir": "/etc/etcd/ca", > "creates": "/etc/etcd/ca/ca.crt", > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "stdout": "skipped, since /etc/etcd/ca/ca.crt exists", > "stdout_lines": [ > "skipped, since /etc/etcd/ca/ca.crt exists" > ] >} >2018-06-12 17:07:00,851 p=5860 u=root | META: ran handlers >2018-06-12 17:07:00,851 p=5860 u=root | META: ran handlers >2018-06-12 17:07:00,855 p=5860 u=root | PLAY [Create etcd server certificates for etcd hosts] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:00,862 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:00,889 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:07:01,254 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:07:01,272 p=5860 u=root | META: ran handlers >2018-06-12 17:07:01,273 p=5860 u=root | META: ran 
handlers >2018-06-12 17:07:01,279 p=5860 u=root | TASK [etcd : include_tasks] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:01,280 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/server_certificates.yml:2 >2018-06-12 17:07:01,303 p=5860 u=root | included: /root/openshift-ansible/roles/etcd/tasks/ca.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:07:01,311 p=5860 u=root | TASK [etcd : include_tasks] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:01,312 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/ca.yml:2 >2018-06-12 17:07:01,337 p=5860 u=root | included: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:07:01,354 p=5860 u=root | TASK [etcd : Install openssl] *************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:01,354 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:2 >2018-06-12 17:07:01,400 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:01,772 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > 
"enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "openssl" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "1:openssl-1.0.2k-12.el7.x86_64 providing openssl is already installed" > ] >} >2018-06-12 17:07:01,795 p=5860 u=root | TASK [etcd : file] ************************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:01,795 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:12 >2018-06-12 17:07:01,830 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:02,031 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca/certs) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca/certs" > }, > "before": { > "path": "/etc/etcd/ca/certs" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/etc/etcd/ca/certs", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca/certs", > "mode": "0700", > "owner": "root", > "path": "/etc/etcd/ca/certs", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 48, > "state": 
"directory", > "uid": 0 >} >2018-06-12 17:07:02,046 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:02,255 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca/crl) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca/crl" > }, > "before": { > "path": "/etc/etcd/ca/crl" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/etc/etcd/ca/crl", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca/crl", > "mode": "0700", > "owner": "root", > "path": "/etc/etcd/ca/crl", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 6, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:02,341 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:02,543 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca/fragments) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca/fragments" > }, > "before": { > "path": "/etc/etcd/ca/fragments" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > 
"original_basename": null, > "owner": "root", > "path": "/etc/etcd/ca/fragments", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca/fragments", > "mode": "0700", > "owner": "root", > "path": "/etc/etcd/ca/fragments", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 51, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:02,561 p=5860 u=root | TASK [etcd : command] *********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:02,561 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:25 >2018-06-12 17:07:02,592 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:02,784 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": "cp /etc/pki/tls/openssl.cnf ./", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "cp /etc/pki/tls/openssl.cnf ./", > "_uses_shell": false, > "chdir": "/etc/etcd/ca/fragments", > "creates": "/etc/etcd/ca/fragments/openssl.cnf", > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "stdout": "skipped, since /etc/etcd/ca/fragments/openssl.cnf exists", > "stdout_lines": [ > "skipped, since /etc/etcd/ca/fragments/openssl.cnf exists" > ] >} >2018-06-12 17:07:02,800 p=5860 u=root | TASK [etcd : template] 
********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:02,800 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:32 >2018-06-12 17:07:02,895 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:03,063 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:03,221 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "89d806189e2eeb08170b2b3ad59048fba712608d", > "dest": "/etc/etcd/ca/fragments/openssl_append.cnf", > "diff": { > "after": { > "path": "/etc/etcd/ca/fragments/openssl_append.cnf" > }, > "before": { > "path": "/etc/etcd/ca/fragments/openssl_append.cnf" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": "True", > "content": null, > "delimiter": null, > "dest": "/etc/etcd/ca/fragments/openssl_append.cnf", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": "openssl_append.j2", > "owner": null, > "path": "/etc/etcd/ca/fragments/openssl_append.cnf", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "openssl_append.j2", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0644", > "owner": "root", > "path": "/etc/etcd/ca/fragments/openssl_append.cnf", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1624, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:03,236 p=5860 u=root | TASK [etcd : assemble] 
********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:03,236 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:39 >2018-06-12 17:07:03,267 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/assemble.py >2018-06-12 17:07:03,461 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "0cfa42aed961a813b9d5a7bd78d2b3a030c2a3b7", > "dest": "/etc/etcd/ca/openssl.cnf", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/etcd/ca/openssl.cnf", > "directory_mode": null, > "follow": false, > "force": null, > "group": null, > "ignore_hidden": false, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": false, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/etc/etcd/ca/fragments", > "unsafe_writes": null, > "validate": null > } > }, > "md5sum": "2ff7c09c59d7481b970ad5010da1143f", > "mode": "0644", > "msg": "OK", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 12547, > "src": "/etc/etcd/ca/fragments", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:03,477 p=5860 u=root | TASK [etcd : Check etcd_ca_db exist] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:07:03,477 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:45 >2018-06-12 17:07:03,507 p=5860 u=root | Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:03,713 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/etcd/ca/index.txt" > } > }, > "stat": { > "atime": 1528823220.3103726, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "2d56bf193547df6b43dea7c339229fe921bde82c", > "ctime": 1528820540.3632083, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 29360495, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "9c5d39f67512104abe9e0f7088b5900b", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820540.3622081, > "nlink": 1, > "path": "/etc/etcd/ca/index.txt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 228, > "uid": 0, > "version": "18446744071772216807", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:03,729 p=5860 u=root | TASK [etcd : Touch etcd_ca_db file] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:03,729 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:52 >2018-06-12 17:07:03,746 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > 
"skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:03,760 p=5860 u=root | TASK [etcd : copy] ************************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:03,760 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:60 >2018-06-12 17:07:03,832 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:03,993 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "dest": "/etc/etcd/ca/serial", > "failed": false, > "src": "/tmp/tmpU98ECG" >} >2018-06-12 17:07:04,008 p=5860 u=root | TASK [etcd : Create etcd CA certificate] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:04,008 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/deploy_ca.yml:67 >2018-06-12 17:07:04,042 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:04,234 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": "openssl req -config /etc/etcd/ca/openssl.cnf -newkey rsa:4096 -keyout /etc/etcd/ca/ca.key -new -out /etc/etcd/ca/ca.crt -x509 -extensions etcd_v3_ca_self -batch -nodes -days 1825 -subj /CN=etcd-signer@1528823221", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "openssl req -config /etc/etcd/ca/openssl.cnf -newkey rsa:4096 -keyout /etc/etcd/ca/ca.key -new -out /etc/etcd/ca/ca.crt -x509 -extensions 
etcd_v3_ca_self -batch -nodes -days 1825 -subj /CN=etcd-signer@1528823221", > "_uses_shell": false, > "chdir": "/etc/etcd/ca", > "creates": "/etc/etcd/ca/ca.crt", > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "stdout": "skipped, since /etc/etcd/ca/ca.crt exists", > "stdout_lines": [ > "skipped, since /etc/etcd/ca/ca.crt exists" > ] >} >2018-06-12 17:07:04,242 p=5860 u=root | TASK [etcd : include_tasks] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:04,242 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/server_certificates.yml:6 >2018-06-12 17:07:04,281 p=5860 u=root | included: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:07:04,290 p=5860 u=root | TASK [etcd : Install etcd] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:04,290 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:2 >2018-06-12 17:07:04,306 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:04,314 p=5860 u=root | TASK [etcd : Check status of etcd certificates] ********************************************************************************************************************************************************************************************* >2018-06-12 17:07:04,314 p=5860 u=root | task path: 
/root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:12 >2018-06-12 17:07:04,348 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:04,560 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/server.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/etcd/server.crt" > } > }, > "item": "/etc/etcd/server.crt", > "stat": { > "atime": 1528820529.0822177, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 16, > "charset": "us-ascii", > "checksum": "f024d771824a4d2825edf2eda6164e2c11ca53c2", > "ctime": 1528820533.1802142, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 25258821, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "d03048ea15d7c3fbffe6550af2a60c07", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820521.0, > "nlink": 1, > "path": "/etc/etcd/server.crt", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 6005, > "uid": 0, > "version": "18446744073458348096", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:04,574 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:04,785 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/peer.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": 
false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/etcd/peer.crt" > } > }, > "item": "/etc/etcd/peer.crt", > "stat": { > "atime": 1528820529.0822177, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 16, > "charset": "us-ascii", > "checksum": "ebb2f3b7b415e9705e641f51589138932c083e91", > "ctime": 1528820534.680213, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 25558439, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "76b8ea9e614e40a0cb0f10c0a0b38463", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820523.0, > "nlink": 1, > "path": "/etc/etcd/peer.crt", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 6048, > "uid": 0, > "version": "18446744073694072664", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:04,800 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:05,004 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/etcd/ca.crt" > } > }, > "item": "/etc/etcd/ca.crt", > "stat": { > "atime": 1528820529.0822177, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "25b7c8fe8a0043c248099d7830c5cdd2ceee3f3a", > "ctime": 1528820532.7012146, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 
0, > "gr_name": "root", > "inode": 25558440, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "d3bb6e4410f17f4a7fa9b13064f2b1ab", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820510.0, > "nlink": 1, > "path": "/etc/etcd/ca.crt", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 1895, > "uid": 0, > "version": "18446744071914685531", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:05,016 p=5860 u=root | TASK [etcd : set_fact] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,016 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:22 >2018-06-12 17:07:05,047 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "etcd_server_certs_missing": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:05,062 p=5860 u=root | TASK [etcd : Ensure generated_certs directory present] ************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,062 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:29 >2018-06-12 17:07:05,079 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,094 p=5860 u=root | TASK [etcd : Create the server csr] 
********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:05,094 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:37 >2018-06-12 17:07:05,110 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,124 p=5860 u=root | TASK [etcd : Sign and create the server crt] ************************************************************************************************************************************************************************************************ >2018-06-12 17:07:05,124 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:56 >2018-06-12 17:07:05,140 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,155 p=5860 u=root | TASK [etcd : Create the peer csr] *********************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,155 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:71 >2018-06-12 17:07:05,169 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,183 p=5860 u=root | TASK [etcd : Sign and create the peer crt] 
************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,183 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:90 >2018-06-12 17:07:05,198 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,213 p=5860 u=root | TASK [etcd : file] ************************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,213 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:105 >2018-06-12 17:07:05,230 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,244 p=5860 u=root | TASK [etcd : Create a tarball of the etcd certs] ******************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,244 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:113 >2018-06-12 17:07:05,259 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,274 p=5860 u=root | TASK [etcd : Retrieve etcd cert tarball] 
**************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,274 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:125 >2018-06-12 17:07:05,289 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,297 p=5860 u=root | TASK [etcd : Ensure certificate directory exists] ******************************************************************************************************************************************************************************************* >2018-06-12 17:07:05,297 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:134 >2018-06-12 17:07:05,313 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd) => { > "changed": false, > "item": "/etc/etcd", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,321 p=5860 u=root | TASK [etcd : Unarchive cert tarball] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,321 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:142 >2018-06-12 17:07:05,335 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,349 p=5860 u=root | TASK [etcd : Create a tarball of the etcd ca certs] 
***************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,349 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:148 >2018-06-12 17:07:05,364 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,379 p=5860 u=root | TASK [etcd : Retrieve etcd ca cert tarball] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:05,379 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:158 >2018-06-12 17:07:05,394 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,401 p=5860 u=root | TASK [etcd : Ensure ca directory exists] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,401 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:167 >2018-06-12 17:07:05,419 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca) => { > "changed": false, > "item": "/etc/etcd/ca", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,431 p=5860 u=root | TASK [etcd : Delete temporary directory] 
**************************************************************************************************************************************************************************************************** >2018-06-12 17:07:05,431 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:175 >2018-06-12 17:07:05,445 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:05,453 p=5860 u=root | TASK [etcd : Validate permissions on certificate files] ************************************************************************************************************************************************************************************* >2018-06-12 17:07:05,453 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:180 >2018-06-12 17:07:05,488 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:05,697 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca.crt) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca.crt" > }, > "before": { > "path": "/etc/etcd/ca.crt" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 384, > "original_basename": null, > "owner": null, > "path": "/etc/etcd/ca.crt", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": null, > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca.crt", > "mode": "0600", > "owner": "root", > 
"path": "/etc/etcd/ca.crt", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 1895, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:05,710 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:05,911 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/server.crt) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/server.crt" > }, > "before": { > "path": "/etc/etcd/server.crt" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 384, > "original_basename": null, > "owner": null, > "path": "/etc/etcd/server.crt", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": null, > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/server.crt", > "mode": "0600", > "owner": "root", > "path": "/etc/etcd/server.crt", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 6005, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:05,927 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:06,128 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/server.key) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/server.key" > }, > "before": { > "path": "/etc/etcd/server.key" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 384, > 
"original_basename": null, > "owner": null, > "path": "/etc/etcd/server.key", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": null, > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/server.key", > "mode": "0600", > "owner": "root", > "path": "/etc/etcd/server.key", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 1704, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:06,138 p=5860 u=root | TASK [etcd : Validate permissions on peer certificate files] ******************************************************************************************************************************************************************************** >2018-06-12 17:07:06,138 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:190 >2018-06-12 17:07:06,173 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:06,380 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/ca.crt) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/ca.crt" > }, > "before": { > "path": "/etc/etcd/ca.crt" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 384, > "original_basename": null, > "owner": null, > "path": "/etc/etcd/ca.crt", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": null, > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/ca.crt", > "mode": "0600", > "owner": "root", > "path": "/etc/etcd/ca.crt", > "secontext": 
"unconfined_u:object_r:etc_t:s0", > "size": 1895, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:06,393 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:06,594 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/peer.crt) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/peer.crt" > }, > "before": { > "path": "/etc/etcd/peer.crt" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 384, > "original_basename": null, > "owner": null, > "path": "/etc/etcd/peer.crt", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": null, > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/peer.crt", > "mode": "0600", > "owner": "root", > "path": "/etc/etcd/peer.crt", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 6048, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:06,607 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:06,804 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/etcd/peer.key) => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd/peer.key" > }, > "before": { > "path": "/etc/etcd/peer.key" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 384, > "original_basename": null, > "owner": null, > "path": "/etc/etcd/peer.key", 
> "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": null, > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/etcd/peer.key", > "mode": "0600", > "owner": "root", > "path": "/etc/etcd/peer.key", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 1704, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:06,813 p=5860 u=root | TASK [etcd : Validate permissions on the config dir] **************************************************************************************************************************************************************************************** >2018-06-12 17:07:06,813 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_server_certificates_from_ca.yml:200 >2018-06-12 17:07:06,840 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:07,040 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd" > }, > "before": { > "path": "/etc/etcd" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 448, > "original_basename": null, > "owner": null, > "path": "/etc/etcd", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0700", > "owner": "root", > "path": "/etc/etcd", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 172, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:07,041 p=5860 u=root | META: ran handlers >2018-06-12 
17:07:07,045 p=5860 u=root | PLAY [Create etcd client certificates for master hosts] ************************************************************************************************************************************************************************************* >2018-06-12 17:07:07,053 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:07,078 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:07:07,456 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:07:07,474 p=5860 u=root | META: ran handlers >2018-06-12 17:07:07,480 p=5860 u=root | TASK [etcd : include_tasks] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:07,481 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/client_certificates.yml:2 >2018-06-12 17:07:07,516 p=5860 u=root | included: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:07:07,533 p=5860 u=root | TASK [etcd : Ensure CA certificate exists on etcd_ca_host] ********************************************************************************************************************************************************************************** >2018-06-12 17:07:07,533 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:2 >2018-06-12 17:07:07,565 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:07,774 
p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/etcd/ca/ca.crt" > } > }, > "stat": { > "atime": 1528820541.4162073, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "25b7c8fe8a0043c248099d7830c5cdd2ceee3f3a", > "ctime": 1528820540.8922079, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 29360491, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "d3bb6e4410f17f4a7fa9b13064f2b1ab", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820510.4432333, > "nlink": 3, > "path": "/etc/etcd/ca/ca.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1895, > "uid": 0, > "version": "18446744072104603026", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:07,782 p=5860 u=root | TASK [etcd : fail] ************************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:07,782 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:9 >2018-06-12 17:07:07,796 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} 
>2018-06-12 17:07:07,804 p=5860 u=root | TASK [etcd : Check status of external etcd certificatees] *********************************************************************************************************************************************************************************** >2018-06-12 17:07:07,804 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:17 >2018-06-12 17:07:07,838 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:08,052 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-client.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/master.etcd-client.crt" > } > }, > "item": "master.etcd-client.crt", > "stat": { > "atime": 1528820584.8802187, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 16, > "charset": "us-ascii", > "checksum": "fe924aab514ad9e09cd3412f19959bb9867560fc", > "ctime": 1528820546.905204, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383737, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "8e31d5d2ca82d8837e3dba8feafd8833", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820540.0, > "nlink": 1, > "path": "/etc/origin/master/master.etcd-client.crt", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 6005, > "uid": 0, > "version": "1274199518", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 
17:07:08,069 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:08,279 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-client.key) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/master.etcd-client.key" > } > }, > "item": "master.etcd-client.key", > "stat": { > "atime": 1528820584.8802187, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "c6548f788ec720042e0aa71ed069c54aa29c9dd0", > "ctime": 1528820547.3902042, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383735, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "98cddd97c0eeb2d35f6e42047a9815fd", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820539.0, > "nlink": 1, > "path": "/etc/origin/master/master.etcd-client.key", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 1704, > "uid": 0, > "version": "1344167219", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:08,295 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:08,497 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-ca.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, 
> "get_mime": true, > "path": "/etc/origin/master/master.etcd-ca.crt" > } > }, > "item": "master.etcd-ca.crt", > "stat": { > "atime": 1528820546.0122037, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "25b7c8fe8a0043c248099d7830c5cdd2ceee3f3a", > "ctime": 1528820547.8742044, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383738, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "d3bb6e4410f17f4a7fa9b13064f2b1ab", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820510.0, > "nlink": 1, > "path": "/etc/origin/master/master.etcd-ca.crt", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 1895, > "uid": 0, > "version": "1561665387", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:08,509 p=5860 u=root | TASK [etcd : set_fact] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,509 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:27 >2018-06-12 17:07:08,541 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "etcd_client_certs_missing": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:08,556 p=5860 u=root | TASK [etcd : Ensure generated_certs directory present] 
************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,556 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:34 >2018-06-12 17:07:08,572 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,587 p=5860 u=root | TASK [etcd : Create the client csr] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:08,587 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:42 >2018-06-12 17:07:08,602 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,617 p=5860 u=root | TASK [etcd : Sign and create the client crt] ************************************************************************************************************************************************************************************************ >2018-06-12 17:07:08,617 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:61 >2018-06-12 17:07:08,632 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,646 p=5860 u=root | TASK [etcd : file] 
************************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,646 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:76 >2018-06-12 17:07:08,662 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,677 p=5860 u=root | TASK [etcd : Create a tarball of the etcd certs] ******************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,677 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:84 >2018-06-12 17:07:08,692 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,707 p=5860 u=root | TASK [etcd : Retrieve the etcd cert tarballs] *********************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,707 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:96 >2018-06-12 17:07:08,722 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,729 p=5860 u=root | TASK [etcd : Ensure certificate directory exists] 
******************************************************************************************************************************************************************************************* >2018-06-12 17:07:08,730 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:105 >2018-06-12 17:07:08,744 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,751 p=5860 u=root | TASK [etcd : Unarchive etcd cert tarballs] ************************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,752 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:111 >2018-06-12 17:07:08,765 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,776 p=5860 u=root | TASK [etcd : Delete temporary directory] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,776 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:117 >2018-06-12 17:07:08,790 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,797 p=5860 u=root | TASK [etcd : file] 
************************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,797 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/certificates/fetch_client_certificates_from_ca.yml:122 >2018-06-12 17:07:08,818 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-client.crt) => { > "changed": false, > "item": "master.etcd-client.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,824 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-client.key) => { > "changed": false, > "item": "master.etcd-client.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,828 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-ca.crt) => { > "changed": false, > "item": "master.etcd-ca.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:08,829 p=5860 u=root | META: ran handlers >2018-06-12 17:07:08,829 p=5860 u=root | META: ran handlers >2018-06-12 17:07:08,833 p=5860 u=root | PLAY [Configure etcd] *********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,842 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:08,868 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:07:09,244 p=5860 
u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:07:09,262 p=5860 u=root | META: ran handlers >2018-06-12 17:07:09,269 p=5860 u=root | TASK [fail] ********************************************************************************************************************************************************************************************************************************* >2018-06-12 17:07:09,269 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-etcd/private/config.yml:24 >2018-06-12 17:07:09,282 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:09,289 p=5860 u=root | TASK [etcd : set etcd host and ip facts] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:09,289 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/set_facts.yml:2 >2018-06-12 17:07:09,323 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "etcd_hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "etcd_ip": "172.31.50.118" > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:09,331 p=5860 u=root | TASK [etcd : Check that etcd image is present] ********************************************************************************************************************************************************************************************** >2018-06-12 17:07:09,331 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:5 >2018-06-12 17:07:09,365 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:09,587 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => 
{ > "changed": true, > "cmd": [ > "docker", > "images", > "-q", > "registry.reg-aws.openshift.com:443/rhel7/etcd:3.2.15" > ], > "delta": "0:00:00.022735", > "end": "2018-06-12 17:07:09.571422", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "docker images -q \"registry.reg-aws.openshift.com:443/rhel7/etcd:3.2.15\"", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:09.548687", > "stderr": "", > "stderr_lines": [], > "stdout": "4f35b6516d22", > "stdout_lines": [ > "4f35b6516d22" > ] >} >2018-06-12 17:07:09,595 p=5860 u=root | TASK [etcd : Pre-pull etcd image] *********************************************************************************************************************************************************************************************************** >2018-06-12 17:07:09,595 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:9 >2018-06-12 17:07:09,612 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:09,619 p=5860 u=root | TASK [etcd : Add iptables allow rules] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:07:09,619 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:4 >2018-06-12 17:07:09,841 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py >2018-06-12 17:07:10,042 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2379/tcp', u'service': u'etcd'}) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > 
"action": "add", > "chain": "OS_FIREWALL_ALLOW", > "create_jump_rule": true, > "ip_version": "ipv4", > "jump_rule_chain": "INPUT", > "name": "etcd", > "port": "2379", > "protocol": "tcp" > } > }, > "item": { > "port": "2379/tcp", > "service": "etcd" > }, > "output": [] >} >2018-06-12 17:07:10,075 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py >2018-06-12 17:07:10,273 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2380/tcp', u'service': u'etcd peering'}) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "action": "add", > "chain": "OS_FIREWALL_ALLOW", > "create_jump_rule": true, > "ip_version": "ipv4", > "jump_rule_chain": "INPUT", > "name": "etcd peering", > "port": "2380", > "protocol": "tcp" > } > }, > "item": { > "port": "2380/tcp", > "service": "etcd peering" > }, > "output": [] >} >2018-06-12 17:07:10,282 p=5860 u=root | TASK [etcd : Remove iptables rules] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:10,282 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:13 >2018-06-12 17:07:10,302 p=5860 u=root | TASK [etcd : Add firewalld allow rules] ***************************************************************************************************************************************************************************************************** >2018-06-12 17:07:10,302 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:24 >2018-06-12 17:07:10,328 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2379/tcp', u'service': u'etcd'}) => { > "changed": false, > "item": { > "port": "2379/tcp", > "service": "etcd" > }, > "skip_reason": "Conditional result 
was False", > "skipped": true >} >2018-06-12 17:07:10,339 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2380/tcp', u'service': u'etcd peering'}) => { > "changed": false, > "item": { > "port": "2380/tcp", > "service": "etcd peering" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:10,346 p=5860 u=root | TASK [etcd : Remove firewalld allow rules] ************************************************************************************************************************************************************************************************** >2018-06-12 17:07:10,347 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:33 >2018-06-12 17:07:10,366 p=5860 u=root | TASK [etcd : Ensure etcd datadir exists] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:10,367 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:24 >2018-06-12 17:07:10,395 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:10,593 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/var/lib/etcd/" > }, > "before": { > "path": "/var/lib/etcd/" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 448, > "original_basename": null, > "owner": null, > "path": "/var/lib/etcd/", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > 
"state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0700", > "owner": "root", > "path": "/var/lib/etcd/", > "secontext": "unconfined_u:object_r:var_lib_t:s0", > "size": 6, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:10,601 p=5860 u=root | TASK [etcd : Validate permissions on the config dir] **************************************************************************************************************************************************************************************** >2018-06-12 17:07:10,601 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:30 >2018-06-12 17:07:10,630 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:10,829 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/etc/etcd" > }, > "before": { > "path": "/etc/etcd" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 448, > "original_basename": null, > "owner": null, > "path": "/etc/etcd", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0700", > "owner": "root", > "path": "/etc/etcd", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 172, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:10,837 p=5860 u=root | TASK [etcd : Validate permissions on the static pods dir] *********************************************************************************************************************************************************************************** 
>2018-06-12 17:07:10,837 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:36 >2018-06-12 17:07:10,867 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:11,066 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "diff": { > "after": { > "mode": "0700", > "path": "/etc/origin/node/pods/" > }, > "before": { > "mode": "0755", > "path": "/etc/origin/node/pods/" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/etc/origin/node/pods/", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0700", > "owner": "root", > "path": "/etc/origin/node/pods/", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 68, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:11,074 p=5860 u=root | TASK [etcd : Write etcd global config file] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:11,074 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:44 >2018-06-12 17:07:11,196 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:11,358 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:11,518 p=5860 u=root | ok: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "526a1a4b22e362d8ce88b068f1fc468c0570a6a9", > "dest": "/etc/etcd/etcd.conf", > "diff": { > "after": { > "path": "/etc/etcd/etcd.conf" > }, > "before": { > "path": "/etc/etcd/etcd.conf" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": "True", > "content": null, > "delimiter": null, > "dest": "/etc/etcd/etcd.conf", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": "etcd.conf.j2", > "owner": null, > "path": "/etc/etcd/etcd.conf", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "etcd.conf.j2", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0644", > "owner": "root", > "path": "/etc/etcd/etcd.conf", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1489, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:11,526 p=5860 u=root | TASK [etcd : Create temp directory for static pods] ***************************************************************************************************************************************************************************************** >2018-06-12 17:07:11,526 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:50 >2018-06-12 17:07:11,556 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:11,754 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": [ > "mktemp", > "-d", > "/tmp/openshift-ansible-XXXXXX" > ], > "delta": "0:00:00.002301", > "end": "2018-06-12 17:07:11.741104", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", > "_uses_shell": 
false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:11.738803", > "stderr": "", > "stderr_lines": [], > "stdout": "/tmp/openshift-ansible-HQYMYb", > "stdout_lines": [ > "/tmp/openshift-ansible-HQYMYb" > ] >} >2018-06-12 17:07:11,762 p=5860 u=root | TASK [etcd : Prepare etcd static pod] ******************************************************************************************************************************************************************************************************* >2018-06-12 17:07:11,762 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:55 >2018-06-12 17:07:11,836 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:11,994 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:12,353 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:12,518 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=etcd.yaml) => { > "changed": true, > "checksum": "83170b7c8b4a861fcf1946dd1127d63cdb7d8d10", > "dest": "/tmp/openshift-ansible-HQYMYb/etcd.yaml", > "diff": [], > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/tmp/openshift-ansible-HQYMYb/etcd.yaml", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": "etcd.yaml", > "owner": null, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/root/.ansible/tmp/ansible-tmp-1528823231.79-134164798802896/source", > "unsafe_writes": null, > "validate": null > } > }, > "item": 
"etcd.yaml", > "md5sum": "cc7ee722730bb4a72c5079d6bf6c16d4", > "mode": "0600", > "owner": "root", > "secontext": "unconfined_u:object_r:admin_home_t:s0", > "size": 903, > "src": "/root/.ansible/tmp/ansible-tmp-1528823231.79-134164798802896/source", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:12,528 p=5860 u=root | TASK [etcd : Update etcd static pod] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:07:12,528 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:63 >2018-06-12 17:07:12,736 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/yedit.py >2018-06-12 17:07:13,005 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=etcd.yaml) => { > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "append": false, > "backup": false, > "backup_ext": ".20180612T170712", > "content": null, > "content_type": "yaml", > "curr_value": null, > "curr_value_format": "yaml", > "debug": false, > "edits": [ > { > "key": "spec.containers[0].image", > "value": "registry.reg-aws.openshift.com:443/rhel7/etcd:3.2.15" > } > ], > "index": null, > "key": "", > "separator": ".", > "src": "/tmp/openshift-ansible-HQYMYb/etcd.yaml", > "state": "present", > "update": false, > "value": null, > "value_type": "" > } > }, > "item": "etcd.yaml", > "result": [ > { > "edit": { > "apiVersion": "v1", > "kind": "Pod", > "metadata": { > "annotations": { > "scheduler.alpha.kubernetes.io/critical-pod": "" > }, > "labels": { > "openshift.io/component": "etcd", > "openshift.io/control-plane": "true" > }, > "name": "master-etcd", > "namespace": "kube-system" > }, > "spec": { > "containers": [ > { > "args": [ > "#!/bin/sh\nset -o allexport\nsource /etc/etcd/etcd.conf\nexec etcd\n" > ], > "command": [ > "/bin/sh", > "-c" > ], > 
"image": "registry.reg-aws.openshift.com:443/rhel7/etcd:3.2.15", > "livenessProbe": { > "exec": null, > "initialDelaySeconds": 45 > }, > "name": "etcd", > "securityContext": { > "privileged": true > }, > "volumeMounts": [ > { > "mountPath": "/etc/etcd/", > "name": "master-config", > "readOnly": true > }, > { > "mountPath": "/var/lib/etcd/", > "name": "master-data" > } > ], > "workingDir": "/var/lib/etcd" > } > ], > "hostNetwork": true, > "restartPolicy": "Always", > "volumes": [ > { > "hostPath": { > "path": "/etc/etcd/" > }, > "name": "master-config" > }, > { > "hostPath": { > "path": "/var/lib/etcd" > }, > "name": "master-data" > } > ] > } > }, > "key": "spec.containers[0].image" > } > ], > "state": "present" >} >2018-06-12 17:07:13,015 p=5860 u=root | TASK [etcd : Set etcd host as a probe target host] ****************************************************************************************************************************************************************************************** >2018-06-12 17:07:13,015 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:72 >2018-06-12 17:07:13,050 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/yedit.py >2018-06-12 17:07:13,284 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=etcd.yaml) => { > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "append": false, > "backup": false, > "backup_ext": ".20180612T170713", > "content": null, > "content_type": "yaml", > "curr_value": null, > "curr_value_format": "yaml", > "debug": false, > "edits": [ > { > "key": "spec.containers[0].livenessProbe.exec.command", > "value": [ > "etcdctl", > "--cert-file", > "/etc/etcd/peer.crt", > "--key-file", > "/etc/etcd/peer.key", > "--ca-file", > "/etc/etcd/ca.crt", > "-C", > "https://172.31.50.118:2379", > "cluster-health" > ] > } > ], > "index": null, > "key": "", > "separator": ".", > "src": 
"/tmp/openshift-ansible-HQYMYb/etcd.yaml", > "state": "present", > "update": false, > "value": null, > "value_type": "" > } > }, > "item": "etcd.yaml", > "result": [ > { > "edit": { > "apiVersion": "v1", > "kind": "Pod", > "metadata": { > "annotations": { > "scheduler.alpha.kubernetes.io/critical-pod": "" > }, > "labels": { > "openshift.io/component": "etcd", > "openshift.io/control-plane": "true" > }, > "name": "master-etcd", > "namespace": "kube-system" > }, > "spec": { > "containers": [ > { > "args": [ > "#!/bin/sh\nset -o allexport\nsource /etc/etcd/etcd.conf\nexec etcd\n" > ], > "command": [ > "/bin/sh", > "-c" > ], > "image": "registry.reg-aws.openshift.com:443/rhel7/etcd:3.2.15", > "livenessProbe": { > "exec": { > "command": [ > "etcdctl", > "--cert-file", > "/etc/etcd/peer.crt", > "--key-file", > "/etc/etcd/peer.key", > "--ca-file", > "/etc/etcd/ca.crt", > "-C", > "https://172.31.50.118:2379", > "cluster-health" > ] > }, > "initialDelaySeconds": 45 > }, > "name": "etcd", > "securityContext": { > "privileged": true > }, > "volumeMounts": [ > { > "mountPath": "/etc/etcd/", > "name": "master-config", > "readOnly": true > }, > { > "mountPath": "/var/lib/etcd/", > "name": "master-data" > } > ], > "workingDir": "/var/lib/etcd" > } > ], > "hostNetwork": true, > "restartPolicy": "Always", > "volumes": [ > { > "hostPath": { > "path": "/etc/etcd/" > }, > "name": "master-config" > }, > { > "hostPath": { > "path": "/var/lib/etcd" > }, > "name": "master-data" > } > ] > } > }, > "key": "spec.containers[0].livenessProbe.exec.command" > } > ], > "state": "present" >} >2018-06-12 17:07:13,294 p=5860 u=root | TASK [etcd : Deploy etcd static pod] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:07:13,294 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:91 >2018-06-12 17:07:13,327 
p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:13,527 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=etcd.yaml) => { > "changed": false, > "checksum": "09be073e474558439175a1181b3980700f16ddff", > "dest": "/etc/origin/node/pods/etcd.yaml", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/node/pods/etcd.yaml", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/tmp/openshift-ansible-HQYMYb/etcd.yaml", > "unsafe_writes": null, > "validate": null > } > }, > "item": "etcd.yaml", > "md5sum": "0763ee69f2ed490008fb4193b0d40732", > "mode": "0600", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1194, > "src": "/tmp/openshift-ansible-HQYMYb/etcd.yaml", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:13,536 p=5860 u=root | TASK [etcd : Remove temp directory] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:13,536 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/static.yml:100 >2018-06-12 17:07:13,565 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:13,774 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/tmp/openshift-ansible-HQYMYb", > "state": "absent" > }, > "before": { > "path": "/tmp/openshift-ansible-HQYMYb", > "state": "directory" > 
} > }, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "name": "/tmp/openshift-ansible-HQYMYb", > "original_basename": null, > "owner": null, > "path": "/tmp/openshift-ansible-HQYMYb", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "path": "/tmp/openshift-ansible-HQYMYb", > "state": "absent" >} >2018-06-12 17:07:13,783 p=5860 u=root | TASK [etcd : set etcd host and ip facts] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:13,783 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/set_facts.yml:2 >2018-06-12 17:07:13,799 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,806 p=5860 u=root | TASK [etcd : Add iptables allow rules] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:07:13,806 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:4 >2018-06-12 17:07:13,831 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2379/tcp', u'service': u'etcd'}) => { > "changed": false, > "item": { > "port": "2379/tcp", > "service": "etcd" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,835 p=5860 
u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2380/tcp', u'service': u'etcd peering'}) => { > "changed": false, > "item": { > "port": "2380/tcp", > "service": "etcd peering" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,842 p=5860 u=root | TASK [etcd : Remove iptables rules] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:13,842 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:13 >2018-06-12 17:07:13,863 p=5860 u=root | TASK [etcd : Add firewalld allow rules] ***************************************************************************************************************************************************************************************************** >2018-06-12 17:07:13,863 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:24 >2018-06-12 17:07:13,886 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2379/tcp', u'service': u'etcd'}) => { > "changed": false, > "item": { > "port": "2379/tcp", > "service": "etcd" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,892 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'2380/tcp', u'service': u'etcd peering'}) => { > "changed": false, > "item": { > "port": "2380/tcp", > "service": "etcd peering" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,901 p=5860 u=root | TASK [etcd : Remove firewalld allow rules] ************************************************************************************************************************************************************************************************** 
>2018-06-12 17:07:13,901 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/firewall.yml:33 >2018-06-12 17:07:13,922 p=5860 u=root | TASK [etcd : Install etcd] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:13,922 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:8 >2018-06-12 17:07:13,937 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,944 p=5860 u=root | TASK [etcd : include_tasks] ***************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:13,944 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:13 >2018-06-12 17:07:13,960 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,967 p=5860 u=root | TASK [etcd : Create configuration directory] ************************************************************************************************************************************************************************************************ >2018-06-12 17:07:13,967 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:20 >2018-06-12 17:07:13,985 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:13,993 p=5860 u=root | TASK [etcd : Copy service file for etcd instance] 
******************************************************************************************************************************************************************************************* >2018-06-12 17:07:13,993 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:27 >2018-06-12 17:07:14,011 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,019 p=5860 u=root | TASK [etcd : Create third party etcd service.d directory exists] **************************************************************************************************************************************************************************** >2018-06-12 17:07:14,019 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:33 >2018-06-12 17:07:14,033 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,041 p=5860 u=root | TASK [etcd : Configure third part etcd service unit file] *********************************************************************************************************************************************************************************** >2018-06-12 17:07:14,041 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:38 >2018-06-12 17:07:14,056 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,064 p=5860 u=root | TASK [etcd : Ensure etcd datadir ownership for thirdparty datadir] ************************************************************************************************************************************************************************** >2018-06-12 17:07:14,064 p=5860 u=root | task path: 
/root/openshift-ansible/roles/etcd/tasks/rpm.yml:44 >2018-06-12 17:07:14,081 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,089 p=5860 u=root | TASK [etcd : Write etcd global config file] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:14,089 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:54 >2018-06-12 17:07:14,106 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,116 p=5860 u=root | TASK [etcd : Ensure etcd owns the files in it's config dir] ********************************************************************************************************************************************************************************* >2018-06-12 17:07:14,116 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:62 >2018-06-12 17:07:14,133 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,140 p=5860 u=root | TASK [etcd : Enable etcd] ******************************************************************************************************************************************************************************************************************* >2018-06-12 17:07:14,140 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:65 >2018-06-12 17:07:14,155 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} 
>2018-06-12 17:07:14,162 p=5860 u=root | TASK [etcd : Set fact etcd_service_status_changed] ****************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,162 p=5860 u=root | task path: /root/openshift-ansible/roles/etcd/tasks/rpm.yml:73 >2018-06-12 17:07:14,176 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,183 p=5860 u=root | TASK [nickhammond.logrotate : nickhammond.logrotate | Install logrotate] ******************************************************************************************************************************************************************** >2018-06-12 17:07:14,183 p=5860 u=root | task path: /root/openshift-ansible/roles/nickhammond.logrotate/tasks/main.yml:2 >2018-06-12 17:07:14,212 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:14,576 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "logrotate" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "logrotate-3.8.6-15.el7.x86_64 providing logrotate is already installed" > ] >} >2018-06-12 17:07:14,584 p=5860 u=root | TASK [nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d scripts] 
************************************************************************************************************************************************************ >2018-06-12 17:07:14,584 p=5860 u=root | task path: /root/openshift-ansible/roles/nickhammond.logrotate/tasks/main.yml:8 >2018-06-12 17:07:14,596 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,597 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,601 p=5860 u=root | PLAY [etcd Install Checkpoint End] ********************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,603 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,609 p=5860 u=root | TASK [Set etcd install 'Complete'] ********************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,609 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-etcd/private/config.yml:56 >2018-06-12 17:07:14,646 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_etcd": { > "end": "20180612170714Z", > "status": "Complete" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:14,647 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,648 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,651 p=5860 u=root | PLAY [NFS Install Checkpoint Start] ********************************************************************************************************************************************************************************************************* >2018-06-12 17:07:14,653 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,659 p=5860 u=root | TASK [Set NFS install 'In Progress'] 
******************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,660 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-nfs/private/config.yml:6 >2018-06-12 17:07:14,674 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,674 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,675 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,678 p=5860 u=root | PLAY [Configure nfs] ************************************************************************************************************************************************************************************************************************ >2018-06-12 17:07:14,678 p=5860 u=root | skipping: no hosts matched >2018-06-12 17:07:14,681 p=5860 u=root | PLAY [NFS Install Checkpoint End] *********************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,683 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,689 p=5860 u=root | TASK [Set NFS install 'Complete'] *********************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,689 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-nfs/private/config.yml:25 >2018-06-12 17:07:14,703 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,704 p=5860 u=root | META: ran handlers >2018-06-12 
17:07:14,704 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,708 p=5860 u=root | PLAY [Load Balancer Install Checkpoint Start] *********************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,710 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,716 p=5860 u=root | TASK [Set load balancer install 'In Progress'] ********************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,716 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-loadbalancer/private/config.yml:6 >2018-06-12 17:07:14,730 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,731 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,731 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,734 p=5860 u=root | PLAY [Configure load balancers] ************************************************************************************************************************************************************************************************************* >2018-06-12 17:07:14,734 p=5860 u=root | skipping: no hosts matched >2018-06-12 17:07:14,737 p=5860 u=root | PLAY [Load Balancer Install Checkpoint End] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:14,739 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,746 p=5860 u=root | TASK [Set load balancer install 'Complete'] 
************************************************************************************************************************************************************************************************* >2018-06-12 17:07:14,746 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-loadbalancer/private/config.yml:27 >2018-06-12 17:07:14,759 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:14,760 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,760 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,764 p=5860 u=root | PLAY [Master Install Checkpoint Start] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,766 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,772 p=5860 u=root | TASK [Set Master install 'In Progress'] ***************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,772 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-master/private/config.yml:6 >2018-06-12 17:07:14,806 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_stats": { > "aggregate": true, > "data": { > "installer_phase_master": { > "playbook": "playbooks/openshift-master/config.yml", > "start": "20180612170714Z", > "status": "In Progress", > "title": "Master Install" > } > }, > "per_host": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:14,808 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,808 p=5860 u=root | META: ran handlers >2018-06-12 17:07:14,812 p=5860 u=root | PLAY [Create OpenShift certificates for master 
hosts] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,829 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:14,855 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:07:15,248 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:07:15,266 p=5860 u=root | META: ran handlers >2018-06-12 17:07:15,273 p=5860 u=root | TASK [openshift_master_facts : Verify required variables are set] *************************************************************************************************************************************************************************** >2018-06-12 17:07:15,273 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:2 >2018-06-12 17:07:15,287 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:15,295 p=5860 u=root | TASK [openshift_master_facts : Set g_metrics_hostname] ************************************************************************************************************************************************************************************** >2018-06-12 17:07:15,295 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:14 >2018-06-12 17:07:15,323 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "g_metrics_hostname": "hawkular-metrics.apps.0612-g-9.qe.rhcloud.com" > }, > "changed": false, > "failed": false >} 
>2018-06-12 17:07:15,330 p=5860 u=root | TASK [openshift_master_facts : set_fact] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:15,331 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:20 >2018-06-12 17:07:15,345 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:15,352 p=5860 u=root | TASK [openshift_master_facts : Set master facts] ******************************************************************************************************************************************************************************************** >2018-06-12 17:07:15,352 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:24 >2018-06-12 17:07:15,399 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:15,912 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > 
"ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": 
"https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" 
encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > 
"local_facts": { > "admission_plugin_config": { > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": "", > "api_url": "", > "api_use_ssl": "", > "bind_addr": "", > "cluster_hostname": "", > "cluster_public_hostname": "", > "console_path": "", > "console_port": "", > "console_url": "", > "console_use_ssl": "", > "controller_args": "", > "disabled_features": "", > "image_policy_config": "", > "kube_admission_plugin_config": "", > "ldap_ca": "", > "logging_public_url": "", > "logout_url": "", > "openid_ca": "", > "public_api_url": "", > "public_console_url": "", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": "", > "session_name": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "master", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:15,925 p=5860 u=root | TASK [openshift_master_facts : Determine if scheduler config present] *********************************************************************************************************************************************************************** >2018-06-12 17:07:15,925 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:55 >2018-06-12 17:07:15,953 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:16,163 p=5860 u=root | ok: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/scheduler.json" > } > }, > "stat": { > "atime": 1528820640.9172165, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "2953b4037c413a5acf89bf5ef868119b889f6fa4", > "ctime": 1528820640.9192166, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 71303256, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "7679d40705888fa1f567ec8d3ff89dad", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820640.170217, > "nlink": 1, > "path": "/etc/origin/master/scheduler.json", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1923, > "uid": 0, > "version": "18446744072291978203", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:16,172 p=5860 u=root | TASK [openshift_master_facts : Set Default scheduler predicates and priorities] ************************************************************************************************************************************************************* >2018-06-12 17:07:16,172 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:60 >2018-06-12 17:07:16,207 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_scheduler_default_predicates": [ > { > "name": "NoVolumeZoneConflict" > }, > { > "name": "MaxEBSVolumeCount" > }, > { > "name": 
"MaxGCEPDVolumeCount" > }, > { > "name": "MaxAzureDiskVolumeCount" > }, > { > "name": "MatchInterPodAffinity" > }, > { > "name": "NoDiskConflict" > }, > { > "name": "GeneralPredicates" > }, > { > "name": "PodToleratesNodeTaints" > }, > { > "name": "CheckNodeMemoryPressure" > }, > { > "name": "CheckNodeDiskPressure" > }, > { > "name": "CheckVolumeBinding" > }, > { > "argument": { > "serviceAffinity": { > "labels": [ > "region" > ] > } > }, > "name": "Region" > } > ], > "openshift_master_scheduler_default_priorities": [ > { > "name": "SelectorSpreadPriority", > "weight": 1 > }, > { > "name": "InterPodAffinityPriority", > "weight": 1 > }, > { > "name": "LeastRequestedPriority", > "weight": 1 > }, > { > "name": "BalancedResourceAllocation", > "weight": 1 > }, > { > "name": "NodePreferAvoidPodsPriority", > "weight": 10000 > }, > { > "name": "NodeAffinityPriority", > "weight": 1 > }, > { > "name": "TaintTolerationPriority", > "weight": 1 > }, > { > "argument": { > "serviceAntiAffinity": { > "label": "zone" > } > }, > "name": "Zone", > "weight": 2 > } > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:16,215 p=5860 u=root | TASK [openshift_master_facts : Retrieve current scheduler config] *************************************************************************************************************************************************************************** >2018-06-12 17:07:16,216 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:68 >2018-06-12 17:07:16,244 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py >2018-06-12 17:07:16,438 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "content": 
"ewogICAgImFwaVZlcnNpb24iOiAidjEiLCAKICAgICJraW5kIjogIlBvbGljeSIsIAogICAgInByZWRpY2F0ZXMiOiBbCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJOb1ZvbHVtZVpvbmVDb25mbGljdCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEVCU1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF4R0NFUERWb2x1bWVDb3VudCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEF6dXJlRGlza1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF0Y2hJbnRlclBvZEFmZmluaXR5IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTm9EaXNrQ29uZmxpY3QiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJHZW5lcmFsUHJlZGljYXRlcyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlBvZFRvbGVyYXRlc05vZGVUYWludHMiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJDaGVja05vZGVNZW1vcnlQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrTm9kZURpc2tQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrVm9sdW1lQmluZGluZyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJhcmd1bWVudCI6IHsKICAgICAgICAgICAgICAgICJzZXJ2aWNlQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVscyI6IFsKICAgICAgICAgICAgICAgICAgICAgICAgInJlZ2lvbiIKICAgICAgICAgICAgICAgICAgICBdCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJSZWdpb24iCiAgICAgICAgfQogICAgXSwgCiAgICAicHJpb3JpdGllcyI6IFsKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlNlbGVjdG9yU3ByZWFkUHJpb3JpdHkiLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkludGVyUG9kQWZmaW5pdHlQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTGVhc3RSZXF1ZXN0ZWRQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiQmFsYW5jZWRSZXNvdXJjZUFsbG9jYXRpb24iLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVQcmVmZXJBdm9pZFBvZHNQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h
0IjogMTAwMDAKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVBZmZpbml0eVByaW9yaXR5IiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAxCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJUYWludFRvbGVyYXRpb25Qcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgImFyZ3VtZW50IjogewogICAgICAgICAgICAgICAgInNlcnZpY2VBbnRpQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVsIjogInpvbmUiCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJab25lIiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAyCiAgICAgICAgfQogICAgXQp9", > "encoding": "base64", > "failed": false, > "invocation": { > "module_args": { > "src": "/etc/origin/master/scheduler.json" > } > }, > "source": "/etc/origin/master/scheduler.json" >} >2018-06-12 17:07:16,446 p=5860 u=root | TASK [openshift_master_facts : Set openshift_master_scheduler_current_config] *************************************************************************************************************************************************************** >2018-06-12 17:07:16,446 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:73 >2018-06-12 17:07:16,479 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_scheduler_current_config": { > "apiVersion": "v1", > "kind": "Policy", > "predicates": [ > { > "name": "NoVolumeZoneConflict" > }, > { > "name": "MaxEBSVolumeCount" > }, > { > "name": "MaxGCEPDVolumeCount" > }, > { > "name": "MaxAzureDiskVolumeCount" > }, > { > "name": "MatchInterPodAffinity" > }, > { > "name": "NoDiskConflict" > }, > { > "name": "GeneralPredicates" > }, > { > "name": "PodToleratesNodeTaints" > }, > { > "name": "CheckNodeMemoryPressure" > }, > { > "name": "CheckNodeDiskPressure" > }, > { > "name": "CheckVolumeBinding" > }, > { > "argument": { > "serviceAffinity": { > "labels": [ > "region" > ] > } > }, > "name": "Region" > } > ], > "priorities": [ > { > "name": 
"SelectorSpreadPriority", > "weight": 1 > }, > { > "name": "InterPodAffinityPriority", > "weight": 1 > }, > { > "name": "LeastRequestedPriority", > "weight": 1 > }, > { > "name": "BalancedResourceAllocation", > "weight": 1 > }, > { > "name": "NodePreferAvoidPodsPriority", > "weight": 10000 > }, > { > "name": "NodeAffinityPriority", > "weight": 1 > }, > { > "name": "TaintTolerationPriority", > "weight": 1 > }, > { > "argument": { > "serviceAntiAffinity": { > "label": "zone" > } > }, > "name": "Zone", > "weight": 2 > } > ] > } > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:16,488 p=5860 u=root | TASK [openshift_master_facts : Test if scheduler config is readable] ************************************************************************************************************************************************************************ >2018-06-12 17:07:16,488 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:77 >2018-06-12 17:07:16,505 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:16,512 p=5860 u=root | TASK [openshift_master_facts : Set current scheduler predicates and priorities] ************************************************************************************************************************************************************* >2018-06-12 17:07:16,512 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:82 >2018-06-12 17:07:16,546 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_scheduler_current_predicates": [ > { > "name": "NoVolumeZoneConflict" > }, > { > "name": "MaxEBSVolumeCount" > }, > { > "name": "MaxGCEPDVolumeCount" > }, > { > "name": "MaxAzureDiskVolumeCount" > }, > { > "name": "MatchInterPodAffinity" > }, > { > "name": "NoDiskConflict" > 
}, > { > "name": "GeneralPredicates" > }, > { > "name": "PodToleratesNodeTaints" > }, > { > "name": "CheckNodeMemoryPressure" > }, > { > "name": "CheckNodeDiskPressure" > }, > { > "name": "CheckVolumeBinding" > }, > { > "argument": { > "serviceAffinity": { > "labels": [ > "region" > ] > } > }, > "name": "Region" > } > ], > "openshift_master_scheduler_current_priorities": [ > { > "name": "SelectorSpreadPriority", > "weight": 1 > }, > { > "name": "InterPodAffinityPriority", > "weight": 1 > }, > { > "name": "LeastRequestedPriority", > "weight": 1 > }, > { > "name": "BalancedResourceAllocation", > "weight": 1 > }, > { > "name": "NodePreferAvoidPodsPriority", > "weight": 10000 > }, > { > "name": "NodeAffinityPriority", > "weight": 1 > }, > { > "name": "TaintTolerationPriority", > "weight": 1 > }, > { > "argument": { > "serviceAntiAffinity": { > "label": "zone" > } > }, > "name": "Zone", > "weight": 2 > } > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:16,559 p=5860 u=root | TASK [openshift_named_certificates : set_fact] ********************************************************************************************************************************************************************************************** >2018-06-12 17:07:16,560 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_named_certificates/tasks/main.yml:2 >2018-06-12 17:07:16,575 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:16,582 p=5860 u=root | TASK [openshift_named_certificates : openshift_facts] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:16,582 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_named_certificates/tasks/main.yml:8 >2018-06-12 
17:07:16,613 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:17,118 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > 
"buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": 
"https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", 
> "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [ > "__omit_place_holder__8878f1724e43fb6d46d85d292027cb10b064f56c" > ], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "named_certificates": [] > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "master", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:17,133 p=5860 u=root | TASK [openshift_named_certificates : Clear named certificates] ****************************************************************************************************************************************************************************** >2018-06-12 17:07:17,133 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_named_certificates/tasks/main.yml:15 >2018-06-12 17:07:17,148 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:17,156 p=5860 u=root | TASK [openshift_named_certificates : Ensure named certificate 
directory exists] ************************************************************************************************************************************************************* >2018-06-12 17:07:17,156 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_named_certificates/tasks/main.yml:21 >2018-06-12 17:07:17,183 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:17,384 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/etc/origin/master/named_certificates/" > }, > "before": { > "path": "/etc/origin/master/named_certificates/" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 448, > "original_basename": null, > "owner": null, > "path": "/etc/origin/master/named_certificates/", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0700", > "owner": "root", > "path": "/etc/origin/master/named_certificates/", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 6, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:17,392 p=5860 u=root | TASK [openshift_named_certificates : Land named certificates] ******************************************************************************************************************************************************************************* >2018-06-12 17:07:17,392 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_named_certificates/tasks/main.yml:27 >2018-06-12 17:07:17,413 p=5860 u=root | TASK [openshift_named_certificates : Land named 
certificate keys] *************************************************************************************************************************************************************************** >2018-06-12 17:07:17,414 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_named_certificates/tasks/main.yml:33 >2018-06-12 17:07:17,435 p=5860 u=root | TASK [openshift_named_certificates : Land named CA certificates] **************************************************************************************************************************************************************************** >2018-06-12 17:07:17,435 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_named_certificates/tasks/main.yml:40 >2018-06-12 17:07:17,457 p=5860 u=root | TASK [openshift_cli : Install clients] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:07:17,457 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cli/tasks/main.yml:2 >2018-06-12 17:07:17,487 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:19,621 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "atomic-openshift-clients-3.10.0*" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "atomic-openshift-clients-3.10.0-0.66.0.git.0.c9a4e2b.el7.x86_64 providing atomic-openshift-clients-3.10.0* is already 
installed" > ] >} >2018-06-12 17:07:19,629 p=5860 u=root | TASK [openshift_cli : Pull CLI Image (docker)] ********************************************************************************************************************************************************************************************** >2018-06-12 17:07:19,629 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cli/tasks/main.yml:9 >2018-06-12 17:07:19,643 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:19,652 p=5860 u=root | TASK [openshift_cli : Pull CLI Image (atomic)] ********************************************************************************************************************************************************************************************** >2018-06-12 17:07:19,652 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cli/tasks/main.yml:14 >2018-06-12 17:07:19,666 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:19,674 p=5860 u=root | TASK [openshift_cli : Copy client binaries/symlinks out of CLI image for use on the host] *************************************************************************************************************************************************** >2018-06-12 17:07:19,674 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cli/tasks/main.yml:22 >2018-06-12 17:07:19,689 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:19,696 p=5860 u=root | TASK [openshift_cli : Install bash completion for oc tools] 
********************************************************************************************************************************************************************************* >2018-06-12 17:07:19,696 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cli/tasks/main.yml:28 >2018-06-12 17:07:19,727 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:20,115 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "bash-completion" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "1:bash-completion-2.1-6.el7.noarch providing bash-completion is already installed" > ] >} >2018-06-12 17:07:20,123 p=5860 u=root | TASK [openshift_cli : Ensure binaries from containerized deployments are cleaned up.] 
******************************************************************************************************************************************************* >2018-06-12 17:07:20,123 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cli/tasks/main.yml:35 >2018-06-12 17:07:20,154 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:20,354 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/usr/local/bin/oc) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/usr/local/bin/oc", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/usr/local/bin/oc", > "path": "/usr/local/bin/oc", > "state": "absent" >} >2018-06-12 17:07:20,446 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:20,644 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/usr/local/bin/openshift) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/usr/local/bin/openshift", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > 
"validate": null > } > }, > "item": "/usr/local/bin/openshift", > "path": "/usr/local/bin/openshift", > "state": "absent" >} >2018-06-12 17:07:20,661 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:20,865 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/usr/local/bin/kubectl) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/usr/local/bin/kubectl", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/usr/local/bin/kubectl", > "path": "/usr/local/bin/kubectl", > "state": "absent" >} >2018-06-12 17:07:20,875 p=5860 u=root | TASK [openshift_ca : fail] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:20,875 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:2 >2018-06-12 17:07:20,889 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:20,904 p=5860 u=root | TASK [openshift_ca : Install the base package for admin tooling] **************************************************************************************************************************************************************************** >2018-06-12 17:07:20,905 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_ca/tasks/main.yml:6 >2018-06-12 17:07:20,950 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:23,068 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "atomic-openshift-3.10.0*" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "atomic-openshift-3.10.0-0.66.0.git.0.c9a4e2b.el7.x86_64 providing atomic-openshift-3.10.0* is already installed" > ] >} >2018-06-12 17:07:23,076 p=5860 u=root | TASK [openshift_ca : Reload generated facts] ************************************************************************************************************************************************************************************************ >2018-06-12 17:07:23,076 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:16 >2018-06-12 17:07:23,105 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:23,120 p=5860 u=root | TASK [openshift_ca : Create openshift_ca_config_dir if it does not exist] ******************************************************************************************************************************************************************* >2018-06-12 17:07:23,120 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:21 >2018-06-12 17:07:23,150 p=5860 
u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:23,347 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/etc/origin/master" > }, > "before": { > "path": "/etc/origin/master" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/etc/origin/master", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0700", > "owner": "root", > "path": "/etc/origin/master", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 4096, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:23,366 p=5860 u=root | TASK [openshift_ca : Determine if CA must be created] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:23,366 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:28 >2018-06-12 17:07:23,397 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:23,603 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca-bundle.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > 
"get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/ca-bundle.crt" > } > }, > "item": "ca-bundle.crt", > "stat": { > "atime": 1528820592.991222, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "ctime": 1528820584.8802187, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383747, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "b2df5ea175494a55370508d54232e643", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820584.8802187, > "nlink": 1, > "path": "/etc/origin/master/ca-bundle.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1070, > "uid": 0, > "version": "1489613721", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:23,616 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:23,820 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/ca.crt" > } > }, > "item": "ca.crt", > "stat": { > "atime": 1528820584.8792188, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "ctime": 1528820584.6422186, > "dev": 66307, > "device_type": 0, > "executable": false, > 
"exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383742, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "b2df5ea175494a55370508d54232e643", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820584.6422186, > "nlink": 1, > "path": "/etc/origin/master/ca.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1070, > "uid": 0, > "version": "18446744073682655378", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:23,914 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:24,125 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca.key) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/ca.key" > } > }, > "item": "ca.key", > "stat": { > "atime": 1528820584.8802187, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "1b038281ef62cc8b3d32e41fdb3d55515632a1c8", > "ctime": 1528820584.6432185, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383743, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "8d61856a3d04d2fe4bac853434938af6", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820584.6432185, > "nlink": 1, > "path": "/etc/origin/master/ca.key", > "pw_name": "root", > 
"readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 1679, > "uid": 0, > "version": "493502629", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:24,148 p=5860 u=root | TASK [openshift_ca : Determine if front-proxy CA must be created] *************************************************************************************************************************************************************************** >2018-06-12 17:07:24,148 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:39 >2018-06-12 17:07:24,179 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:24,385 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=front-proxy-ca.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/front-proxy-ca.crt" > } > }, > "item": "front-proxy-ca.crt", > "stat": { > "atime": 1528820589.3932204, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "a0a8600494737838624d1c3ccba0bf658b77e8e4", > "ctime": 1528820583.8152182, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383739, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "63748b3944cab18b9bd12c84da6c586c", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820583.8152182, > "nlink": 1, > "path": "/etc/origin/master/front-proxy-ca.crt", > "pw_name": "root", > 
"readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1070, > "uid": 0, > "version": "18446744073202731832", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:24,475 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:24,682 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=front-proxy-ca.key) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/front-proxy-ca.key" > } > }, > "item": "front-proxy-ca.key", > "stat": { > "atime": 1528820589.3932204, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "fb9580b121f9336acdaee1f5fa3c79f8de0d5afe", > "ctime": 1528820583.8152182, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383740, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "5bacdfa2a7d78936151fc35dfba6ef88", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820583.8152182, > "nlink": 1, > "path": "/etc/origin/master/front-proxy-ca.key", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 1679, > "uid": 0, > "version": "302537572", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:24,693 p=5860 u=root | TASK [openshift_ca : set_fact] 
************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:24,693 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:49 >2018-06-12 17:07:24,726 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "master_ca_missing": false, > "master_front_proxy_ca_missing": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:24,735 p=5860 u=root | TASK [openshift_ca : Retain original serviceaccount keys] *********************************************************************************************************************************************************************************** >2018-06-12 17:07:24,735 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:58 >2018-06-12 17:07:24,753 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/master/serviceaccounts.private.key) => { > "changed": false, > "item": "/etc/origin/master/serviceaccounts.private.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:24,757 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/master/serviceaccounts.public.key) => { > "changed": false, > "item": "/etc/origin/master/serviceaccounts.public.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:24,777 p=5860 u=root | TASK [openshift_ca : Deploy master ca certificate] ****************************************************************************************************************************************************************************************** >2018-06-12 17:07:24,777 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:68 >2018-06-12 17:07:24,795 
p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'dest': u'ca.crt', u'src': u''}) => { > "changed": false, > "item": { > "dest": "ca.crt", > "src": "" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:24,801 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'dest': u'ca.key', u'src': u''}) => { > "changed": false, > "item": { > "dest": "ca.key", > "src": "" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:24,809 p=5860 u=root | TASK [openshift_ca : Deploy additional ca] ************************************************************************************************************************************************************************************************** >2018-06-12 17:07:24,809 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:83 >2018-06-12 17:07:24,823 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:24,839 p=5860 u=root | TASK [openshift_ca : Create ca serial] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:07:24,839 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:90 >2018-06-12 17:07:24,854 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:24,862 p=5860 u=root | TASK [openshift_ca : find] 
****************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:24,862 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:99 >2018-06-12 17:07:25,053 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/find.py >2018-06-12 17:07:25,256 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "examined": 0, > "failed": false, > "files": [], > "invocation": { > "module_args": { > "age": null, > "age_stamp": "mtime", > "contains": null, > "file_type": "file", > "follow": false, > "get_checksum": false, > "hidden": false, > "paths": [ > "/etc/origin/master/legacy-ca/" > ], > "patterns": [ > ".*-ca.crt" > ], > "recurse": false, > "size": null, > "use_regex": true > } > }, > "matched": 0, > "msg": "/etc/origin/master/legacy-ca/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n" >} >2018-06-12 17:07:25,272 p=5860 u=root | TASK [openshift_ca : Create the front-proxy CA if it does not already exist] **************************************************************************************************************************************************************** >2018-06-12 17:07:25,272 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:106 >2018-06-12 17:07:25,289 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:25,304 p=5860 u=root | TASK [openshift_ca : Create the master certificates if they do not already exist] *********************************************************************************************************************************************************** >2018-06-12 17:07:25,304 p=5860 
u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:119 >2018-06-12 17:07:25,320 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:25,335 p=5860 u=root | TASK [openshift_ca : command] *************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:25,335 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:147 >2018-06-12 17:07:25,363 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:25,559 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "mktemp", > "-d", > "/tmp/openshift-ansible-XXXXXX" > ], > "delta": "0:00:00.002301", > "end": "2018-06-12 17:07:25.545285", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:25.542984", > "stderr": "", > "stderr_lines": [], > "stdout": "/tmp/openshift-ansible-dfYjqz", > "stdout_lines": [ > "/tmp/openshift-ansible-dfYjqz" > ] >} >2018-06-12 17:07:25,570 p=5860 u=root | TASK [openshift_ca : copy] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:25,570 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:150 >2018-06-12 17:07:25,598 p=5860 
u=root | TASK [openshift_ca : copy] ****************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:25,599 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:157 >2018-06-12 17:07:25,629 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:25,828 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "dest": "/tmp/openshift-ansible-dfYjqz/ca.crt", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/tmp/openshift-ansible-dfYjqz/ca.crt", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": null, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/etc/origin/master/ca.crt", > "unsafe_writes": null, > "validate": null > } > }, > "md5sum": "b2df5ea175494a55370508d54232e643", > "mode": "0644", > "owner": "root", > "secontext": "unconfined_u:object_r:user_tmp_t:s0", > "size": 1070, > "src": "/etc/origin/master/ca.crt", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:25,845 p=5860 u=root | TASK [openshift_ca : assemble] ************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:25,845 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:163 >2018-06-12 
17:07:25,877 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/assemble.py >2018-06-12 17:07:26,073 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "dest": "/etc/origin/master/client-ca-bundle.crt", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/client-ca-bundle.crt", > "directory_mode": null, > "follow": false, > "force": null, > "group": "root", > "ignore_hidden": false, > "mode": 420, > "owner": "root", > "regexp": null, > "remote_src": false, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/tmp/openshift-ansible-dfYjqz", > "unsafe_writes": null, > "validate": null > } > }, > "md5sum": "b2df5ea175494a55370508d54232e643", > "mode": "0644", > "msg": "OK", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1070, > "src": "/tmp/openshift-ansible-dfYjqz", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:26,089 p=5860 u=root | TASK [openshift_ca : Test local loopback context] ******************************************************************************************************************************************************************************************* >2018-06-12 17:07:26,089 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:172 >2018-06-12 17:07:26,133 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:26,423 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": [ > "oc", > "config", > "view", > 
"--config=/etc/origin/master/openshift-master.kubeconfig" > ], > "delta": "0:00:00.089115", > "end": "2018-06-12 17:07:26.405611", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "oc config view --config=/etc/origin/master/openshift-master.kubeconfig", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:26.316496", > "stderr": "", > "stderr_lines": [], > "stdout": "apiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: REDACTED\n server: https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443\n name: ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443\n- cluster:\n certificate-authority-data: REDACTED\n server: https://ip-172-31-50-118.us-west-2.compute.internal:8443\n name: ip-172-31-50-118-us-west-2-compute-internal:8443\ncontexts:\n- context:\n cluster: ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443\n namespace: default\n user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443\n name: default/ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443/system:openshift-master\n- context:\n cluster: ip-172-31-50-118-us-west-2-compute-internal:8443\n namespace: default\n user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443\n name: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master\ncurrent-context: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master\nkind: Config\npreferences: {}\nusers:\n- name: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED", > "stdout_lines": [ > "apiVersion: v1", > "clusters:", > "- cluster:", > " certificate-authority-data: REDACTED", > " server: https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > " name: 
ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443", > "- cluster:", > " certificate-authority-data: REDACTED", > " server: https://ip-172-31-50-118.us-west-2.compute.internal:8443", > " name: ip-172-31-50-118-us-west-2-compute-internal:8443", > "contexts:", > "- context:", > " cluster: ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443", > " namespace: default", > " user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > " name: default/ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443/system:openshift-master", > "- context:", > " cluster: ip-172-31-50-118-us-west-2-compute-internal:8443", > " namespace: default", > " user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > " name: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "current-context: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "kind: Config", > "preferences: {}", > "users:", > "- name: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > " user:", > " client-certificate-data: REDACTED", > " client-key-data: REDACTED" > ] >} >2018-06-12 17:07:26,440 p=5860 u=root | TASK [openshift_ca : Create temp directory for loopback master client config] *************************************************************************************************************************************************************** >2018-06-12 17:07:26,440 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:186 >2018-06-12 17:07:26,458 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:26,473 p=5860 u=root | TASK [openshift_ca : Generate the loopback master client config] 
**************************************************************************************************************************************************************************** >2018-06-12 17:07:26,474 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:189 >2018-06-12 17:07:26,490 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:26,508 p=5860 u=root | TASK [openshift_ca : Copy generated loopback master client config to master config dir] ***************************************************************************************************************************************************** >2018-06-12 17:07:26,508 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:206 >2018-06-12 17:07:26,527 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=openshift-master.crt) => { > "changed": false, > "item": "openshift-master.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:26,533 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=openshift-master.key) => { > "changed": false, > "item": "openshift-master.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:26,539 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=openshift-master.kubeconfig) => { > "changed": false, > "item": "openshift-master.kubeconfig", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:26,555 p=5860 u=root | TASK [openshift_ca : Delete temp directory] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:26,555 p=5860 
u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:215 >2018-06-12 17:07:26,572 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:26,587 p=5860 u=root | TASK [openshift_ca : Create temp directory for loopback master client config] *************************************************************************************************************************************************************** >2018-06-12 17:07:26,587 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:228 >2018-06-12 17:07:26,615 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:26,811 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "mktemp", > "-d", > "/tmp/openshift-ansible-XXXXXX" > ], > "delta": "0:00:00.002246", > "end": "2018-06-12 17:07:26.797494", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:26.795248", > "stderr": "", > "stderr_lines": [], > "stdout": "/tmp/openshift-ansible-q07AAa", > "stdout_lines": [ > "/tmp/openshift-ansible-q07AAa" > ] >} >2018-06-12 17:07:26,827 p=5860 u=root | TASK [openshift_ca : Generate the aggregator api-client config] ***************************************************************************************************************************************************************************** >2018-06-12 17:07:26,827 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:231 >2018-06-12 17:07:26,875 p=5860 
u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:27,354 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "oc", > "adm", > "create-api-client-config", > "--certificate-authority=/etc/origin/master/ca.crt", > "--client-dir=/tmp/openshift-ansible-q07AAa", > "--user=aggregator-front-proxy", > "--signer-cert=/etc/origin/master/front-proxy-ca.crt", > "--signer-key=/etc/origin/master/front-proxy-ca.key", > "--signer-serial=/etc/origin/master/ca.serial.txt", > "--expire-days=730" > ], > "delta": "0:00:00.284092", > "end": "2018-06-12 17:07:27.337880", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "oc adm create-api-client-config\n --certificate-authority=/etc/origin/master/ca.crt\n --client-dir=/tmp/openshift-ansible-q07AAa\n --user=aggregator-front-proxy\n --signer-cert=\"/etc/origin/master/front-proxy-ca.crt\"\n --signer-key=\"/etc/origin/master/front-proxy-ca.key\"\n --signer-serial=/etc/origin/master/ca.serial.txt\n --expire-days=730", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:27.053788", > "stderr": "", > "stderr_lines": [], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:07:27,373 p=5860 u=root | TASK [openshift_ca : Copy generated loopback master client config to master config dir] ***************************************************************************************************************************************************** >2018-06-12 17:07:27,373 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:244 >2018-06-12 17:07:27,404 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:27,603 p=5860 u=root | changed: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=aggregator-front-proxy.crt) => { > "changed": true, > "checksum": "a6733cb830cd5a61657d6f5777d21b81f9cd4f1a", > "dest": "/etc/origin/master/aggregator-front-proxy.crt", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/aggregator-front-proxy.crt", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": null, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/tmp/openshift-ansible-q07AAa/aggregator-front-proxy.crt", > "unsafe_writes": null, > "validate": null > } > }, > "item": "aggregator-front-proxy.crt", > "md5sum": "336dd3987deed1c06daf9967c02ce076", > "mode": "0644", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1090, > "src": "/tmp/openshift-ansible-q07AAa/aggregator-front-proxy.crt", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:27,618 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:27,821 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=aggregator-front-proxy.key) => { > "changed": true, > "checksum": "dadaa8b8795dd54f12965066b41ec48a48d126a9", > "dest": "/etc/origin/master/aggregator-front-proxy.key", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/aggregator-front-proxy.key", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 
null, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/tmp/openshift-ansible-q07AAa/aggregator-front-proxy.key", > "unsafe_writes": null, > "validate": null > } > }, > "item": "aggregator-front-proxy.key", > "md5sum": "a8276ac9bd41cce98a4486959bd59fd9", > "mode": "0644", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1675, > "src": "/tmp/openshift-ansible-q07AAa/aggregator-front-proxy.key", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:27,836 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:28,036 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=aggregator-front-proxy.kubeconfig) => { > "changed": true, > "checksum": "3cfc7205728d0aff26d3225ef7d61630cc9113ba", > "dest": "/etc/origin/master/aggregator-front-proxy.kubeconfig", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/aggregator-front-proxy.kubeconfig", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": null, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/tmp/openshift-ansible-q07AAa/aggregator-front-proxy.kubeconfig", > "unsafe_writes": null, > "validate": null > } > }, > "item": "aggregator-front-proxy.kubeconfig", > "md5sum": "1eb8d4ed914282c62d95210ef0e9fc16", > "mode": "0644", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 5626, > "src": "/tmp/openshift-ansible-q07AAa/aggregator-front-proxy.kubeconfig", > "state": "file", > "uid": 0 >} 
>2018-06-12 17:07:28,053 p=5860 u=root | TASK [openshift_ca : Delete temp directory] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:28,053 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:253 >2018-06-12 17:07:28,083 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:28,280 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "diff": { > "after": { > "path": "/tmp/openshift-ansible-q07AAa", > "state": "absent" > }, > "before": { > "path": "/tmp/openshift-ansible-q07AAa", > "state": "directory" > } > }, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "name": "/tmp/openshift-ansible-q07AAa", > "original_basename": null, > "owner": null, > "path": "/tmp/openshift-ansible-q07AAa", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "path": "/tmp/openshift-ansible-q07AAa", > "state": "absent" >} >2018-06-12 17:07:28,289 p=5860 u=root | TASK [openshift_ca : Restore original serviceaccount keys] ********************************************************************************************************************************************************************************** >2018-06-12 17:07:28,289 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:260 >2018-06-12 17:07:28,308 p=5860 u=root | skipping: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/master/serviceaccounts.private.key) => { > "changed": false, > "item": "/etc/origin/master/serviceaccounts.private.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:28,312 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/master/serviceaccounts.public.key) => { > "changed": false, > "item": "/etc/origin/master/serviceaccounts.public.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:28,320 p=5860 u=root | TASK [openshift_ca : Remove backup serviceaccount keys] ************************************************************************************************************************************************************************************* >2018-06-12 17:07:28,320 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_ca/tasks/main.yml:270 >2018-06-12 17:07:28,340 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/master/serviceaccounts.private.key) => { > "changed": false, > "item": "/etc/origin/master/serviceaccounts.private.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:28,343 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/master/serviceaccounts.public.key) => { > "changed": false, > "item": "/etc/origin/master/serviceaccounts.public.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:28,351 p=5860 u=root | TASK [openshift_master_certificates : Check status of master certificates] ****************************************************************************************************************************************************************** >2018-06-12 17:07:28,351 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:2 >2018-06-12 17:07:28,381 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:28,587 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=admin.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/admin.crt" > } > }, > "item": "admin.crt", > "stat": { > "atime": 1528820586.0932193, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "81d10b2dcf3abb580bca11ee1da82619a198f2ba", > "ctime": 1528820586.092219, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383765, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "1271886790f9b27c09c35b40bbbd7694", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820586.092219, > "nlink": 1, > "path": "/etc/origin/master/admin.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1151, > "uid": 0, > "version": "876052220", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:28,677 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:28,886 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": 
true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/ca.crt" > } > }, > "item": "ca.crt", > "stat": { > "atime": 1528820584.8792188, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "ctime": 1528820584.6422186, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383742, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "b2df5ea175494a55370508d54232e643", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820584.6422186, > "nlink": 1, > "path": "/etc/origin/master/ca.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1070, > "uid": 0, > "version": "18446744073682655378", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:28,902 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:29,111 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca-bundle.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/ca-bundle.crt" > } > }, > "item": "ca-bundle.crt", > "stat": { > "atime": 1528820592.991222, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "ctime": 1528820584.8802187, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 
159383747, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "b2df5ea175494a55370508d54232e643", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820584.8802187, > "nlink": 1, > "path": "/etc/origin/master/ca-bundle.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1070, > "uid": 0, > "version": "1489613721", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:29,126 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:29,337 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=front-proxy-ca.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/front-proxy-ca.crt" > } > }, > "item": "front-proxy-ca.crt", > "stat": { > "atime": 1528820589.3932204, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "a0a8600494737838624d1c3ccba0bf658b77e8e4", > "ctime": 1528820583.8152182, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383739, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "63748b3944cab18b9bd12c84da6c586c", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820583.8152182, > "nlink": 1, > "path": "/etc/origin/master/front-proxy-ca.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > 
"size": 1070, > "uid": 0, > "version": "18446744073202731832", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:29,351 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:29,564 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.kubelet-client.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/master.kubelet-client.crt" > } > }, > "item": "master.kubelet-client.crt", > "stat": { > "atime": 1528820585.4082189, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "c99a9e945ec0234fd6ca55f3f0a393eaaf8bd028", > "ctime": 1528820585.407219, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383751, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "fbcd65edf45529cc09a2a175fd2d67c4", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820585.407219, > "nlink": 1, > "path": "/etc/origin/master/master.kubelet-client.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1135, > "uid": 0, > "version": "18446744071734915953", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:29,580 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:29,792 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => 
(item=master.proxy-client.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/master.proxy-client.crt" > } > }, > "item": "master.proxy-client.crt", > "stat": { > "atime": 1528820594.4322224, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "8082a6af4ad96a88243108df7ff3a684b1ca7919", > "ctime": 1528820585.414219, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383753, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "cd0f339c348851964ed3d8ded6644a57", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820585.414219, > "nlink": 1, > "path": "/etc/origin/master/master.proxy-client.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1086, > "uid": 0, > "version": "18446744072144669614", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:29,812 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:30,053 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.server.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/master.server.crt" > } > }, > "item": "master.server.crt", > "stat": { > "atime": 1528820594.9112227, > "attr_flags": "", > "attributes": [], > 
"block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "618d95f300176964638155cc5bcd80ce036a4b5a", > "ctime": 1528820585.842219, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383761, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "2bbf24408b510f8368f23ce9da7981ad", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820585.842219, > "nlink": 1, > "path": "/etc/origin/master/master.server.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 2636, > "uid": 0, > "version": "2110397965", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:30,068 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:30,275 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=openshift-master.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/openshift-master.crt" > } > }, > "item": "openshift-master.crt", > "stat": { > "atime": 1528820595.3902228, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "50dab574dd33be723c52f07912542f2574ee548a", > "ctime": 1528820585.1032188, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383748, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > 
"md5": "1e708f135f2b11203c2ec7ec13d67122", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820585.1032188, > "nlink": 1, > "path": "/etc/origin/master/openshift-master.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1123, > "uid": 0, > "version": "1386575443", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:30,290 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:30,497 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=service-signer.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/service-signer.crt" > } > }, > "item": "service-signer.crt", > "stat": { > "atime": 1528820595.968223, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "3cfec1e983d41fc7d5ab9c32d93a6800874c48cb", > "ctime": 1528820585.751219, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383759, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "7e9221ae436bd50e8b035df0fbdeeaef", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820585.751219, > "nlink": 1, > "path": "/etc/origin/master/service-signer.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1115, > "uid": 0, > "version": "18446744072900714761", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > 
"xusr": false > } >} >2018-06-12 17:07:30,512 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:30,717 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=aggregator-front-proxy.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/aggregator-front-proxy.crt" > } > }, > "item": "aggregator-front-proxy.crt", > "stat": { > "atime": 1528823247.333369, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "a6733cb830cd5a61657d6f5777d21b81f9cd4f1a", > "ctime": 1528823247.5843697, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383774, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "336dd3987deed1c06daf9967c02ce076", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528823247.332369, > "nlink": 1, > "path": "/etc/origin/master/aggregator-front-proxy.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1090, > "uid": 0, > "version": "1728024912", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:30,737 p=5860 u=root | TASK [openshift_master_certificates : set_fact] ********************************************************************************************************************************************************************************************* >2018-06-12 17:07:30,737 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:19 >2018-06-12 17:07:30,773 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "master_certs_missing": false > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:30,790 p=5860 u=root | TASK [openshift_master_certificates : Ensure the generated_configs directory present] ******************************************************************************************************************************************************* >2018-06-12 17:07:30,790 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:26 >2018-06-12 17:07:30,806 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:30,822 p=5860 u=root | TASK [openshift_master_certificates : find] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:30,822 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:34 >2018-06-12 17:07:30,850 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/find.py >2018-06-12 17:07:31,044 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "examined": 0, > "failed": false, > "files": [], > "invocation": { > "module_args": { > "age": null, > "age_stamp": "mtime", > "contains": null, > "file_type": "file", > "follow": false, > "get_checksum": false, > "hidden": false, > "paths": [ > "/etc/origin/master/legacy-ca/" > ], > "patterns": [ > ".*-ca.crt" > ], > "recurse": false, > "size": null, > "use_regex": true > } > }, > "matched": 0, > 
"msg": "/etc/origin/master/legacy-ca/ was skipped as it does not seem to be a valid directory or it cannot be accessed\n" >} >2018-06-12 17:07:31,069 p=5860 u=root | TASK [openshift_master_certificates : Create the master server certificate] ***************************************************************************************************************************************************************** >2018-06-12 17:07:31,069 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:41 >2018-06-12 17:07:31,109 p=5860 u=root | TASK [openshift_master_certificates : Generate the loopback master client config] *********************************************************************************************************************************************************** >2018-06-12 17:07:31,109 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:65 >2018-06-12 17:07:31,176 p=5860 u=root | TASK [openshift_master_certificates : copy] ************************************************************************************************************************************************************************************************* >2018-06-12 17:07:31,176 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:91 >2018-06-12 17:07:31,205 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=admin.crt) => { > "changed": false, > "item": "admin.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,214 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=admin.key) => { > "changed": false, > "item": "admin.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,218 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=admin.kubeconfig) => { > "changed": false, > 
"item": "admin.kubeconfig", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,229 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=aggregator-front-proxy.crt) => { > "changed": false, > "item": "aggregator-front-proxy.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,235 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=aggregator-front-proxy.key) => { > "changed": false, > "item": "aggregator-front-proxy.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,243 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=aggregator-front-proxy.kubeconfig) => { > "changed": false, > "item": "aggregator-front-proxy.kubeconfig", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,249 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=front-proxy-ca.crt) => { > "changed": false, > "item": "front-proxy-ca.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,256 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=front-proxy-ca.key) => { > "changed": false, > "item": "front-proxy-ca.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,263 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.kubelet-client.crt) => { > "changed": false, > "item": "master.kubelet-client.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,271 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.kubelet-client.key) => { > "changed": false, > "item": "master.kubelet-client.key", > "skip_reason": "Conditional result was False", > "skipped": 
true >} >2018-06-12 17:07:31,279 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.proxy-client.crt) => { > "changed": false, > "item": "master.proxy-client.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,286 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.proxy-client.key) => { > "changed": false, > "item": "master.proxy-client.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,294 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=service-signer.crt) => { > "changed": false, > "item": "service-signer.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,301 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=service-signer.key) => { > "changed": false, > "item": "service-signer.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,310 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca-bundle.crt) => { > "changed": false, > "item": "ca-bundle.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,316 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca.crt) => { > "changed": false, > "item": "ca.crt", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,323 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=ca.key) => { > "changed": false, > "item": "ca.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,330 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=client-ca-bundle.crt) => { > "changed": false, > "item": "client-ca-bundle.crt", > 
"skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,337 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=serviceaccounts.private.key) => { > "changed": false, > "item": "serviceaccounts.private.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,343 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=serviceaccounts.public.key) => { > "changed": false, > "item": "serviceaccounts.public.key", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,362 p=5860 u=root | TASK [openshift_master_certificates : Remove generated etcd client certs when using external etcd] ****************************************************************************************************************************************** >2018-06-12 17:07:31,362 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:119 >2018-06-12 17:07:31,479 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:31,689 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-client.crt) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/etc/origin/generated-configs/master-ip-172-31-50-118.us-west-2.compute.internal/master.etcd-client.crt", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > 
}, > "item": "master.etcd-client.crt", > "path": "/etc/origin/generated-configs/master-ip-172-31-50-118.us-west-2.compute.internal/master.etcd-client.crt", > "state": "absent" >} >2018-06-12 17:07:31,711 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:31,919 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com -> ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master.etcd-client.key) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/etc/origin/generated-configs/master-ip-172-31-50-118.us-west-2.compute.internal/master.etcd-client.key", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "item": "master.etcd-client.key", > "path": "/etc/origin/generated-configs/master-ip-172-31-50-118.us-west-2.compute.internal/master.etcd-client.key", > "state": "absent" >} >2018-06-12 17:07:31,932 p=5860 u=root | TASK [openshift_master_certificates : Create local temp directory for syncing certs] ******************************************************************************************************************************************************** >2018-06-12 17:07:31,932 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:130 >2018-06-12 17:07:31,948 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,960 p=5860 u=root | TASK 
[openshift_master_certificates : Chmod local temp directory for syncing certs] ********************************************************************************************************************************************************* >2018-06-12 17:07:31,960 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:136 >2018-06-12 17:07:31,975 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:31,989 p=5860 u=root | TASK [openshift_master_certificates : Create a tarball of the master certs] ***************************************************************************************************************************************************************** >2018-06-12 17:07:31,990 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:141 >2018-06-12 17:07:32,007 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:32,022 p=5860 u=root | TASK [openshift_master_certificates : Retrieve the master cert tarball from the master] ***************************************************************************************************************************************************** >2018-06-12 17:07:32,022 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:150 >2018-06-12 17:07:32,041 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:32,047 p=5860 u=root | TASK [openshift_master_certificates : Ensure certificate directory exists] 
****************************************************************************************************************************************************************** >2018-06-12 17:07:32,047 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:160 >2018-06-12 17:07:32,061 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:32,069 p=5860 u=root | TASK [openshift_master_certificates : Unarchive the tarball on the master] ****************************************************************************************************************************************************************** >2018-06-12 17:07:32,069 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:166 >2018-06-12 17:07:32,084 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:32,095 p=5860 u=root | TASK [openshift_master_certificates : Delete local temp directory] ************************************************************************************************************************************************************************** >2018-06-12 17:07:32,095 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:172 >2018-06-12 17:07:32,112 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:32,121 p=5860 u=root | TASK [openshift_master_certificates : Lookup default group for ansible_ssh_user] ************************************************************************************************************************************************************ 
>2018-06-12 17:07:32,121 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:177 >2018-06-12 17:07:32,154 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:32,393 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": [ > "/usr/bin/id", > "-g", > "root" > ], > "delta": "0:00:00.002364", > "end": "2018-06-12 17:07:32.379217", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "/usr/bin/id -g root", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:32.376853", > "stderr": "", > "stderr_lines": [], > "stdout": "0", > "stdout_lines": [ > "0" > ] >} >2018-06-12 17:07:32,401 p=5860 u=root | TASK [openshift_master_certificates : set_fact] ********************************************************************************************************************************************************************************************* >2018-06-12 17:07:32,401 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:182 >2018-06-12 17:07:32,431 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "client_users": [ > "root" > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:32,439 p=5860 u=root | TASK [openshift_master_certificates : Create the client config dir(s)] ********************************************************************************************************************************************************************** >2018-06-12 17:07:32,439 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:185 >2018-06-12 17:07:32,471 p=5860 u=root | Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:32,673 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=root) => { > "changed": false, > "diff": { > "after": { > "path": "/root/.kube" > }, > "before": { > "path": "/root/.kube" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/root/.kube", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "root", > "mode": "0700", > "owner": "root", > "path": "/root/.kube", > "secontext": "unconfined_u:object_r:admin_home_t:s0", > "size": 20, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:32,682 p=5860 u=root | TASK [openshift_master_certificates : Copy the admin client config(s)] ********************************************************************************************************************************************************************** >2018-06-12 17:07:32,682 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:196 >2018-06-12 17:07:32,715 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:32,919 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=root) => { > "changed": false, > "dest": "/root/.kube/config", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/root/.kube/config", > "directory_mode": 
null, > "follow": false, > "force": false, > "group": null, > "local_follow": null, > "mode": null, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/etc/origin/master/admin.kubeconfig", > "unsafe_writes": null, > "validate": null > } > }, > "item": "root", > "mode": "0700", > "msg": "file already exists", > "owner": "root", > "secontext": "system_u:object_r:admin_home_t:s0", > "size": 7776, > "src": "/etc/origin/master/admin.kubeconfig", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:32,928 p=5860 u=root | TASK [openshift_master_certificates : Update the permissions on the admin client config(s)] ************************************************************************************************************************************************* >2018-06-12 17:07:32,928 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:204 >2018-06-12 17:07:32,960 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:33,161 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=root) => { > "changed": false, > "diff": { > "after": { > "path": "/root/.kube/config" > }, > "before": { > "path": "/root/.kube/config" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 448, > "original_basename": null, > "owner": "root", > "path": "/root/.kube/config", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "item": "root", > "mode": 
"0700", > "owner": "root", > "path": "/root/.kube/config", > "secontext": "system_u:object_r:admin_home_t:s0", > "size": 7776, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:33,170 p=5860 u=root | TASK [openshift_master_certificates : Check for ca-bundle.crt] ****************************************************************************************************************************************************************************** >2018-06-12 17:07:33,171 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:214 >2018-06-12 17:07:33,199 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:33,404 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "failed_when_result": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/ca-bundle.crt" > } > }, > "stat": { > "atime": 1528820592.991222, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "ctime": 1528820584.8802187, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383747, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "b2df5ea175494a55370508d54232e643", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820584.8802187, > "nlink": 1, > "path": "/etc/origin/master/ca-bundle.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1070, > "uid": 0, > "version": "1489613721", > "wgrp": false, > "woth": false, > "writeable": 
true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:33,413 p=5860 u=root | TASK [openshift_master_certificates : Check for ca.crt] ************************************************************************************************************************************************************************************* >2018-06-12 17:07:33,413 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:220 >2018-06-12 17:07:33,519 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:33,726 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "failed_when_result": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/ca.crt" > } > }, > "stat": { > "atime": 1528820584.8792188, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "714280672400dc09cb70ff722882f186665f6b48", > "ctime": 1528820584.6422186, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 159383742, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "b2df5ea175494a55370508d54232e643", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820584.6422186, > "nlink": 1, > "path": "/etc/origin/master/ca.crt", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1070, > "uid": 0, > "version": "18446744073682655378", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} 
>2018-06-12 17:07:33,735 p=5860 u=root | TASK [openshift_master_certificates : Migrate ca.crt to ca-bundle.crt] ********************************************************************************************************************************************************************** >2018-06-12 17:07:33,735 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:226 >2018-06-12 17:07:33,749 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:33,757 p=5860 u=root | TASK [openshift_master_certificates : Link ca.crt to ca-bundle.crt] ************************************************************************************************************************************************************************* >2018-06-12 17:07:33,757 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_certificates/tasks/main.yml:232 >2018-06-12 17:07:33,771 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:33,772 p=5860 u=root | META: ran handlers >2018-06-12 17:07:33,772 p=5860 u=root | META: ran handlers >2018-06-12 17:07:33,776 p=5860 u=root | PLAY [Disable excluders and gather facts] *************************************************************************************************************************************************************************************************** >2018-06-12 17:07:33,784 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:33,810 p=5860 u=root | Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:07:34,197 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:07:34,215 p=5860 u=root | META: ran handlers >2018-06-12 17:07:34,222 p=5860 u=root | TASK [openshift_excluder : Detecting Atomic Host Operating System] ************************************************************************************************************************************************************************** >2018-06-12 17:07:34,222 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/main.yml:2 >2018-06-12 17:07:34,248 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:34,445 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/run/ostree-booted" > } > }, > "stat": { > "exists": false > } >} >2018-06-12 17:07:34,453 p=5860 u=root | TASK [openshift_excluder : Debug r_openshift_excluder_enable_docker_excluder] *************************************************************************************************************************************************************** >2018-06-12 17:07:34,453 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/main.yml:9 >2018-06-12 17:07:34,488 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "r_openshift_excluder_enable_docker_excluder": true >} >2018-06-12 17:07:34,495 p=5860 u=root | TASK [openshift_excluder : Debug r_openshift_excluder_enable_openshift_excluder] ************************************************************************************************************************************************************ >2018-06-12 17:07:34,495 
p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/main.yml:13 >2018-06-12 17:07:34,529 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "r_openshift_excluder_enable_openshift_excluder": true >} >2018-06-12 17:07:34,536 p=5860 u=root | TASK [openshift_excluder : Fail if invalid openshift_excluder_action provided] ************************************************************************************************************************************************************** >2018-06-12 17:07:34,536 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/main.yml:17 >2018-06-12 17:07:34,551 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,558 p=5860 u=root | TASK [openshift_excluder : Fail if r_openshift_excluder_upgrade_target is not defined] ****************************************************************************************************************************************************** >2018-06-12 17:07:34,559 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/main.yml:22 >2018-06-12 17:07:34,575 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,582 p=5860 u=root | TASK [openshift_excluder : Include main action task file] *********************************************************************************************************************************************************************************** >2018-06-12 17:07:34,582 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/main.yml:29 >2018-06-12 17:07:34,610 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_upgrade.yml >2018-06-12 
17:07:34,614 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml >2018-06-12 17:07:34,623 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml >2018-06-12 17:07:34,634 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml >2018-06-12 17:07:34,644 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml >2018-06-12 17:07:34,653 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml >2018-06-12 17:07:34,660 p=5860 u=root | statically imported: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml >2018-06-12 17:07:34,664 p=5860 u=root | included: /root/openshift-ansible/roles/openshift_excluder/tasks/disable.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:07:34,678 p=5860 u=root | TASK [openshift_excluder : Get available excluder version] ********************************************************************************************************************************************************************************** >2018-06-12 17:07:34,678 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:4 >2018-06-12 17:07:34,690 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,697 p=5860 u=root | TASK [openshift_excluder : Fail when excluder package is not found] ************************************************************************************************************************************************************************* >2018-06-12 17:07:34,698 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:10 >2018-06-12 17:07:34,709 
p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,716 p=5860 u=root | TASK [openshift_excluder : Set fact excluder_version] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:34,716 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:15 >2018-06-12 17:07:34,728 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,738 p=5860 u=root | TASK [openshift_excluder : atomic-openshift-docker-excluder version detected] *************************************************************************************************************************************************************** >2018-06-12 17:07:34,738 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:19 >2018-06-12 17:07:34,751 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:07:34,758 p=5860 u=root | TASK [openshift_excluder : Printing upgrade target version] ********************************************************************************************************************************************************************************* >2018-06-12 17:07:34,758 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:23 >2018-06-12 17:07:34,770 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:07:34,780 p=5860 u=root | TASK [openshift_excluder : Check the 
available atomic-openshift-docker-excluder version is at most of the upgrade target version] *********************************************************************************************************** >2018-06-12 17:07:34,780 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:27 >2018-06-12 17:07:34,792 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,800 p=5860 u=root | TASK [openshift_excluder : Get available excluder version] ********************************************************************************************************************************************************************************** >2018-06-12 17:07:34,800 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:4 >2018-06-12 17:07:34,812 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,819 p=5860 u=root | TASK [openshift_excluder : Fail when excluder package is not found] ************************************************************************************************************************************************************************* >2018-06-12 17:07:34,820 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:10 >2018-06-12 17:07:34,832 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,839 p=5860 u=root | TASK [openshift_excluder : Set fact excluder_version] 
*************************************************************************************************************************************************************************************** >2018-06-12 17:07:34,839 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:15 >2018-06-12 17:07:34,852 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,862 p=5860 u=root | TASK [openshift_excluder : atomic-openshift-excluder version detected] ********************************************************************************************************************************************************************** >2018-06-12 17:07:34,862 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:19 >2018-06-12 17:07:34,874 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:07:34,881 p=5860 u=root | TASK [openshift_excluder : Printing upgrade target version] ********************************************************************************************************************************************************************************* >2018-06-12 17:07:34,882 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:23 >2018-06-12 17:07:34,894 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:07:34,904 p=5860 u=root | TASK [openshift_excluder : Check the available atomic-openshift-excluder version is at most of the upgrade target version] ****************************************************************************************************************** >2018-06-12 17:07:34,904 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_excluder/tasks/verify_excluder.yml:27 >2018-06-12 17:07:34,917 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:34,924 p=5860 u=root | TASK [openshift_excluder : Check for docker-excluder] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:34,924 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:6 >2018-06-12 17:07:34,953 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:35,163 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/sbin/atomic-openshift-docker-excluder" > } > }, > "stat": { > "atime": 1528785386.3050418, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "b9cd3139c5b201a8580659ad5faa8af5430fd079", > "ctime": 1528785386.2780416, > "dev": 66307, > "device_type": 0, > "executable": true, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 757286, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "83a4b3c216ec5f9f3a743ea3b6ffc554", > "mimetype": "text/x-shellscript", > "mode": "0744", > "mtime": 1528745090.0, > "nlink": 1, > "path": "/sbin/atomic-openshift-docker-excluder", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 2471, 
> "uid": 0, > "version": "1099280457", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": true > } >} >2018-06-12 17:07:35,171 p=5860 u=root | TASK [openshift_excluder : disable docker excluder] ***************************************************************************************************************************************************************************************** >2018-06-12 17:07:35,172 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:11 >2018-06-12 17:07:35,210 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:35,440 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "/sbin/atomic-openshift-docker-excluder", > "unexclude" > ], > "delta": "0:00:00.031206", > "end": "2018-06-12 17:07:35.423526", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "/sbin/atomic-openshift-docker-excluder unexclude", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:35.392320", > "stderr": "", > "stderr_lines": [], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:07:35,448 p=5860 u=root | TASK [openshift_excluder : Check for openshift excluder] ************************************************************************************************************************************************************************************ >2018-06-12 17:07:35,448 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:17 >2018-06-12 17:07:35,480 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:35,698 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > 
"changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/sbin/atomic-openshift-excluder" > } > }, > "stat": { > "atime": 1528785393.5521164, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "333d3b466bd2eb1bdf4b729f7ecc6cfed169f854", > "ctime": 1528785393.525116, > "dev": 66307, > "device_type": 0, > "executable": true, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 757298, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "22497099e443cfda7b96a57e975a4176", > "mimetype": "text/x-shellscript", > "mode": "0744", > "mtime": 1528745090.0, > "nlink": 1, > "path": "/sbin/atomic-openshift-excluder", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 2603, > "uid": 0, > "version": "754120915", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": true > } >} >2018-06-12 17:07:35,707 p=5860 u=root | TASK [openshift_excluder : disable openshift excluder] ************************************************************************************************************************************************************************************** >2018-06-12 17:07:35,707 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:22 >2018-06-12 17:07:35,746 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:35,971 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "/sbin/atomic-openshift-excluder", > "unexclude" > ], > "delta": "0:00:00.025476", > 
"end": "2018-06-12 17:07:35.956294", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "/sbin/atomic-openshift-excluder unexclude", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:35.930818", > "stderr": "", > "stderr_lines": [], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:07:35,979 p=5860 u=root | TASK [openshift_excluder : Install docker excluder - yum] *********************************************************************************************************************************************************************************** >2018-06-12 17:07:35,979 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml:9 >2018-06-12 17:07:36,106 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:38,243 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "atomic-openshift-docker-excluder-3.10.0**" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "atomic-openshift-docker-excluder-3.10.0-0.66.0.git.0.c9a4e2b.el7.noarch providing atomic-openshift-docker-excluder-3.10.0** is already installed" > ] >} >2018-06-12 17:07:38,251 p=5860 u=root | TASK [openshift_excluder : Install docker excluder - dnf] 
*********************************************************************************************************************************************************************************** >2018-06-12 17:07:38,251 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml:24 >2018-06-12 17:07:38,276 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:38,283 p=5860 u=root | TASK [openshift_excluder : Install openshift excluder - yum] ******************************************************************************************************************************************************************************** >2018-06-12 17:07:38,283 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml:34 >2018-06-12 17:07:38,410 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:40,560 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "atomic-openshift-excluder-3.10.0**" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "atomic-openshift-excluder-3.10.0-0.66.0.git.0.c9a4e2b.el7.noarch providing atomic-openshift-excluder-3.10.0** is already installed" > ] >} >2018-06-12 17:07:40,568 p=5860 u=root | TASK [openshift_excluder : Install openshift excluder - dnf] 
******************************************************************************************************************************************************************************** >2018-06-12 17:07:40,568 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml:48 >2018-06-12 17:07:40,593 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:40,601 p=5860 u=root | TASK [openshift_excluder : set_fact] ******************************************************************************************************************************************************************************************************** >2018-06-12 17:07:40,601 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/install.yml:58 >2018-06-12 17:07:40,727 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "r_openshift_excluder_install_ran": true > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:40,734 p=5860 u=root | TASK [openshift_excluder : Check for docker-excluder] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:40,735 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:2 >2018-06-12 17:07:40,854 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:41,067 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": 
"/sbin/atomic-openshift-docker-excluder" > } > }, > "stat": { > "atime": 1528785386.3050418, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "b9cd3139c5b201a8580659ad5faa8af5430fd079", > "ctime": 1528785386.2780416, > "dev": 66307, > "device_type": 0, > "executable": true, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 757286, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "83a4b3c216ec5f9f3a743ea3b6ffc554", > "mimetype": "text/x-shellscript", > "mode": "0744", > "mtime": 1528745090.0, > "nlink": 1, > "path": "/sbin/atomic-openshift-docker-excluder", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 2471, > "uid": 0, > "version": "1099280457", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": true > } >} >2018-06-12 17:07:41,076 p=5860 u=root | TASK [openshift_excluder : Enable docker excluder] ****************************************************************************************************************************************************************************************** >2018-06-12 17:07:41,076 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:7 >2018-06-12 17:07:41,204 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:41,438 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "/sbin/atomic-openshift-docker-excluder", > "exclude" > ], > "delta": "0:00:00.029356", > "end": "2018-06-12 17:07:41.423375", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "/sbin/atomic-openshift-docker-excluder exclude", > "_uses_shell": false, > "chdir": null, > 
"creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:41.394019", > "stderr": "", > "stderr_lines": [], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:07:41,446 p=5860 u=root | TASK [openshift_excluder : Check for openshift excluder] ************************************************************************************************************************************************************************************ >2018-06-12 17:07:41,447 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:13 >2018-06-12 17:07:41,568 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:41,782 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/sbin/atomic-openshift-excluder" > } > }, > "stat": { > "atime": 1528785393.5521164, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "333d3b466bd2eb1bdf4b729f7ecc6cfed169f854", > "ctime": 1528785393.525116, > "dev": 66307, > "device_type": 0, > "executable": true, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 757298, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "22497099e443cfda7b96a57e975a4176", > "mimetype": "text/x-shellscript", > "mode": "0744", > "mtime": 1528745090.0, > "nlink": 1, > "path": "/sbin/atomic-openshift-excluder", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 2603, > "uid": 0, > "version": "754120915", > "wgrp": false, > 
"woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": true > } >} >2018-06-12 17:07:41,790 p=5860 u=root | TASK [openshift_excluder : Enable openshift excluder] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:41,791 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/exclude.yml:18 >2018-06-12 17:07:41,828 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:42,066 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "/sbin/atomic-openshift-excluder", > "exclude" > ], > "delta": "0:00:00.035931", > "end": "2018-06-12 17:07:42.051108", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "/sbin/atomic-openshift-excluder exclude", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:42.015177", > "stderr": "", > "stderr_lines": [], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:07:42,074 p=5860 u=root | TASK [openshift_excluder : Check for docker-excluder] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:42,075 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:6 >2018-06-12 17:07:42,104 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:42,316 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > 
"checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/sbin/atomic-openshift-docker-excluder" > } > }, > "stat": { > "atime": 1528785386.3050418, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "b9cd3139c5b201a8580659ad5faa8af5430fd079", > "ctime": 1528785386.2780416, > "dev": 66307, > "device_type": 0, > "executable": true, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 757286, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "83a4b3c216ec5f9f3a743ea3b6ffc554", > "mimetype": "text/x-shellscript", > "mode": "0744", > "mtime": 1528745090.0, > "nlink": 1, > "path": "/sbin/atomic-openshift-docker-excluder", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 2471, > "uid": 0, > "version": "1099280457", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": true > } >} >2018-06-12 17:07:42,325 p=5860 u=root | TASK [openshift_excluder : disable docker excluder] ***************************************************************************************************************************************************************************************** >2018-06-12 17:07:42,325 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:11 >2018-06-12 17:07:42,339 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:42,347 p=5860 u=root | TASK [openshift_excluder : Check for openshift excluder] 
************************************************************************************************************************************************************************************ >2018-06-12 17:07:42,347 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:17 >2018-06-12 17:07:42,377 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:42,591 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/sbin/atomic-openshift-excluder" > } > }, > "stat": { > "atime": 1528785393.5521164, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "333d3b466bd2eb1bdf4b729f7ecc6cfed169f854", > "ctime": 1528785393.525116, > "dev": 66307, > "device_type": 0, > "executable": true, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 757298, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "22497099e443cfda7b96a57e975a4176", > "mimetype": "text/x-shellscript", > "mode": "0744", > "mtime": 1528745090.0, > "nlink": 1, > "path": "/sbin/atomic-openshift-excluder", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 2603, > "uid": 0, > "version": "754120915", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": true > } >} >2018-06-12 17:07:42,600 p=5860 u=root | TASK [openshift_excluder : disable openshift excluder] 
************************************************************************************************************************************************************************************** >2018-06-12 17:07:42,600 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_excluder/tasks/unexclude.yml:22 >2018-06-12 17:07:42,732 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:42,974 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "/sbin/atomic-openshift-excluder", > "unexclude" > ], > "delta": "0:00:00.035456", > "end": "2018-06-12 17:07:42.959990", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "/sbin/atomic-openshift-excluder unexclude", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:42.924534", > "stderr": "", > "stderr_lines": [], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:07:42,982 p=5860 u=root | TASK [Check for RPM generated config marker file .config_managed] *************************************************************************************************************************************************************************** >2018-06-12 17:07:42,982 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-master/private/config.yml:29 >2018-06-12 17:07:43,100 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:43,304 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/.config_managed" > } > }, > "stat": { > 
"exists": false > } >} >2018-06-12 17:07:43,313 p=5860 u=root | TASK [Remove RPM generated config files if present] ***************************************************************************************************************************************************************************************** >2018-06-12 17:07:43,313 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-master/private/config.yml:34 >2018-06-12 17:07:43,330 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=master) => { > "changed": false, > "item": "master", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:43,333 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=node) => { > "changed": false, > "item": "node", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:43,337 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=.config_managed) => { > "changed": false, > "item": ".config_managed", > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:43,344 p=5860 u=root | TASK [openshift_facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:43,344 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-master/private/config.yml:46 >2018-06-12 17:07:43,474 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:44,009 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": 
{}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > 
{ > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > 
"ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": 
"ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "api_port": "8443", > "api_url": "", > "api_use_ssl": "", > "cluster_hostname": "", > "cluster_public_hostname": "", > "console_path": "", > "console_port": "", > "console_url": "", > "console_use_ssl": "", > "controllers_port": "", > "public_api_url": "", > "public_console_url": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "master", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:44,022 p=5860 u=root | TASK [openshift_facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:44,023 p=5860 u=root | task path: /root/openshift-ansible/playbooks/openshift-master/private/config.yml:61 >2018-06-12 17:07:44,050 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:44,582 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > 
"configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > 
} > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > 
"ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": 
false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "bootstrapped": true > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "node", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:44,590 p=5860 u=root | META: ran handlers >2018-06-12 17:07:44,590 p=5860 u=root | META: ran handlers >2018-06-12 17:07:44,594 p=5860 u=root | PLAY [Generate or retrieve existing session secrets] **************************************************************************************************************************************************************************************** >2018-06-12 17:07:44,602 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:44,720 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:07:45,106 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:07:45,124 p=5860 u=root | META: ran handlers >2018-06-12 17:07:45,131 p=5860 u=root | TASK [openshift_control_plane : Determine if sessions secrets already in place] ************************************************************************************************************************************************************* >2018-06-12 17:07:45,131 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_control_plane/tasks/generate_session_secrets.yml:5 >2018-06-12 17:07:45,247 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:45,465 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/session-secrets.yaml" > } > }, > "stat": { > "atime": 1528820644.6222153, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "606b2ebe2d092bc6f4defab5e704301ba82611cf", > "ctime": 1528820644.625215, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 88080795, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "58131fe33d0d0fa55cdd3e84ae975e24", > "mimetype": "text/plain", > "mode": "0600", > "mtime": 1528820643.6722155, > "nlink": 1, > "path": "/etc/origin/master/session-secrets.yaml", > "pw_name": "root", > "readable": true, > "rgrp": false, > "roth": false, > "rusr": true, > "size": 147, > "uid": 0, > "version": "18446744073082151654", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:45,474 p=5860 u=root | TASK [openshift_control_plane : slurp session secrets if defined] *************************************************************************************************************************************************************************** >2018-06-12 17:07:45,474 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_control_plane/tasks/generate_session_secrets.yml:10 >2018-06-12 17:07:45,595 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py >2018-06-12 17:07:45,801 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", > "changed": false, > "failed": false >} >2018-06-12 17:07:45,810 p=5860 u=root | TASK [openshift_control_plane : Gather existing session secrets from first master] ********************************************************************************************************************************************************** >2018-06-12 17:07:45,811 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/generate_session_secrets.yml:19 >2018-06-12 17:07:45,861 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "l_osm_session_auth_secrets": [ > "IIxr4yNhDb+vzt0X8KLiAcxw7TiYLR7j" > ], > "l_osm_session_encryption_secrets": [ > "IIxr4yNhDb+vzt0X8KLiAcxw7TiYLR7j" > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:45,869 p=5860 u=root | TASK [openshift_control_plane : setup session secrets if not defined] *********************************************************************************************************************************************************************** >2018-06-12 17:07:45,869 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/generate_session_secrets.yml:33 >2018-06-12 17:07:45,884 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:45,885 p=5860 u=root | META: ran handlers >2018-06-12 17:07:45,885 p=5860 u=root | META: ran handlers >2018-06-12 17:07:45,892 
p=5860 u=root | PLAY [Configure masters] ******************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:45,923 p=5860 u=root | TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************** >2018-06-12 17:07:46,045 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py >2018-06-12 17:07:46,432 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] >2018-06-12 17:07:46,457 p=5860 u=root | TASK [openshift_node_group : create node config template] *********************************************************************************************************************************************************************************** >2018-06-12 17:07:46,457 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_node_group/tasks/bootstrap.yml:2 >2018-06-12 17:07:46,639 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:46,814 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:47,035 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "b74c893023c875b58c48dc45cfd1207218537a6c", > "dest": "/etc/origin/node/bootstrap-node-config.yaml", > "diff": { > "after": { > "path": "/etc/origin/node/bootstrap-node-config.yaml" > }, > "before": { > "path": "/etc/origin/node/bootstrap-node-config.yaml" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "dest": 
"/etc/origin/node/bootstrap-node-config.yaml", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 384, > "original_basename": "node-config.yaml.j2", > "owner": null, > "path": "/etc/origin/node/bootstrap-node-config.yaml", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "node-config.yaml.j2", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0600", > "owner": "root", > "path": "/etc/origin/node/bootstrap-node-config.yaml", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1524, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:47,044 p=5860 u=root | TASK [openshift_node_group : remove existing node config] *********************************************************************************************************************************************************************************** >2018-06-12 17:07:47,044 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_node_group/tasks/bootstrap.yml:8 >2018-06-12 17:07:47,077 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:47,286 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "dest": "/etc/origin/node/node-config.yaml", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/etc/origin/node/node-config.yaml", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "path": 
"/etc/origin/node/node-config.yaml", > "state": "absent" >} >2018-06-12 17:07:47,294 p=5860 u=root | TASK [openshift_node_group : Ensure required directories are present] *********************************************************************************************************************************************************************** >2018-06-12 17:07:47,295 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_node_group/tasks/bootstrap_config.yml:2 >2018-06-12 17:07:47,325 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:47,557 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/node/pods) => { > "changed": true, > "diff": { > "after": { > "mode": "0755", > "path": "/etc/origin/node/pods" > }, > "before": { > "mode": "0700", > "path": "/etc/origin/node/pods" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 493, > "original_basename": null, > "owner": "root", > "path": "/etc/origin/node/pods", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/origin/node/pods", > "mode": "0755", > "owner": "root", > "path": "/etc/origin/node/pods", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 68, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:47,558 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:47,760 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/node/certificates) => { > "changed": false, > 
"diff": { > "after": { > "path": "/etc/origin/node/certificates" > }, > "before": { > "path": "/etc/origin/node/certificates" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 493, > "original_basename": null, > "owner": "root", > "path": "/etc/origin/node/certificates", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/origin/node/certificates", > "mode": "0755", > "owner": "root", > "path": "/etc/origin/node/certificates", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 86, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:47,771 p=5860 u=root | TASK [openshift_node_group : Update the sysconfig to group "node-config-master"] ************************************************************************************************************************************************************ >2018-06-12 17:07:47,771 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_node_group/tasks/bootstrap_config.yml:12 >2018-06-12 17:07:47,891 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/lineinfile.py >2018-06-12 17:07:48,094 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "backup": "", > "changed": false, > "diff": [ > { > "after": "", > "after_header": "/etc/sysconfig/atomic-openshift-node (content)", > "before": "", > "before_header": "/etc/sysconfig/atomic-openshift-node (content)" > }, > { > "after_header": "/etc/sysconfig/atomic-openshift-node (file attributes)", > "before_header": "/etc/sysconfig/atomic-openshift-node (file attributes)" > } > ], > 
"failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backrefs": false, > "backup": false, > "content": null, > "create": false, > "delimiter": null, > "dest": "/etc/sysconfig/atomic-openshift-node", > "directory_mode": null, > "follow": false, > "force": null, > "group": null, > "insertafter": null, > "insertbefore": null, > "line": "BOOTSTRAP_CONFIG_NAME=node-config-master", > "mode": null, > "owner": null, > "path": "/etc/sysconfig/atomic-openshift-node", > "regexp": "^BOOTSTRAP_CONFIG_NAME=.*", > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "present", > "unsafe_writes": null, > "validate": null > } > }, > "msg": "" >} >2018-06-12 17:07:48,095 p=5860 u=root | META: ran handlers >2018-06-12 17:07:48,103 p=5860 u=root | TASK [openshift_master_facts : Verify required variables are set] *************************************************************************************************************************************************************************** >2018-06-12 17:07:48,104 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:2 >2018-06-12 17:07:48,119 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:48,127 p=5860 u=root | TASK [openshift_master_facts : Set g_metrics_hostname] ************************************************************************************************************************************************************************************** >2018-06-12 17:07:48,127 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:14 >2018-06-12 17:07:48,156 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "g_metrics_hostname": "hawkular-metrics.apps.0612-g-9.qe.rhcloud.com" > }, > 
"changed": false, > "failed": false >} >2018-06-12 17:07:48,165 p=5860 u=root | TASK [openshift_master_facts : set_fact] **************************************************************************************************************************************************************************************************** >2018-06-12 17:07:48,165 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:20 >2018-06-12 17:07:48,179 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:48,187 p=5860 u=root | TASK [openshift_master_facts : Set master facts] ******************************************************************************************************************************************************************************************** >2018-06-12 17:07:48,187 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:24 >2018-06-12 17:07:48,236 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:48,774 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > 
"openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": 
"https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" 
encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > 
"local_facts": { > "admission_plugin_config": { > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": "", > "api_url": "", > "api_use_ssl": "", > "bind_addr": "", > "cluster_hostname": "", > "cluster_public_hostname": "", > "console_path": "", > "console_port": "", > "console_url": "", > "console_use_ssl": "", > "controller_args": "", > "disabled_features": "", > "image_policy_config": "", > "kube_admission_plugin_config": "", > "ldap_ca": "", > "logging_public_url": "", > "logout_url": "", > "openid_ca": "", > "public_api_url": "", > "public_console_url": "", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": "", > "session_name": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "master", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:48,787 p=5860 u=root | TASK [openshift_master_facts : Determine if scheduler config present] *********************************************************************************************************************************************************************** >2018-06-12 17:07:48,787 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:55 >2018-06-12 17:07:48,817 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:49,033 p=5860 u=root | ok: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "checksum_algorithm": "sha1", > "follow": false, > "get_attributes": true, > "get_checksum": true, > "get_md5": true, > "get_mime": true, > "path": "/etc/origin/master/scheduler.json" > } > }, > "stat": { > "atime": 1528823236.1373708, > "attr_flags": "", > "attributes": [], > "block_size": 4096, > "blocks": 8, > "charset": "us-ascii", > "checksum": "2953b4037c413a5acf89bf5ef868119b889f6fa4", > "ctime": 1528820640.9192166, > "dev": 66307, > "device_type": 0, > "executable": false, > "exists": true, > "gid": 0, > "gr_name": "root", > "inode": 71303256, > "isblk": false, > "ischr": false, > "isdir": false, > "isfifo": false, > "isgid": false, > "islnk": false, > "isreg": true, > "issock": false, > "isuid": false, > "md5": "7679d40705888fa1f567ec8d3ff89dad", > "mimetype": "text/plain", > "mode": "0644", > "mtime": 1528820640.170217, > "nlink": 1, > "path": "/etc/origin/master/scheduler.json", > "pw_name": "root", > "readable": true, > "rgrp": true, > "roth": true, > "rusr": true, > "size": 1923, > "uid": 0, > "version": "18446744072291978203", > "wgrp": false, > "woth": false, > "writeable": true, > "wusr": true, > "xgrp": false, > "xoth": false, > "xusr": false > } >} >2018-06-12 17:07:49,042 p=5860 u=root | TASK [openshift_master_facts : Set Default scheduler predicates and priorities] ************************************************************************************************************************************************************* >2018-06-12 17:07:49,042 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:60 >2018-06-12 17:07:49,079 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_scheduler_default_predicates": [ > { > "name": "NoVolumeZoneConflict" > }, > { > "name": "MaxEBSVolumeCount" > }, > { > "name": 
"MaxGCEPDVolumeCount" > }, > { > "name": "MaxAzureDiskVolumeCount" > }, > { > "name": "MatchInterPodAffinity" > }, > { > "name": "NoDiskConflict" > }, > { > "name": "GeneralPredicates" > }, > { > "name": "PodToleratesNodeTaints" > }, > { > "name": "CheckNodeMemoryPressure" > }, > { > "name": "CheckNodeDiskPressure" > }, > { > "name": "CheckVolumeBinding" > }, > { > "argument": { > "serviceAffinity": { > "labels": [ > "region" > ] > } > }, > "name": "Region" > } > ], > "openshift_master_scheduler_default_priorities": [ > { > "name": "SelectorSpreadPriority", > "weight": 1 > }, > { > "name": "InterPodAffinityPriority", > "weight": 1 > }, > { > "name": "LeastRequestedPriority", > "weight": 1 > }, > { > "name": "BalancedResourceAllocation", > "weight": 1 > }, > { > "name": "NodePreferAvoidPodsPriority", > "weight": 10000 > }, > { > "name": "NodeAffinityPriority", > "weight": 1 > }, > { > "name": "TaintTolerationPriority", > "weight": 1 > }, > { > "argument": { > "serviceAntiAffinity": { > "label": "zone" > } > }, > "name": "Zone", > "weight": 2 > } > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:49,088 p=5860 u=root | TASK [openshift_master_facts : Retrieve current scheduler config] *************************************************************************************************************************************************************************** >2018-06-12 17:07:49,089 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:68 >2018-06-12 17:07:49,121 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/net_tools/basics/slurp.py >2018-06-12 17:07:49,316 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "content": 
"ewogICAgImFwaVZlcnNpb24iOiAidjEiLCAKICAgICJraW5kIjogIlBvbGljeSIsIAogICAgInByZWRpY2F0ZXMiOiBbCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJOb1ZvbHVtZVpvbmVDb25mbGljdCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEVCU1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF4R0NFUERWb2x1bWVDb3VudCIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk1heEF6dXJlRGlza1ZvbHVtZUNvdW50IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTWF0Y2hJbnRlclBvZEFmZmluaXR5IgogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTm9EaXNrQ29uZmxpY3QiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJHZW5lcmFsUHJlZGljYXRlcyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlBvZFRvbGVyYXRlc05vZGVUYWludHMiCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJDaGVja05vZGVNZW1vcnlQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrTm9kZURpc2tQcmVzc3VyZSIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkNoZWNrVm9sdW1lQmluZGluZyIKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJhcmd1bWVudCI6IHsKICAgICAgICAgICAgICAgICJzZXJ2aWNlQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVscyI6IFsKICAgICAgICAgICAgICAgICAgICAgICAgInJlZ2lvbiIKICAgICAgICAgICAgICAgICAgICBdCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJSZWdpb24iCiAgICAgICAgfQogICAgXSwgCiAgICAicHJpb3JpdGllcyI6IFsKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIlNlbGVjdG9yU3ByZWFkUHJpb3JpdHkiLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIkludGVyUG9kQWZmaW5pdHlQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiTGVhc3RSZXF1ZXN0ZWRQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgIm5hbWUiOiAiQmFsYW5jZWRSZXNvdXJjZUFsbG9jYXRpb24iLCAKICAgICAgICAgICAgIndlaWdodCI6IDEKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVQcmVmZXJBdm9pZFBvZHNQcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h
0IjogMTAwMDAKICAgICAgICB9LCAKICAgICAgICB7CiAgICAgICAgICAgICJuYW1lIjogIk5vZGVBZmZpbml0eVByaW9yaXR5IiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAxCiAgICAgICAgfSwgCiAgICAgICAgewogICAgICAgICAgICAibmFtZSI6ICJUYWludFRvbGVyYXRpb25Qcmlvcml0eSIsIAogICAgICAgICAgICAid2VpZ2h0IjogMQogICAgICAgIH0sIAogICAgICAgIHsKICAgICAgICAgICAgImFyZ3VtZW50IjogewogICAgICAgICAgICAgICAgInNlcnZpY2VBbnRpQWZmaW5pdHkiOiB7CiAgICAgICAgICAgICAgICAgICAgImxhYmVsIjogInpvbmUiCiAgICAgICAgICAgICAgICB9CiAgICAgICAgICAgIH0sIAogICAgICAgICAgICAibmFtZSI6ICJab25lIiwgCiAgICAgICAgICAgICJ3ZWlnaHQiOiAyCiAgICAgICAgfQogICAgXQp9", > "encoding": "base64", > "failed": false, > "invocation": { > "module_args": { > "src": "/etc/origin/master/scheduler.json" > } > }, > "source": "/etc/origin/master/scheduler.json" >} >2018-06-12 17:07:49,324 p=5860 u=root | TASK [openshift_master_facts : Set openshift_master_scheduler_current_config] *************************************************************************************************************************************************************** >2018-06-12 17:07:49,324 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:73 >2018-06-12 17:07:49,359 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_scheduler_current_config": { > "apiVersion": "v1", > "kind": "Policy", > "predicates": [ > { > "name": "NoVolumeZoneConflict" > }, > { > "name": "MaxEBSVolumeCount" > }, > { > "name": "MaxGCEPDVolumeCount" > }, > { > "name": "MaxAzureDiskVolumeCount" > }, > { > "name": "MatchInterPodAffinity" > }, > { > "name": "NoDiskConflict" > }, > { > "name": "GeneralPredicates" > }, > { > "name": "PodToleratesNodeTaints" > }, > { > "name": "CheckNodeMemoryPressure" > }, > { > "name": "CheckNodeDiskPressure" > }, > { > "name": "CheckVolumeBinding" > }, > { > "argument": { > "serviceAffinity": { > "labels": [ > "region" > ] > } > }, > "name": "Region" > } > ], > "priorities": [ > { > "name": 
"SelectorSpreadPriority", > "weight": 1 > }, > { > "name": "InterPodAffinityPriority", > "weight": 1 > }, > { > "name": "LeastRequestedPriority", > "weight": 1 > }, > { > "name": "BalancedResourceAllocation", > "weight": 1 > }, > { > "name": "NodePreferAvoidPodsPriority", > "weight": 10000 > }, > { > "name": "NodeAffinityPriority", > "weight": 1 > }, > { > "name": "TaintTolerationPriority", > "weight": 1 > }, > { > "argument": { > "serviceAntiAffinity": { > "label": "zone" > } > }, > "name": "Zone", > "weight": 2 > } > ] > } > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:49,369 p=5860 u=root | TASK [openshift_master_facts : Test if scheduler config is readable] ************************************************************************************************************************************************************************ >2018-06-12 17:07:49,369 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:77 >2018-06-12 17:07:49,385 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:49,394 p=5860 u=root | TASK [openshift_master_facts : Set current scheduler predicates and priorities] ************************************************************************************************************************************************************* >2018-06-12 17:07:49,394 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_master_facts/tasks/main.yml:82 >2018-06-12 17:07:49,430 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift_master_scheduler_current_predicates": [ > { > "name": "NoVolumeZoneConflict" > }, > { > "name": "MaxEBSVolumeCount" > }, > { > "name": "MaxGCEPDVolumeCount" > }, > { > "name": "MaxAzureDiskVolumeCount" > }, > { > "name": "MatchInterPodAffinity" > }, > { > "name": "NoDiskConflict" > 
}, > { > "name": "GeneralPredicates" > }, > { > "name": "PodToleratesNodeTaints" > }, > { > "name": "CheckNodeMemoryPressure" > }, > { > "name": "CheckNodeDiskPressure" > }, > { > "name": "CheckVolumeBinding" > }, > { > "argument": { > "serviceAffinity": { > "labels": [ > "region" > ] > } > }, > "name": "Region" > } > ], > "openshift_master_scheduler_current_priorities": [ > { > "name": "SelectorSpreadPriority", > "weight": 1 > }, > { > "name": "InterPodAffinityPriority", > "weight": 1 > }, > { > "name": "LeastRequestedPriority", > "weight": 1 > }, > { > "name": "BalancedResourceAllocation", > "weight": 1 > }, > { > "name": "NodePreferAvoidPodsPriority", > "weight": 10000 > }, > { > "name": "NodeAffinityPriority", > "weight": 1 > }, > { > "name": "TaintTolerationPriority", > "weight": 1 > }, > { > "argument": { > "serviceAntiAffinity": { > "label": "zone" > } > }, > "name": "Zone", > "weight": 2 > } > ] > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:49,439 p=5860 u=root | TASK [openshift_cloud_provider : Set cloud provider facts] ********************************************************************************************************************************************************************************** >2018-06-12 17:07:49,440 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cloud_provider/tasks/main.yml:2 >2018-06-12 17:07:49,473 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:49,993 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > 
} > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > 
}, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": 
"ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": 
{ > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "kind": "aws" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "cloudprovider", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:50,007 p=5860 u=root | TASK [openshift_cloud_provider : Create cloudprovider config dir] *************************************************************************************************************************************************************************** >2018-06-12 17:07:50,007 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cloud_provider/tasks/main.yml:8 >2018-06-12 17:07:50,038 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:50,244 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/etc/origin/cloudprovider" > }, > "before": { > "path": "/etc/origin/cloudprovider" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/etc/origin/cloudprovider", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0755", > "owner": "root", > "path": 
"/etc/origin/cloudprovider", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 22, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:50,253 p=5860 u=root | TASK [openshift_cloud_provider : include the defined cloud provider files] ****************************************************************************************************************************************************************** >2018-06-12 17:07:50,253 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cloud_provider/tasks/main.yml:15 >2018-06-12 17:07:50,278 p=5860 u=root | included: /root/openshift-ansible/roles/openshift_cloud_provider/tasks/aws.yml for ec2-54-186-168-249.us-west-2.compute.amazonaws.com >2018-06-12 17:07:50,287 p=5860 u=root | TASK [openshift_cloud_provider : Create cloud config file] ********************************************************************************************************************************************************************************** >2018-06-12 17:07:50,287 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cloud_provider/tasks/aws.yml:3 >2018-06-12 17:07:50,319 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:50,524 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "dest": "/etc/origin/cloudprovider/aws.conf", > "diff": { > "after": { > "path": "/etc/origin/cloudprovider/aws.conf", > "state": "touch" > }, > "before": { > "path": "/etc/origin/cloudprovider/aws.conf", > "state": "file" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "dest": "/etc/origin/cloudprovider/aws.conf", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 432, > "original_basename": null, > "owner": "root", > "path": 
"/etc/origin/cloudprovider/aws.conf", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "touch", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0660", > "owner": "root", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 28, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:50,533 p=5860 u=root | TASK [openshift_cloud_provider : Configure AWS cloud provider] ****************************************************************************************************************************************************************************** >2018-06-12 17:07:50,534 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_cloud_provider/tasks/aws.yml:12 >2018-06-12 17:07:50,655 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/ini_file.py >2018-06-12 17:07:50,866 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": "", > "after_header": "/etc/origin/cloudprovider/aws.conf (content)", > "before": "", > "before_header": "/etc/origin/cloudprovider/aws.conf (content)" > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "create": true, > "delimiter": null, > "dest": "/etc/origin/cloudprovider/aws.conf", > "directory_mode": null, > "follow": false, > "force": null, > "group": null, > "mode": null, > "no_extra_spaces": false, > "option": "Zone", > "owner": null, > "path": "/etc/origin/cloudprovider/aws.conf", > "regexp": null, > "remote_src": null, > "section": "Global", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "present", > "unsafe_writes": null, > "value": "us-west-2b" > } > }, > "mode": "0660", > "msg": "OK", > "owner": "root", > "path": "/etc/origin/cloudprovider/aws.conf", > 
"secontext": "unconfined_u:object_r:etc_t:s0", > "size": 28, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:50,875 p=5860 u=root | TASK [openshift_builddefaults : Set builddefaults] ****************************************************************************************************************************************************************************************** >2018-06-12 17:07:50,875 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_builddefaults/tasks/main.yml:2 >2018-06-12 17:07:50,910 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:51,444 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": {}, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > 
"portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > 
"public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": 
"ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "git_http_proxy": "", > "git_https_proxy": "", > "git_no_proxy": "", > "http_proxy": "", > "https_proxy": "", > "no_proxy": "" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "builddefaults", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:51,457 p=5860 u=root | TASK [openshift_builddefaults : Set builddefaults config structure] ************************************************************************************************************************************************************************* >2018-06-12 17:07:51,457 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_builddefaults/tasks/main.yml:15 >2018-06-12 17:07:51,501 p=5860 u=root | Using module file 
/root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:53,043 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > 
"admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": 
"https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", 
> "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "config": { > "BuildDefaults": { > "configuration": { > "annotations": "", > "apiVersion": "v1", > "env": [ > { > "name": "HTTP_PROXY", > "value": "" > }, > { > "name": "HTTPS_PROXY", > "value": "" > }, > { > "name": "NO_PROXY", > "value": "" > }, > { > "name": "http_proxy", > "value": "" > }, > { > "name": "https_proxy", > "value": "" > }, > { > "name": "no_proxy", > "value": "" > } > ], > "gitHTTPProxy": "", > "gitHTTPSProxy": "", > "gitNoProxy": "", > "imageLabels": "", > "kind": "BuildDefaultsConfig", > "nodeSelector": "", > "resources": { > "limits": { > "cpu": "", > "memory": "" > }, > "requests": { > "cpu": "", > "memory": "" > } > } > } > } > } > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "builddefaults", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:53,057 p=5860 u=root | TASK [openshift_buildoverrides : Set buildoverrides config structure] 
*********************************************************************************************************************************************************************** >2018-06-12 17:07:53,057 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_buildoverrides/tasks/main.yml:2 >2018-06-12 17:07:53,094 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:53,613 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": "cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": 
"172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > 
"loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > "54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } 
> }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "config": { > "BuildOverrides": { > "configuration": { > "annotations": "", > "apiVersion": "v1", > "forcePull": "", > "imageLabels": "", > "kind": "BuildOverridesConfig", > "nodeSelector": "", > "tolerations": "" > } > } > } > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "buildoverrides", > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:53,627 p=5860 u=root | TASK [nickhammond.logrotate : nickhammond.logrotate | Install logrotate] ******************************************************************************************************************************************************************** >2018-06-12 17:07:53,627 
p=5860 u=root | task path: /root/openshift-ansible/roles/nickhammond.logrotate/tasks/main.yml:2 >2018-06-12 17:07:53,660 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/yum.py >2018-06-12 17:07:54,058 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "attempts": 1, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "allow_downgrade": false, > "conf_file": null, > "disable_gpg_check": false, > "disablerepo": null, > "enablerepo": null, > "exclude": null, > "install_repoquery": true, > "installroot": "/", > "list": null, > "name": [ > "logrotate" > ], > "security": false, > "skip_broken": false, > "state": "present", > "update_cache": false, > "validate_certs": true > } > }, > "msg": "", > "rc": 0, > "results": [ > "logrotate-3.8.6-15.el7.x86_64 providing logrotate is already installed" > ] >} >2018-06-12 17:07:54,066 p=5860 u=root | TASK [nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d scripts] ************************************************************************************************************************************************************ >2018-06-12 17:07:54,067 p=5860 u=root | task path: /root/openshift-ansible/roles/nickhammond.logrotate/tasks/main.yml:8 >2018-06-12 17:07:54,088 p=5860 u=root | TASK [openshift_control_plane : fail] ******************************************************************************************************************************************************************************************************* >2018-06-12 17:07:54,088 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:7 >2018-06-12 17:07:54,105 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:54,113 p=5860 u=root | TASK [openshift_control_plane : Check that origin 
image is present] ************************************************************************************************************************************************************************* >2018-06-12 17:07:54,114 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:13 >2018-06-12 17:07:54,147 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:54,375 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "docker", > "images", > "-q", > "registry.reg-aws.openshift.com:443/openshift3/ose-control-plane:v3.10.0" > ], > "delta": "0:00:00.021537", > "end": "2018-06-12 17:07:54.359350", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "docker images -q \"registry.reg-aws.openshift.com:443/openshift3/ose-control-plane:v3.10.0\"", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:54.337813", > "stderr": "", > "stderr_lines": [], > "stdout": "377c1855dd3b", > "stdout_lines": [ > "377c1855dd3b" > ] >} >2018-06-12 17:07:54,384 p=5860 u=root | TASK [openshift_control_plane : Pre-pull Origin image] ************************************************************************************************************************************************************************************** >2018-06-12 17:07:54,384 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:18 >2018-06-12 17:07:54,398 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:54,407 p=5860 u=root | TASK [openshift_control_plane : Add iptables allow rules] 
*********************************************************************************************************************************************************************************** >2018-06-12 17:07:54,407 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/firewall.yml:4 >2018-06-12 17:07:54,457 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py >2018-06-12 17:07:54,668 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8443/tcp', u'service': u'api server https'}) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "action": "add", > "chain": "OS_FIREWALL_ALLOW", > "create_jump_rule": true, > "ip_version": "ipv4", > "jump_rule_chain": "INPUT", > "name": "api server https", > "port": "8443", > "protocol": "tcp" > } > }, > "item": { > "port": "8443/tcp", > "service": "api server https" > }, > "output": [] >} >2018-06-12 17:07:54,694 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py >2018-06-12 17:07:54,906 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8444/tcp', u'service': u'api controllers https'}) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "action": "add", > "chain": "OS_FIREWALL_ALLOW", > "create_jump_rule": true, > "ip_version": "ipv4", > "jump_rule_chain": "INPUT", > "name": "api controllers https", > "port": "8444", > "protocol": "tcp" > } > }, > "item": { > "port": "8444/tcp", > "service": "api controllers https" > }, > "output": [] >} >2018-06-12 17:07:54,930 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py >2018-06-12 17:07:55,140 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8053/tcp', u'service': u'skydns tcp'}) => { > "changed": 
false, > "failed": false, > "invocation": { > "module_args": { > "action": "add", > "chain": "OS_FIREWALL_ALLOW", > "create_jump_rule": true, > "ip_version": "ipv4", > "jump_rule_chain": "INPUT", > "name": "skydns tcp", > "port": "8053", > "protocol": "tcp" > } > }, > "item": { > "port": "8053/tcp", > "service": "skydns tcp" > }, > "output": [] >} >2018-06-12 17:07:55,164 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/os_firewall_manage_iptables.py >2018-06-12 17:07:55,368 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8053/udp', u'service': u'skydns udp'}) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "action": "add", > "chain": "OS_FIREWALL_ALLOW", > "create_jump_rule": true, > "ip_version": "ipv4", > "jump_rule_chain": "INPUT", > "name": "skydns udp", > "port": "8053", > "protocol": "udp" > } > }, > "item": { > "port": "8053/udp", > "service": "skydns udp" > }, > "output": [] >} >2018-06-12 17:07:55,379 p=5860 u=root | TASK [openshift_control_plane : Remove iptables rules] ************************************************************************************************************************************************************************************** >2018-06-12 17:07:55,379 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/firewall.yml:14 >2018-06-12 17:07:55,401 p=5860 u=root | TASK [openshift_control_plane : Add firewalld allow rules] ********************************************************************************************************************************************************************************** >2018-06-12 17:07:55,401 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/firewall.yml:26 >2018-06-12 17:07:55,430 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8443/tcp', u'service': u'api server https'}) 
=> { > "changed": false, > "item": { > "port": "8443/tcp", > "service": "api server https" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:55,438 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8444/tcp', u'service': u'api controllers https'}) => { > "changed": false, > "item": { > "port": "8444/tcp", > "service": "api controllers https" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:55,447 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8053/tcp', u'service': u'skydns tcp'}) => { > "changed": false, > "item": { > "port": "8053/tcp", > "service": "skydns tcp" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:55,455 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={u'port': u'8053/udp', u'service': u'skydns udp'}) => { > "changed": false, > "item": { > "port": "8053/udp", > "service": "skydns udp" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:55,465 p=5860 u=root | TASK [openshift_control_plane : Remove firewalld allow rules] ******************************************************************************************************************************************************************************* >2018-06-12 17:07:55,465 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/firewall.yml:36 >2018-06-12 17:07:55,488 p=5860 u=root | TASK [openshift_control_plane : Copy static master scripts] ********************************************************************************************************************************************************************************* >2018-06-12 17:07:55,488 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static_shim.yml:3 >2018-06-12 
17:07:55,569 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:55,732 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:55,898 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=scripts/docker/master-exec) => { > "changed": false, > "checksum": "e078962e4e6a8f78db166cabd4e3997cefd1e848", > "dest": "/usr/local/bin/master-exec", > "diff": { > "after": { > "path": "/usr/local/bin/master-exec" > }, > "before": { > "path": "/usr/local/bin/master-exec" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "dest": "/usr/local/bin/", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 320, > "original_basename": "master-exec", > "owner": null, > "path": "/usr/local/bin/master-exec", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "master-exec", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "item": "scripts/docker/master-exec", > "mode": "0500", > "owner": "root", > "path": "/usr/local/bin/master-exec", > "secontext": "system_u:object_r:bin_t:s0", > "size": 1100, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:55,955 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:56,114 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:56,281 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=scripts/docker/master-logs) => { > "changed": false, > "checksum": "b70b0e70e837bd2e240ee5a6184ca5ae289fb8d9", > "dest": "/usr/local/bin/master-logs", > "diff": { > "after": { > 
"path": "/usr/local/bin/master-logs" > }, > "before": { > "path": "/usr/local/bin/master-logs" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "dest": "/usr/local/bin/", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 320, > "original_basename": "master-logs", > "owner": null, > "path": "/usr/local/bin/master-logs", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "master-logs", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "item": "scripts/docker/master-logs", > "mode": "0500", > "owner": "root", > "path": "/usr/local/bin/master-logs", > "secontext": "system_u:object_r:bin_t:s0", > "size": 1112, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:56,338 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:56,502 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:56,669 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=scripts/docker/master-restart) => { > "changed": false, > "checksum": "efc4960858199e78fed6bc8c2779d4d2d3bb4f11", > "dest": "/usr/local/bin/master-restart", > "diff": { > "after": { > "path": "/usr/local/bin/master-restart" > }, > "before": { > "path": "/usr/local/bin/master-restart" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "dest": "/usr/local/bin/", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": 320, > "original_basename": "master-restart", > "owner": null, > "path": 
"/usr/local/bin/master-restart", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "master-restart", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "item": "scripts/docker/master-restart", > "mode": "0500", > "owner": "root", > "path": "/usr/local/bin/master-restart", > "secontext": "system_u:object_r:bin_t:s0", > "size": 1094, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:56,680 p=5860 u=root | TASK [openshift_control_plane : Ensure cri-tools installed] ********************************************************************************************************************************************************************************* >2018-06-12 17:07:56,681 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static_shim.yml:15 >2018-06-12 17:07:56,696 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:56,705 p=5860 u=root | TASK [openshift_control_plane : Create r_openshift_master_data_dir] ************************************************************************************************************************************************************************* >2018-06-12 17:07:56,705 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:35 >2018-06-12 17:07:56,736 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:56,950 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/var/lib/origin" > }, > "before": { > "path": "/var/lib/origin" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > 
"delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 493, > "original_basename": null, > "owner": "root", > "path": "/var/lib/origin", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0755", > "owner": "root", > "path": "/var/lib/origin", > "secontext": "system_u:object_r:var_lib_t:s0", > "size": 82, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:56,959 p=5860 u=root | TASK [openshift_control_plane : Create config parent directory if it does not exist] ******************************************************************************************************************************************************** >2018-06-12 17:07:56,959 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:43 >2018-06-12 17:07:56,987 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:57,198 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/etc/origin/master" > }, > "before": { > "path": "/etc/origin/master" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": null, > "owner": null, > "path": "/etc/origin/master", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0700", > "owner": "root", > 
"path": "/etc/origin/master", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 4096, > "state": "directory", > "uid": 0 >} >2018-06-12 17:07:57,207 p=5860 u=root | TASK [openshift_control_plane : Create the policy file if it does not already exist] ******************************************************************************************************************************************************** >2018-06-12 17:07:57,207 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:48 >2018-06-12 17:07:57,238 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:07:57,447 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": "oc adm create-bootstrap-policy-file\n --filename=/etc/origin/master/policy.json", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "oc adm create-bootstrap-policy-file\n --filename=/etc/origin/master/policy.json", > "_uses_shell": false, > "chdir": null, > "creates": "/etc/origin/master/policy.json", > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "stdout": "skipped, since /etc/origin/master/policy.json exists", > "stdout_lines": [ > "skipped, since /etc/origin/master/policy.json exists" > ] >} >2018-06-12 17:07:57,458 p=5860 u=root | TASK [openshift_control_plane : Create the scheduler config] ******************************************************************************************************************************************************************************** >2018-06-12 17:07:57,458 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:55 >2018-06-12 17:07:57,545 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:57,718 p=5860 u=root | Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:57,887 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "2953b4037c413a5acf89bf5ef868119b889f6fa4", > "dest": "/etc/origin/master/scheduler.json", > "diff": { > "after": { > "path": "/etc/origin/master/scheduler.json" > }, > "before": { > "path": "/etc/origin/master/scheduler.json" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": "True", > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/scheduler.json", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": "tmpPyVAXv", > "owner": null, > "path": "/etc/origin/master/scheduler.json", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "tmpPyVAXv", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0644", > "owner": "root", > "path": "/etc/origin/master/scheduler.json", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1923, > "state": "file", > "uid": 0 >} >2018-06-12 17:07:57,897 p=5860 u=root | TASK [openshift_control_plane : Install httpd-tools if needed] ****************************************************************************************************************************************************************************** >2018-06-12 17:07:57,897 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/htpass_provider.yml:2 >2018-06-12 17:07:57,918 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={'challenge': u'true', 'login': u'true', 'kind': u'AllowAllPasswordIdentityProvider', 'name': u'allow_all'}) => { > "changed": false, > "item": { > "challenge": "true", > "kind": 
"AllowAllPasswordIdentityProvider", > "login": "true", > "name": "allow_all" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:57,927 p=5860 u=root | TASK [openshift_control_plane : Create the htpasswd file if needed] ************************************************************************************************************************************************************************* >2018-06-12 17:07:57,927 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/htpass_provider.yml:11 >2018-06-12 17:07:57,947 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={'challenge': u'true', 'login': u'true', 'kind': u'AllowAllPasswordIdentityProvider', 'name': u'allow_all'}) => { > "changed": false, > "item": { > "challenge": "true", > "kind": "AllowAllPasswordIdentityProvider", > "login": "true", > "name": "allow_all" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:57,956 p=5860 u=root | TASK [openshift_control_plane : Ensure htpasswd file exists] ******************************************************************************************************************************************************************************** >2018-06-12 17:07:57,956 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/htpass_provider.yml:22 >2018-06-12 17:07:57,976 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={'challenge': u'true', 'login': u'true', 'kind': u'AllowAllPasswordIdentityProvider', 'name': u'allow_all'}) => { > "changed": false, > "item": { > "challenge": "true", > "kind": "AllowAllPasswordIdentityProvider", > "login": "true", > "name": "allow_all" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:57,985 p=5860 u=root | TASK [openshift_control_plane : Create the ldap ca file if needed] 
************************************************************************************************************************************************************************** >2018-06-12 17:07:57,985 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:63 >2018-06-12 17:07:58,004 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={'challenge': u'true', 'login': u'true', 'kind': u'AllowAllPasswordIdentityProvider', 'name': u'allow_all'}) => { > "changed": false, > "item": { > "challenge": "true", > "kind": "AllowAllPasswordIdentityProvider", > "login": "true", > "name": "allow_all" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:58,013 p=5860 u=root | TASK [openshift_control_plane : Create the openid ca file if needed] ************************************************************************************************************************************************************************ >2018-06-12 17:07:58,013 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:74 >2018-06-12 17:07:58,032 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={'challenge': u'true', 'login': u'true', 'kind': u'AllowAllPasswordIdentityProvider', 'name': u'allow_all'}) => { > "changed": false, > "item": { > "challenge": "true", > "kind": "AllowAllPasswordIdentityProvider", > "login": "true", > "name": "allow_all" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:58,040 p=5860 u=root | TASK [openshift_control_plane : Create the request header ca file if needed] **************************************************************************************************************************************************************** >2018-06-12 17:07:58,040 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:86 
>2018-06-12 17:07:58,060 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item={'challenge': u'true', 'login': u'true', 'kind': u'AllowAllPasswordIdentityProvider', 'name': u'allow_all'}) => { > "changed": false, > "item": { > "challenge": "true", > "kind": "AllowAllPasswordIdentityProvider", > "login": "true", > "name": "allow_all" > }, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:07:58,069 p=5860 u=root | TASK [openshift_control_plane : Set fact of all etcd host IPs] ****************************************************************************************************************************************************************************** >2018-06-12 17:07:58,069 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:98 >2018-06-12 17:07:58,099 p=5860 u=root | Using module file /root/openshift-ansible/roles/openshift_facts/library/openshift_facts.py >2018-06-12 17:07:58,638 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "openshift": { > "builddefaults": { > "config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > } > } > }, > "buildoverrides": { > "config": { > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > } > } > }, > "cloudprovider": { > "kind": "aws" > }, > "common": { > "all_hostnames": [ > "kubernetes.default", > "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "54.186.168.249", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "config_base": "/etc/origin", > "dns_domain": 
"cluster.local", > "generate_no_proxy_hosts": true, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "internal_hostnames": [ > "kubernetes.default", > "kubernetes.default.svc.cluster.local", > "kubernetes", > "openshift.default", > "172.31.50.118", > "openshift.default.svc", > "openshift.default.svc.cluster.local", > "ip-172-31-50-118.us-west-2.compute.internal", > "kubernetes.default.svc", > "openshift", > "172.24.0.1" > ], > "ip": "172.31.50.118", > "kube_svc_ip": "172.24.0.1", > "no_proxy_etcd_host_ips": "172.31.50.118", > "portal_net": "172.24.0.0/14", > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249", > "raw_hostname": "ip-172-31-50-118.us-west-2.compute.internal" > }, > "current_config": { > "roles": [ > "node", > "builddefaults", > "cloudprovider", > "master", > "buildoverrides" > ] > }, > "master": { > "admission_plugin_config": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > }, > "api_port": "8443", > "api_server_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ] > }, > "api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "api_use_ssl": true, > "bind_addr": "0.0.0.0", > "console_path": "/console", > "console_port": "8443", > "console_url": 
"https://ip-172-31-50-118.us-west-2.compute.internal:8443/console", > "console_use_ssl": true, > "controller_args": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "controllers_port": "8444", > "loopback_api_url": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "loopback_cluster_name": "ip-172-31-50-118-us-west-2-compute-internal:8443", > "loopback_context_name": "default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "loopback_user": "system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > "named_certificates": [], > "portal_net": "172.30.0.0/16", > "public_api_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "public_console_url": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console", > "registry_selector": "node-role.kubernetes.io/infra=true", > "registry_url": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "session_max_seconds": 3600, > "session_name": "ssn" > }, > "node": { > "bootstrapped": true, > "nodename": "ip-172-31-50-118.us-west-2.compute.internal", > "sdn_mtu": "8951" > }, > "provider": { > "metadata": { > "ami-id": "ami-f1064589", > "ami-launch-index": "0", > "ami-manifest-path": "(unknown)", > "block-device-mapping": { > "ami": "sda1", > "ebs1": "sdb", > "root": "/dev/sda1" > }, > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "instance-action": "none", > "instance-id": "i-0936de393175df6ba", > "instance-type": "m5.xlarge", > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "metrics": { > "vhostmd": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" > }, > "network": { > "interfaces": { > "macs": { > "02:42:24:b2:8d:1a": { > "device-number": "0", > "interface-id": "eni-e0ac6b0a", > "ipv4-associations": { > 
"54.186.168.249": "172.31.50.118" > }, > "local-hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "local-ipv4s": "172.31.50.118", > "mac": "02:42:24:b2:8d:1a", > "owner-id": "925374498059", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4s": "54.186.168.249", > "security-group-ids": "sg-5c5ace38", > "security-groups": "default", > "subnet-id": "subnet-4879292d", > "subnet-ipv4-cidr-block": "172.31.0.0/18", > "vpc-id": "vpc-33b5f656", > "vpc-ipv4-cidr-block": "172.31.0.0/16", > "vpc-ipv4-cidr-blocks": "172.31.0.0/16" > } > } > } > }, > "placement": { > "availability-zone": "us-west-2b" > }, > "profile": "hvm", > "public-hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public-ipv4": "54.186.168.249", > "public-keys/": "0=libra", > "reservation-id": "r-09891556570a1d8a4", > "security-groups": "default", > "services": { > "domain": "amazonaws.com", > "partition": "aws" > } > }, > "name": "aws", > "network": { > "hostname": "ip-172-31-50-118.us-west-2.compute.internal", > "interfaces": [ > { > "ips": [ > "172.31.50.118" > ], > "network_id": "subnet-4879292d", > "network_type": "vpc", > "public_ips": [ > "54.186.168.249" > ] > } > ], > "ip": "172.31.50.118", > "ipv6_enabled": false, > "public_hostname": "ec2-54-186-168-249.us-west-2.compute.amazonaws.com", > "public_ip": "54.186.168.249" > }, > "zone": "us-west-2b" > } > } > }, > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "additive_facts_to_overwrite": [], > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "directory_mode": null, > "filter": "*", > "follow": false, > "force": null, > "gather_subset": [ > "hardware", > "network", > "virtual", > "facter" > ], > "gather_timeout": 10, > "group": null, > "local_facts": { > "no_proxy_etcd_host_ips": "172.31.50.118" > }, > "mode": null, > "owner": null, > "regexp": null, > "remote_src": null, > "role": "common", > "selevel": null, > "serole": 
null, > "setype": null, > "seuser": null, > "src": null, > "unsafe_writes": null > } > } >} >2018-06-12 17:07:58,652 p=5860 u=root | TASK [openshift_control_plane : Create session secrets file] ******************************************************************************************************************************************************************************** >2018-06-12 17:07:58,652 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:104 >2018-06-12 17:07:58,746 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:58,914 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:07:59,080 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "606b2ebe2d092bc6f4defab5e704301ba82611cf", > "dest": "/etc/origin/master/session-secrets.yaml", > "diff": { > "after": { > "path": "/etc/origin/master/session-secrets.yaml" > }, > "before": { > "path": "/etc/origin/master/session-secrets.yaml" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/session-secrets.yaml", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": "root", > "mode": 384, > "original_basename": "sessionSecretsFile.yaml.v1.j2", > "owner": "root", > "path": "/etc/origin/master/session-secrets.yaml", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "sessionSecretsFile.yaml.v1.j2", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0600", > "owner": "root", > "path": "/etc/origin/master/session-secrets.yaml", > "secontext": "system_u:object_r:etc_t:s0", > "size": 147, > 
"state": "file", > "uid": 0 >} >2018-06-12 17:07:59,089 p=5860 u=root | TASK [openshift_control_plane : set_fact] *************************************************************************************************************************************************************************************************** >2018-06-12 17:07:59,089 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:112 >2018-06-12 17:07:59,121 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "ansible_facts": { > "translated_identity_providers": "- challenge: true\n login: true\n mappingMethod: claim\n name: allow_all\n provider:\n apiVersion: v1\n kind: AllowAllPasswordIdentityProvider\n" > }, > "changed": false, > "failed": false >} >2018-06-12 17:07:59,130 p=5860 u=root | TASK [openshift_control_plane : Create master config] *************************************************************************************************************************************************************************************** >2018-06-12 17:07:59,131 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:117 >2018-06-12 17:07:59,292 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:07:59,511 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:07:59,682 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "backup_file": "/etc/origin/master/master-config.yaml.19772.2018-06-12@17:07:59~", > "changed": true, > "checksum": "0acf9bf4b9870e9a12f79a2affe031a92cf4100a", > "dest": "/etc/origin/master/master-config.yaml", > "diff": [], > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": true, > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/master-config.yaml", > 
"directory_mode": null, > "follow": false, > "force": true, > "group": "root", > "local_follow": null, > "mode": 384, > "original_basename": "master.yaml.v1.j2", > "owner": "root", > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/root/.ansible/tmp/ansible-tmp-1528823279.25-161334302595966/source", > "unsafe_writes": null, > "validate": null > } > }, > "md5sum": "7c28e76470d4c202e5c9fb7fb3b72b2e", > "mode": "0600", > "owner": "root", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 6117, > "src": "/root/.ansible/tmp/ansible-tmp-1528823279.25-161334302595966/source", > "state": "file", > "uid": 0 >} >2018-06-12 17:07:59,690 p=5860 u=root | TASK [openshift_control_plane : Test local loopback context] ******************************************************************************************************************************************************************************** >2018-06-12 17:07:59,691 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:2 >2018-06-12 17:07:59,721 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:08:00,022 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": [ > "oc", > "config", > "view", > "--config=/etc/origin/master/openshift-master.kubeconfig" > ], > "delta": "0:00:00.090553", > "end": "2018-06-12 17:08:00.004327", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "oc config view --config=/etc/origin/master/openshift-master.kubeconfig", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:07:59.913774", > "stderr": "", > "stderr_lines": [], > "stdout": "apiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: 
REDACTED\n server: https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443\n name: ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443\n- cluster:\n certificate-authority-data: REDACTED\n server: https://ip-172-31-50-118.us-west-2.compute.internal:8443\n name: ip-172-31-50-118-us-west-2-compute-internal:8443\ncontexts:\n- context:\n cluster: ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443\n namespace: default\n user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443\n name: default/ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443/system:openshift-master\n- context:\n cluster: ip-172-31-50-118-us-west-2-compute-internal:8443\n namespace: default\n user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443\n name: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master\ncurrent-context: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master\nkind: Config\npreferences: {}\nusers:\n- name: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443\n user:\n client-certificate-data: REDACTED\n client-key-data: REDACTED", > "stdout_lines": [ > "apiVersion: v1", > "clusters:", > "- cluster:", > " certificate-authority-data: REDACTED", > " server: https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > " name: ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443", > "- cluster:", > " certificate-authority-data: REDACTED", > " server: https://ip-172-31-50-118.us-west-2.compute.internal:8443", > " name: ip-172-31-50-118-us-west-2-compute-internal:8443", > "contexts:", > "- context:", > " cluster: ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443", > " namespace: default", > " user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > " name: default/ec2-54-186-168-249-us-west-2-compute-amazonaws-com:8443/system:openshift-master", > "- context:", > " cluster: 
ip-172-31-50-118-us-west-2-compute-internal:8443", > " namespace: default", > " user: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > " name: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "current-context: default/ip-172-31-50-118-us-west-2-compute-internal:8443/system:openshift-master", > "kind: Config", > "preferences: {}", > "users:", > "- name: system:openshift-master/ip-172-31-50-118-us-west-2-compute-internal:8443", > " user:", > " client-certificate-data: REDACTED", > " client-key-data: REDACTED" > ] >} >2018-06-12 17:08:00,032 p=5860 u=root | TASK [openshift_control_plane : command] **************************************************************************************************************************************************************************************************** >2018-06-12 17:08:00,032 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:9 >2018-06-12 17:08:00,049 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:08:00,057 p=5860 u=root | TASK [openshift_control_plane : command] **************************************************************************************************************************************************************************************************** >2018-06-12 17:08:00,057 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:19 >2018-06-12 17:08:00,071 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:08:00,080 p=5860 u=root | TASK [openshift_control_plane : command] 
**************************************************************************************************************************************************************************************************** >2018-06-12 17:08:00,080 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/set_loopback_context.yml:29 >2018-06-12 17:08:00,094 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:08:00,103 p=5860 u=root | TASK [openshift_control_plane : Create the master service env file] ************************************************************************************************************************************************************************* >2018-06-12 17:08:00,103 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:128 >2018-06-12 17:08:00,198 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:08:00,365 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:08:00,532 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "checksum": "cac0e2828f7d71a39399557666653de81fbf2b12", > "dest": "/etc/origin/master/master.env", > "diff": { > "after": { > "path": "/etc/origin/master/master.env" > }, > "before": { > "path": "/etc/origin/master/master.env" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": "True", > "content": null, > "delimiter": null, > "dest": "/etc/origin/master/master.env", > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "original_basename": "master.env.j2", > "owner": null, > "path": "/etc/origin/master/master.env", > "recurse": false, > 
"regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "master.env.j2", > "state": "file", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0644", > "owner": "root", > "path": "/etc/origin/master/master.env", > "secontext": "system_u:object_r:etc_t:s0", > "size": 264, > "state": "file", > "uid": 0 >} >2018-06-12 17:08:00,541 p=5860 u=root | TASK [openshift_control_plane : Enable bootstrapping in the master config] ****************************************************************************************************************************************************************** >2018-06-12 17:08:00,541 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:2 >2018-06-12 17:08:00,575 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/yedit.py >2018-06-12 17:08:00,886 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "append": false, > "backup": false, > "backup_ext": ".20180612T170800", > "content": null, > "content_type": "yaml", > "curr_value": null, > "curr_value_format": "yaml", > "debug": false, > "edits": [ > { > "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-cert-file", > "value": [ > "/etc/origin/master/ca.crt" > ] > }, > { > "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-key-file", > "value": [ > "/etc/origin/master/ca.key" > ] > } > ], > "index": null, > "key": "", > "separator": ".", > "src": "/etc/origin/master/master-config.yaml", > "state": "present", > "update": false, > "value": null, > "value_type": "" > } > }, > "result": [ > { > "edit": { > "admissionConfig": { > "pluginConfig": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > 
"BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > } > }, > "aggregatorConfig": { > "proxyClientInfo": { > "certFile": "aggregator-front-proxy.crt", > "keyFile": "aggregator-front-proxy.key" > } > }, > "apiLevels": [ > "v1" > ], > "apiVersion": "v1", > "authConfig": { > "requestHeader": { > "clientCA": "front-proxy-ca.crt", > "clientCommonNames": [ > "aggregator-front-proxy" > ], > "extraHeaderPrefixes": [ > "X-Remote-Extra-" > ], > "groupHeaders": [ > "X-Remote-Group" > ], > "usernameHeaders": [ > "X-Remote-User" > ] > } > }, > "controllerConfig": { > "election": { > "lockName": "openshift-master-controllers" > }, > "serviceServingCert": { > "signer": { > "certFile": "service-signer.crt", > "keyFile": "service-signer.key" > } > } > }, > "controllers": "*", > "corsAllowedOrigins": [ > "(?i)//127\\.0\\.0\\.1(:|\\z)", > "(?i)//localhost(:|\\z)", > "(?i)//172\\.31\\.50\\.118(:|\\z)", > "(?i)//54\\.186\\.168\\.249(:|\\z)", > "(?i)//kubernetes\\.default(:|\\z)", > "(?i)//ec2\\-54\\-186\\-168\\-249\\.us\\-west\\-2\\.compute\\.amazonaws\\.com(:|\\z)", > "(?i)//kubernetes\\.default\\.svc\\.cluster\\.local(:|\\z)", > "(?i)//kubernetes(:|\\z)", > "(?i)//openshift\\.default(:|\\z)", > "(?i)//openshift\\.default\\.svc(:|\\z)", > "(?i)//openshift\\.default\\.svc\\.cluster\\.local(:|\\z)", > "(?i)//ip\\-172\\-31\\-50\\-118\\.us\\-west\\-2\\.compute\\.internal(:|\\z)", > "(?i)//kubernetes\\.default\\.svc(:|\\z)", > "(?i)//openshift(:|\\z)", > "(?i)//172\\.24\\.0\\.1(:|\\z)" > ], > "dnsConfig": { > 
"bindAddress": "0.0.0.0:8053", > "bindNetwork": "tcp4" > }, > "etcdClientInfo": { > "ca": "master.etcd-ca.crt", > "certFile": "master.etcd-client.crt", > "keyFile": "master.etcd-client.key", > "urls": [ > "https://ip-172-31-50-118.us-west-2.compute.internal:2379" > ] > }, > "etcdStorageConfig": { > "kubernetesStoragePrefix": "kubernetes.io", > "kubernetesStorageVersion": "v1", > "openShiftStoragePrefix": "openshift.io", > "openShiftStorageVersion": "v1" > }, > "imageConfig": { > "format": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "latest": false > }, > "imagePolicyConfig": { > "internalRegistryHostname": "docker-registry.default.svc:5000" > }, > "kind": "MasterConfig", > "kubeletClientInfo": { > "ca": "ca-bundle.crt", > "certFile": "master.kubelet-client.crt", > "keyFile": "master.kubelet-client.key", > "port": 10250 > }, > "kubernetesMasterConfig": { > "apiServerArguments": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "storage-backend": [ > "etcd3" > ], > "storage-media-type": [ > "application/vnd.kubernetes.protobuf" > ] > }, > "controllerArguments": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "cluster-signing-cert-file": [ > "/etc/origin/master/ca.crt" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "masterCount": 1, > "masterIP": "172.31.50.118", > "podEvictionTimeout": null, > "proxyClientInfo": { > "certFile": "master.proxy-client.crt", > "keyFile": "master.proxy-client.key" > }, > "schedulerArguments": null, > "schedulerConfigFile": "/etc/origin/master/scheduler.json", > "servicesNodePortRange": "", > "servicesSubnet": "172.24.0.0/14", > "staticNodeNames": [] > }, > "masterClients": { > "externalKubernetesClientConnectionOverrides": { > "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", > "burst": 400, > "contentType": "application/vnd.kubernetes.protobuf", > 
"qps": 200 > }, > "externalKubernetesKubeConfig": "", > "openshiftLoopbackClientConnectionOverrides": { > "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", > "burst": 600, > "contentType": "application/vnd.kubernetes.protobuf", > "qps": 300 > }, > "openshiftLoopbackKubeConfig": "openshift-master.kubeconfig" > }, > "masterPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "networkConfig": { > "clusterNetworks": [ > { > "cidr": "172.20.0.0/14", > "hostSubnetLength": 9 > } > ], > "externalIPNetworkCIDRs": [ > "0.0.0.0/0" > ], > "networkPluginName": "redhat/openshift-ovs-networkpolicy", > "serviceNetworkCIDR": "172.24.0.0/14" > }, > "oauthConfig": { > "assetPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console/", > "grantConfig": { > "method": "auto" > }, > "identityProviders": [ > { > "challenge": true, > "login": true, > "mappingMethod": "claim", > "name": "allow_all", > "provider": { > "apiVersion": "v1", > "kind": "AllowAllPasswordIdentityProvider" > } > } > ], > "masterCA": "ca-bundle.crt", > "masterPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "masterURL": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "sessionConfig": { > "sessionMaxAgeSeconds": 3600, > "sessionName": "ssn", > "sessionSecretsFile": "/etc/origin/master/session-secrets.yaml" > }, > "tokenConfig": { > "accessTokenMaxAgeSeconds": 86400, > "authorizeTokenMaxAgeSeconds": 500 > } > }, > "pauseControllers": false, > "policyConfig": { > "bootstrapPolicyFile": "/etc/origin/master/policy.json", > "openshiftInfrastructureNamespace": "openshift-infra", > "openshiftSharedResourcesNamespace": "openshift" > }, > "projectConfig": { > "defaultNodeSelector": "node-role.kubernetes.io/compute=true", > "projectRequestMessage": "", > "projectRequestTemplate": "", > "securityAllocator": { > "mcsAllocatorRange": "s0:/2", > "mcsLabelsPerProject": 5, > "uidAllocatorRange": 
"1000000000-1999999999/10000" > } > }, > "routingConfig": { > "subdomain": "apps.0612-g-9.qe.rhcloud.com" > }, > "serviceAccountConfig": { > "limitSecretReferences": false, > "managedNames": [ > "default", > "builder", > "deployer" > ], > "masterCA": "ca-bundle.crt", > "privateKeyFile": "serviceaccounts.private.key", > "publicKeyFiles": [ > "serviceaccounts.public.key" > ] > }, > "servingInfo": { > "bindAddress": "0.0.0.0:8443", > "bindNetwork": "tcp4", > "certFile": "master.server.crt", > "clientCA": "ca.crt", > "keyFile": "master.server.key", > "maxRequestsInFlight": 500, > "requestTimeoutSeconds": 3600 > }, > "volumeConfig": { > "dynamicProvisioningEnabled": true > } > }, > "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-cert-file" > }, > { > "edit": { > "admissionConfig": { > "pluginConfig": { > "BuildDefaults": { > "configuration": { > "apiVersion": "v1", > "env": [], > "kind": "BuildDefaultsConfig", > "resources": { > "limits": {}, > "requests": {} > } > } > }, > "BuildOverrides": { > "configuration": { > "apiVersion": "v1", > "kind": "BuildOverridesConfig" > } > }, > "openshift.io/ImagePolicy": { > "configuration": { > "apiVersion": "v1", > "executionRules": [ > { > "matchImageAnnotations": [ > { > "key": "images.openshift.io/deny-execution", > "value": "true" > } > ], > "name": "execution-denied", > "onResources": [ > { > "resource": "pods" > }, > { > "resource": "builds" > } > ], > "reject": true, > "skipOnResolutionFailure": true > } > ], > "kind": "ImagePolicyConfig" > } > } > } > }, > "aggregatorConfig": { > "proxyClientInfo": { > "certFile": "aggregator-front-proxy.crt", > "keyFile": "aggregator-front-proxy.key" > } > }, > "apiLevels": [ > "v1" > ], > "apiVersion": "v1", > "authConfig": { > "requestHeader": { > "clientCA": "front-proxy-ca.crt", > "clientCommonNames": [ > "aggregator-front-proxy" > ], > "extraHeaderPrefixes": [ > "X-Remote-Extra-" > ], > "groupHeaders": [ > "X-Remote-Group" > ], > "usernameHeaders": [ > 
"X-Remote-User" > ] > } > }, > "controllerConfig": { > "election": { > "lockName": "openshift-master-controllers" > }, > "serviceServingCert": { > "signer": { > "certFile": "service-signer.crt", > "keyFile": "service-signer.key" > } > } > }, > "controllers": "*", > "corsAllowedOrigins": [ > "(?i)//127\\.0\\.0\\.1(:|\\z)", > "(?i)//localhost(:|\\z)", > "(?i)//172\\.31\\.50\\.118(:|\\z)", > "(?i)//54\\.186\\.168\\.249(:|\\z)", > "(?i)//kubernetes\\.default(:|\\z)", > "(?i)//ec2\\-54\\-186\\-168\\-249\\.us\\-west\\-2\\.compute\\.amazonaws\\.com(:|\\z)", > "(?i)//kubernetes\\.default\\.svc\\.cluster\\.local(:|\\z)", > "(?i)//kubernetes(:|\\z)", > "(?i)//openshift\\.default(:|\\z)", > "(?i)//openshift\\.default\\.svc(:|\\z)", > "(?i)//openshift\\.default\\.svc\\.cluster\\.local(:|\\z)", > "(?i)//ip\\-172\\-31\\-50\\-118\\.us\\-west\\-2\\.compute\\.internal(:|\\z)", > "(?i)//kubernetes\\.default\\.svc(:|\\z)", > "(?i)//openshift(:|\\z)", > "(?i)//172\\.24\\.0\\.1(:|\\z)" > ], > "dnsConfig": { > "bindAddress": "0.0.0.0:8053", > "bindNetwork": "tcp4" > }, > "etcdClientInfo": { > "ca": "master.etcd-ca.crt", > "certFile": "master.etcd-client.crt", > "keyFile": "master.etcd-client.key", > "urls": [ > "https://ip-172-31-50-118.us-west-2.compute.internal:2379" > ] > }, > "etcdStorageConfig": { > "kubernetesStoragePrefix": "kubernetes.io", > "kubernetesStorageVersion": "v1", > "openShiftStoragePrefix": "openshift.io", > "openShiftStorageVersion": "v1" > }, > "imageConfig": { > "format": "registry.reg-aws.openshift.com:443/openshift3/ose-${component}:${version}", > "latest": false > }, > "imagePolicyConfig": { > "internalRegistryHostname": "docker-registry.default.svc:5000" > }, > "kind": "MasterConfig", > "kubeletClientInfo": { > "ca": "ca-bundle.crt", > "certFile": "master.kubelet-client.crt", > "keyFile": "master.kubelet-client.key", > "port": 10250 > }, > "kubernetesMasterConfig": { > "apiServerArguments": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > 
"cloud-provider": [ > "aws" > ], > "storage-backend": [ > "etcd3" > ], > "storage-media-type": [ > "application/vnd.kubernetes.protobuf" > ] > }, > "controllerArguments": { > "cloud-config": [ > "/etc/origin/cloudprovider/aws.conf" > ], > "cloud-provider": [ > "aws" > ], > "cluster-signing-cert-file": [ > "/etc/origin/master/ca.crt" > ], > "cluster-signing-key-file": [ > "/etc/origin/master/ca.key" > ], > "disable-attach-detach-reconcile-sync": [ > "true" > ] > }, > "masterCount": 1, > "masterIP": "172.31.50.118", > "podEvictionTimeout": null, > "proxyClientInfo": { > "certFile": "master.proxy-client.crt", > "keyFile": "master.proxy-client.key" > }, > "schedulerArguments": null, > "schedulerConfigFile": "/etc/origin/master/scheduler.json", > "servicesNodePortRange": "", > "servicesSubnet": "172.24.0.0/14", > "staticNodeNames": [] > }, > "masterClients": { > "externalKubernetesClientConnectionOverrides": { > "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", > "burst": 400, > "contentType": "application/vnd.kubernetes.protobuf", > "qps": 200 > }, > "externalKubernetesKubeConfig": "", > "openshiftLoopbackClientConnectionOverrides": { > "acceptContentTypes": "application/vnd.kubernetes.protobuf,application/json", > "burst": 600, > "contentType": "application/vnd.kubernetes.protobuf", > "qps": 300 > }, > "openshiftLoopbackKubeConfig": "openshift-master.kubeconfig" > }, > "masterPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "networkConfig": { > "clusterNetworks": [ > { > "cidr": "172.20.0.0/14", > "hostSubnetLength": 9 > } > ], > "externalIPNetworkCIDRs": [ > "0.0.0.0/0" > ], > "networkPluginName": "redhat/openshift-ovs-networkpolicy", > "serviceNetworkCIDR": "172.24.0.0/14" > }, > "oauthConfig": { > "assetPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443/console/", > "grantConfig": { > "method": "auto" > }, > "identityProviders": [ > { > "challenge": true, > "login": true, > 
"mappingMethod": "claim", > "name": "allow_all", > "provider": { > "apiVersion": "v1", > "kind": "AllowAllPasswordIdentityProvider" > } > } > ], > "masterCA": "ca-bundle.crt", > "masterPublicURL": "https://ec2-54-186-168-249.us-west-2.compute.amazonaws.com:8443", > "masterURL": "https://ip-172-31-50-118.us-west-2.compute.internal:8443", > "sessionConfig": { > "sessionMaxAgeSeconds": 3600, > "sessionName": "ssn", > "sessionSecretsFile": "/etc/origin/master/session-secrets.yaml" > }, > "tokenConfig": { > "accessTokenMaxAgeSeconds": 86400, > "authorizeTokenMaxAgeSeconds": 500 > } > }, > "pauseControllers": false, > "policyConfig": { > "bootstrapPolicyFile": "/etc/origin/master/policy.json", > "openshiftInfrastructureNamespace": "openshift-infra", > "openshiftSharedResourcesNamespace": "openshift" > }, > "projectConfig": { > "defaultNodeSelector": "node-role.kubernetes.io/compute=true", > "projectRequestMessage": "", > "projectRequestTemplate": "", > "securityAllocator": { > "mcsAllocatorRange": "s0:/2", > "mcsLabelsPerProject": 5, > "uidAllocatorRange": "1000000000-1999999999/10000" > } > }, > "routingConfig": { > "subdomain": "apps.0612-g-9.qe.rhcloud.com" > }, > "serviceAccountConfig": { > "limitSecretReferences": false, > "managedNames": [ > "default", > "builder", > "deployer" > ], > "masterCA": "ca-bundle.crt", > "privateKeyFile": "serviceaccounts.private.key", > "publicKeyFiles": [ > "serviceaccounts.public.key" > ] > }, > "servingInfo": { > "bindAddress": "0.0.0.0:8443", > "bindNetwork": "tcp4", > "certFile": "master.server.crt", > "clientCA": "ca.crt", > "keyFile": "master.server.key", > "maxRequestsInFlight": 500, > "requestTimeoutSeconds": 3600 > }, > "volumeConfig": { > "dynamicProvisioningEnabled": true > } > }, > "key": "kubernetesMasterConfig.controllerArguments.cluster-signing-key-file" > } > ], > "state": "present" >} >2018-06-12 17:08:00,897 p=5860 u=root | TASK [openshift_control_plane : Create temp directory for static pods] 
********************************************************************************************************************************************************************** >2018-06-12 17:08:00,898 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:15 >2018-06-12 17:08:00,926 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:08:01,132 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "cmd": [ > "mktemp", > "-d", > "/tmp/openshift-ansible-XXXXXX" > ], > "delta": "0:00:00.002367", > "end": "2018-06-12 17:08:01.118156", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "mktemp -d /tmp/openshift-ansible-XXXXXX", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:08:01.115789", > "stderr": "", > "stderr_lines": [], > "stdout": "/tmp/openshift-ansible-wQT6ZQ", > "stdout_lines": [ > "/tmp/openshift-ansible-wQT6ZQ" > ] >} >2018-06-12 17:08:01,142 p=5860 u=root | TASK [openshift_control_plane : Prepare master static pods] ********************************************************************************************************************************************************************************* >2018-06-12 17:08:01,142 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:20 >2018-06-12 17:08:01,220 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:08:01,389 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:08:01,600 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:08:01,770 p=5860 u=root | changed: 
[ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=apiserver.yaml) => { > "changed": true, > "checksum": "595b0dc160e75156dc0b71a2ed1c92bd4caa904e", > "dest": "/tmp/openshift-ansible-wQT6ZQ/apiserver.yaml", > "diff": [], > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/tmp/openshift-ansible-wQT6ZQ/apiserver.yaml", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": "apiserver.yaml", > "owner": null, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/root/.ansible/tmp/ansible-tmp-1528823281.18-220323956556188/source", > "unsafe_writes": null, > "validate": null > } > }, > "item": "apiserver.yaml", > "md5sum": "914e1a6e4b813177622e9a9f77129c77", > "mode": "0600", > "owner": "root", > "secontext": "unconfined_u:object_r:admin_home_t:s0", > "size": 1490, > "src": "/root/.ansible/tmp/ansible-tmp-1528823281.18-220323956556188/source", > "state": "file", > "uid": 0 >} >2018-06-12 17:08:01,833 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:08:01,997 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/stat.py >2018-06-12 17:08:02,198 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:08:02,359 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=controller.yaml) => { > "changed": true, > "checksum": "b375caf49d06dc0fc7a9605910cc80b32ca95b66", > "dest": "/tmp/openshift-ansible-wQT6ZQ/controller.yaml", > "diff": [], > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > 
"dest": "/tmp/openshift-ansible-wQT6ZQ/controller.yaml", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": "controller.yaml", > "owner": null, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/root/.ansible/tmp/ansible-tmp-1528823281.79-280851555288865/source", > "unsafe_writes": null, > "validate": null > } > }, > "item": "controller.yaml", > "md5sum": "07bd08c4acb917938f9175bd6334fb0f", > "mode": "0600", > "owner": "root", > "secontext": "unconfined_u:object_r:admin_home_t:s0", > "size": 1305, > "src": "/root/.ansible/tmp/ansible-tmp-1528823281.79-280851555288865/source", > "state": "file", > "uid": 0 >} >2018-06-12 17:08:02,369 p=5860 u=root | TASK [openshift_control_plane : Update master static pods] ********************************************************************************************************************************************************************************** >2018-06-12 17:08:02,369 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:29 >2018-06-12 17:08:02,406 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/yedit.py >2018-06-12 17:08:02,661 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=apiserver.yaml) => { > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "append": false, > "backup": false, > "backup_ext": ".20180612T170802", > "content": null, > "content_type": "yaml", > "curr_value": null, > "curr_value_format": "yaml", > "debug": false, > "edits": [ > { > "key": "spec.containers[0].image", > "value": "registry.reg-aws.openshift.com:443/openshift3/ose-control-plane:v3.10.0" > } > ], > "index": null, > "key": "", > "separator": ".", > "src": "/tmp/openshift-ansible-wQT6ZQ/apiserver.yaml", > "state": "present", > "update": false, > 
"value": null, > "value_type": "" > } > }, > "item": "apiserver.yaml", > "result": [ > { > "edit": { > "apiVersion": "v1", > "kind": "Pod", > "metadata": { > "annotations": { > "scheduler.alpha.kubernetes.io/critical-pod": "" > }, > "labels": { > "openshift.io/component": "api", > "openshift.io/control-plane": "true" > }, > "name": "master-api", > "namespace": "kube-system" > }, > "spec": { > "containers": [ > { > "args": [ > "#!/bin/bash\nset -euo pipefail\nif [[ -f /etc/origin/master/master.env ]]; then\n set -o allexport\n source /etc/origin/master/master.env\nfi\nexec openshift start master api --config=/etc/origin/master/master-config.yaml --loglevel=${DEBUG_LOGLEVEL:-2}\n" > ], > "command": [ > "/bin/bash", > "-c" > ], > "image": "registry.reg-aws.openshift.com:443/openshift3/ose-control-plane:v3.10.0", > "livenessProbe": { > "httpGet": { > "path": "healthz", > "port": 8443, > "scheme": "HTTPS" > }, > "initialDelaySeconds": 45, > "timeoutSeconds": 10 > }, > "name": "api", > "readinessProbe": { > "httpGet": { > "path": "healthz/ready", > "port": 8443, > "scheme": "HTTPS" > }, > "initialDelaySeconds": 10, > "timeoutSeconds": 10 > }, > "securityContext": { > "privileged": true > }, > "volumeMounts": [ > { > "mountPath": "/etc/origin/master/", > "name": "master-config" > }, > { > "mountPath": "/etc/origin/cloudprovider/", > "name": "master-cloud-provider" > }, > { > "mountPath": "/var/lib/origin/", > "name": "master-data" > } > ] > } > ], > "hostNetwork": true, > "restartPolicy": "Always", > "volumes": [ > { > "hostPath": { > "path": "/etc/origin/master/" > }, > "name": "master-config" > }, > { > "hostPath": { > "path": "/etc/origin/cloudprovider" > }, > "name": "master-cloud-provider" > }, > { > "hostPath": { > "path": "/var/lib/origin" > }, > "name": "master-data" > } > ] > } > }, > "key": "spec.containers[0].image" > } > ], > "state": "present" >} >2018-06-12 17:08:02,676 p=5860 u=root | Using module file 
/root/openshift-ansible/roles/lib_utils/library/yedit.py >2018-06-12 17:08:02,920 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=controller.yaml) => { > "changed": true, > "failed": false, > "invocation": { > "module_args": { > "append": false, > "backup": false, > "backup_ext": ".20180612T170802", > "content": null, > "content_type": "yaml", > "curr_value": null, > "curr_value_format": "yaml", > "debug": false, > "edits": [ > { > "key": "spec.containers[0].image", > "value": "registry.reg-aws.openshift.com:443/openshift3/ose-control-plane:v3.10.0" > } > ], > "index": null, > "key": "", > "separator": ".", > "src": "/tmp/openshift-ansible-wQT6ZQ/controller.yaml", > "state": "present", > "update": false, > "value": null, > "value_type": "" > } > }, > "item": "controller.yaml", > "result": [ > { > "edit": { > "apiVersion": "v1", > "kind": "Pod", > "metadata": { > "annotations": { > "scheduler.alpha.kubernetes.io/critical-pod": "" > }, > "labels": { > "openshift.io/component": "controllers", > "openshift.io/control-plane": "true" > }, > "name": "master-controllers", > "namespace": "kube-system" > }, > "spec": { > "containers": [ > { > "args": [ > "#!/bin/bash\nset -euo pipefail\nif [[ -f /etc/origin/master/master.env ]]; then\n set -o allexport\n source /etc/origin/master/master.env\nfi\nexec openshift start master controllers --config=/etc/origin/master/master-config.yaml --listen=https://0.0.0.0:8444 --loglevel=${DEBUG_LOGLEVEL:-2}\n" > ], > "command": [ > "/bin/bash", > "-c" > ], > "image": "registry.reg-aws.openshift.com:443/openshift3/ose-control-plane:v3.10.0", > "livenessProbe": { > "httpGet": { > "path": "healthz", > "port": 8444, > "scheme": "HTTPS" > } > }, > "name": "controllers", > "securityContext": { > "privileged": true > }, > "volumeMounts": [ > { > "mountPath": "/etc/origin/master/", > "name": "master-config" > }, > { > "mountPath": "/etc/origin/cloudprovider/", > "name": "master-cloud-provider" > } > ] > } > ], 
> "hostNetwork": true, > "restartPolicy": "Always", > "volumes": [ > { > "hostPath": { > "path": "/etc/origin/master/" > }, > "name": "master-config" > }, > { > "hostPath": { > "path": "/etc/origin/cloudprovider" > }, > "name": "master-cloud-provider" > } > ] > } > }, > "key": "spec.containers[0].image" > } > ], > "state": "present" >} >2018-06-12 17:08:02,932 p=5860 u=root | TASK [openshift_control_plane : Update master static pod (api)] ***************************************************************************************************************************************************************************** >2018-06-12 17:08:02,932 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:39 >2018-06-12 17:08:02,965 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_utils/library/yedit.py >2018-06-12 17:08:03,203 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "append": false, > "backup": false, > "backup_ext": ".20180612T170803", > "content": null, > "content_type": "yaml", > "curr_value": null, > "curr_value_format": "yaml", > "debug": false, > "edits": [ > { > "key": "spec.containers[0].livenessProbe.httpGet.port", > "value": "8443" > }, > { > "key": "spec.containers[0].readinessProbe.httpGet.port", > "value": "8443" > } > ], > "index": null, > "key": "", > "separator": ".", > "src": "/tmp/openshift-ansible-wQT6ZQ/apiserver.yaml", > "state": "present", > "update": false, > "value": null, > "value_type": "" > } > }, > "result": [], > "state": "present" >} >2018-06-12 17:08:03,212 p=5860 u=root | TASK [openshift_control_plane : ensure pod location exists] ********************************************************************************************************************************************************************************* >2018-06-12 17:08:03,212 p=5860 u=root | task path: 
/root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:48 >2018-06-12 17:08:03,243 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:08:03,449 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/etc/origin/node/pods/" > }, > "before": { > "path": "/etc/origin/node/pods/" > } > }, > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": "0755", > "original_basename": null, > "owner": null, > "path": "/etc/origin/node/pods/", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "directory", > "unsafe_writes": null, > "validate": null > } > }, > "mode": "0755", > "owner": "root", > "path": "/etc/origin/node/pods/", > "secontext": "unconfined_u:object_r:etc_t:s0", > "size": 68, > "state": "directory", > "uid": 0 >} >2018-06-12 17:08:03,458 p=5860 u=root | TASK [openshift_control_plane : Update master static pods] ********************************************************************************************************************************************************************************** >2018-06-12 17:08:03,459 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:54 >2018-06-12 17:08:03,492 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:08:03,704 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=apiserver.yaml) => { > "changed": false, > "checksum": "7b9a781c1d7e35d48104f37df2294688a1041f61", > "dest": "/etc/origin/node/pods/apiserver.yaml", > 
"failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/node/pods/apiserver.yaml", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/tmp/openshift-ansible-wQT6ZQ/apiserver.yaml", > "unsafe_writes": null, > "validate": null > } > }, > "item": "apiserver.yaml", > "md5sum": "c82695dadff1ef07b34cf336efc40852", > "mode": "0600", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 1529, > "src": "/tmp/openshift-ansible-wQT6ZQ/apiserver.yaml", > "state": "file", > "uid": 0 >} >2018-06-12 17:08:03,717 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:08:03,922 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=controller.yaml) => { > "changed": false, > "checksum": "af12478855102fb4eb57f1110893a034de812102", > "dest": "/etc/origin/node/pods/controller.yaml", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/node/pods/controller.yaml", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/tmp/openshift-ansible-wQT6ZQ/controller.yaml", > "unsafe_writes": null, > "validate": null > } > }, > "item": "controller.yaml", > "md5sum": "d89c1feeb38f7c6f775eeddfa169afa0", > "mode": "0600", > "owner": "root", > "secontext": 
"system_u:object_r:etc_t:s0", > "size": 1255, > "src": "/tmp/openshift-ansible-wQT6ZQ/controller.yaml", > "state": "file", > "uid": 0 >} >2018-06-12 17:08:03,931 p=5860 u=root | TASK [openshift_control_plane : Remove old files in /etc/sysconfig] ************************************************************************************************************************************************************************* >2018-06-12 17:08:03,932 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:64 >2018-06-12 17:08:03,967 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:08:04,181 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/sysconfig/atomic-openshift-master-api) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "name": "/etc/sysconfig/atomic-openshift-master-api", > "original_basename": null, > "owner": null, > "path": "/etc/sysconfig/atomic-openshift-master-api", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/sysconfig/atomic-openshift-master-api", > "path": "/etc/sysconfig/atomic-openshift-master-api", > "state": "absent" >} >2018-06-12 17:08:04,196 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:08:04,401 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/sysconfig/atomic-openshift-master-controllers) => { > "changed": false, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > 
"backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "name": "/etc/sysconfig/atomic-openshift-master-controllers", > "original_basename": null, > "owner": null, > "path": "/etc/sysconfig/atomic-openshift-master-controllers", > "recurse": false, > "regexp": null, > "remote_src": null, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/sysconfig/atomic-openshift-master-controllers", > "path": "/etc/sysconfig/atomic-openshift-master-controllers", > "state": "absent" >} >2018-06-12 17:08:04,413 p=5860 u=root | TASK [openshift_control_plane : Remove temporary directory] ********************************************************************************************************************************************************************************* >2018-06-12 17:08:04,413 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/static.yml:72 >2018-06-12 17:08:04,448 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/file.py >2018-06-12 17:08:04,680 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "diff": { > "after": { > "path": "/tmp/openshift-ansible-wQT6ZQ", > "state": "absent" > }, > "before": { > "path": "/tmp/openshift-ansible-wQT6ZQ", > "state": "directory" > } > }, > "failed": false, > "invocation": { > "module_args": { > "attributes": null, > "backup": null, > "content": null, > "delimiter": null, > "diff_peek": null, > "directory_mode": null, > "follow": false, > "force": false, > "group": null, > "mode": null, > "name": "/tmp/openshift-ansible-wQT6ZQ", > "original_basename": null, > "owner": null, > "path": "/tmp/openshift-ansible-wQT6ZQ", > "recurse": false, > "regexp": null, > "remote_src": null, > 
"selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": null, > "state": "absent", > "unsafe_writes": null, > "validate": null > } > }, > "path": "/tmp/openshift-ansible-wQT6ZQ", > "state": "absent" >} >2018-06-12 17:08:04,689 p=5860 u=root | TASK [openshift_control_plane : Establish the default bootstrap kubeconfig for masters] ***************************************************************************************************************************************************** >2018-06-12 17:08:04,689 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:136 >2018-06-12 17:08:04,719 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:08:04,931 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/node/bootstrap.kubeconfig) => { > "changed": false, > "checksum": "170f72435bddbedde36a24675281c99ecfc63174", > "dest": "/etc/origin/node/bootstrap.kubeconfig", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/node/bootstrap.kubeconfig", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/etc/origin/master/admin.kubeconfig", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/origin/node/bootstrap.kubeconfig", > "md5sum": "d49ac035895b5cee4a400fd0a4f1f94b", > "mode": "0600", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 7776, > "src": "/etc/origin/master/admin.kubeconfig", > "state": "file", > "uid": 0 >} >2018-06-12 17:08:04,943 p=5860 u=root | Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/files/copy.py >2018-06-12 17:08:05,146 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => (item=/etc/origin/node/node.kubeconfig) => { > "changed": false, > "checksum": "170f72435bddbedde36a24675281c99ecfc63174", > "dest": "/etc/origin/node/node.kubeconfig", > "failed": false, > "gid": 0, > "group": "root", > "invocation": { > "module_args": { > "attributes": null, > "backup": false, > "content": null, > "delimiter": null, > "dest": "/etc/origin/node/node.kubeconfig", > "directory_mode": null, > "follow": false, > "force": true, > "group": null, > "local_follow": null, > "mode": 384, > "original_basename": null, > "owner": null, > "regexp": null, > "remote_src": true, > "selevel": null, > "serole": null, > "setype": null, > "seuser": null, > "src": "/etc/origin/master/admin.kubeconfig", > "unsafe_writes": null, > "validate": null > } > }, > "item": "/etc/origin/node/node.kubeconfig", > "md5sum": "d49ac035895b5cee4a400fd0a4f1f94b", > "mode": "0600", > "owner": "root", > "secontext": "system_u:object_r:etc_t:s0", > "size": 7776, > "src": "/etc/origin/master/admin.kubeconfig", > "state": "file", > "uid": 0 >} >2018-06-12 17:08:05,156 p=5860 u=root | TASK [openshift_control_plane : Check status of control plane image pre-pull] *************************************************************************************************************************************************************** >2018-06-12 17:08:05,156 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:148 >2018-06-12 17:08:05,172 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:08:05,182 p=5860 u=root | TASK [openshift_control_plane : Check status of etcd image pre-pull] 
************************************************************************************************************************************************************************ >2018-06-12 17:08:05,182 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:158 >2018-06-12 17:08:05,202 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:08:05,210 p=5860 u=root | TASK [openshift_control_plane : Start and enable self-hosting node] ************************************************************************************************************************************************************************* >2018-06-12 17:08:05,210 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:171 >2018-06-12 17:08:05,414 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/systemd.py >2018-06-12 17:08:06,144 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "enabled": true, > "failed": false, > "invocation": { > "module_args": { > "daemon_reload": false, > "enabled": true, > "masked": null, > "name": "atomic-openshift-node", > "no_block": false, > "state": "restarted", > "user": false > } > }, > "name": "atomic-openshift-node", > "state": "started", > "status": { > "ActiveEnterTimestamp": "Tue 2018-06-12 16:24:29 UTC", > "ActiveEnterTimestampMonotonic": "530035397", > "ActiveExitTimestampMonotonic": "0", > "ActiveState": "active", > "After": "system.slice -.mount chronyd.service dnsmasq.service systemd-journald.socket docker.service basic.target ntpd.service", > "AllowIsolate": "no", > "AmbientCapabilities": "0", > "AssertResult": "yes", > "AssertTimestamp": "Tue 2018-06-12 16:24:25 UTC", > "AssertTimestampMonotonic": "526556140", > "Before": "shutdown.target multi-user.target", > 
"BlockIOAccounting": "yes", > "BlockIOWeight": "18446744073709551615", > "CPUAccounting": "yes", > "CPUQuotaPerSecUSec": "infinity", > "CPUSchedulingPolicy": "0", > "CPUSchedulingPriority": "0", > "CPUSchedulingResetOnFork": "no", > "CPUShares": "18446744073709551615", > "CanIsolate": "no", > "CanReload": "no", > "CanStart": "yes", > "CanStop": "yes", > "CapabilityBoundingSet": "18446744073709551615", > "ConditionResult": "yes", > "ConditionTimestamp": "Tue 2018-06-12 16:24:25 UTC", > "ConditionTimestampMonotonic": "526556139", > "Conflicts": "shutdown.target", > "ControlGroup": "/system.slice/atomic-openshift-node.service", > "ControlPID": "0", > "DefaultDependencies": "yes", > "Delegate": "no", > "Description": "OpenShift Node", > "DevicePolicy": "auto", > "Documentation": "https://github.com/openshift/origin", > "EnvironmentFile": "/etc/sysconfig/atomic-openshift-node (ignore_errors=no)", > "ExecMainCode": "0", > "ExecMainExitTimestampMonotonic": "0", > "ExecMainPID": "11300", > "ExecMainStartTimestamp": "Tue 2018-06-12 16:24:25 UTC", > "ExecMainStartTimestampMonotonic": "526557793", > "ExecMainStatus": "0", > "ExecStart": "{ path=/usr/local/bin/openshift-node ; argv[]=/usr/local/bin/openshift-node ; ignore_errors=no ; start_time=[Tue 2018-06-12 16:24:25 UTC] ; stop_time=[n/a] ; pid=11300 ; code=(null) ; status=0/0 }", > "FailureAction": "none", > "FileDescriptorStoreMax": "0", > "FragmentPath": "/etc/systemd/system/atomic-openshift-node.service", > "GuessMainPID": "yes", > "IOScheduling": "0", > "Id": "atomic-openshift-node.service", > "IgnoreOnIsolate": "no", > "IgnoreOnSnapshot": "no", > "IgnoreSIGPIPE": "yes", > "InactiveEnterTimestampMonotonic": "0", > "InactiveExitTimestamp": "Tue 2018-06-12 16:24:25 UTC", > "InactiveExitTimestampMonotonic": "526557819", > "JobTimeoutAction": "none", > "JobTimeoutUSec": "0", > "KillMode": "control-group", > "KillSignal": "15", > "LimitAS": "18446744073709551615", > "LimitCORE": "18446744073709551615", > "LimitCPU": 
"18446744073709551615", > "LimitDATA": "18446744073709551615", > "LimitFSIZE": "18446744073709551615", > "LimitLOCKS": "18446744073709551615", > "LimitMEMLOCK": "65536", > "LimitMSGQUEUE": "819200", > "LimitNICE": "0", > "LimitNOFILE": "65536", > "LimitNPROC": "61510", > "LimitRSS": "18446744073709551615", > "LimitRTPRIO": "0", > "LimitRTTIME": "18446744073709551615", > "LimitSIGPENDING": "61510", > "LimitSTACK": "18446744073709551615", > "LoadState": "loaded", > "MainPID": "11300", > "MemoryAccounting": "yes", > "MemoryCurrent": "155357184", > "MemoryLimit": "18446744073709551615", > "MountFlags": "0", > "Names": "atomic-openshift-node.service", > "NeedDaemonReload": "no", > "Nice": "0", > "NoNewPrivileges": "no", > "NonBlocking": "no", > "NotifyAccess": "main", > "OOMScoreAdjust": "-999", > "OnFailureJobMode": "replace", > "PermissionsStartOnly": "no", > "PrivateDevices": "no", > "PrivateNetwork": "no", > "PrivateTmp": "no", > "ProtectHome": "no", > "ProtectSystem": "no", > "RefuseManualStart": "no", > "RefuseManualStop": "no", > "RemainAfterExit": "no", > "Requires": "basic.target -.mount", > "RequiresMountsFor": "/var/lib/origin", > "Restart": "always", > "RestartUSec": "5s", > "Result": "success", > "RootDirectoryStartOnly": "no", > "RuntimeDirectoryMode": "0755", > "SameProcessGroup": "no", > "SecureBits": "0", > "SendSIGHUP": "no", > "SendSIGKILL": "yes", > "Slice": "system.slice", > "StandardError": "inherit", > "StandardInput": "null", > "StandardOutput": "journal", > "StartLimitAction": "none", > "StartLimitBurst": "5", > "StartLimitInterval": "10000000", > "StartupBlockIOWeight": "18446744073709551615", > "StartupCPUShares": "18446744073709551615", > "StatusErrno": "0", > "StopWhenUnneeded": "no", > "SubState": "running", > "SyslogIdentifier": "atomic-openshift-node", > "SyslogLevelPrefix": "yes", > "SyslogPriority": "30", > "SystemCallErrorNumber": "0", > "TTYReset": "no", > "TTYVHangup": "no", > "TTYVTDisallocate": "no", > "TasksAccounting": "no", > 
"TasksCurrent": "18446744073709551615", > "TasksMax": "18446744073709551615", > "TimeoutStartUSec": "5min", > "TimeoutStopUSec": "1min 30s", > "TimerSlackNSec": "50000", > "Transient": "no", > "Type": "notify", > "UMask": "0022", > "UnitFilePreset": "disabled", > "UnitFileState": "enabled", > "WantedBy": "multi-user.target", > "Wants": "docker.service dnsmasq.service system.slice", > "WatchdogTimestamp": "Tue 2018-06-12 16:24:29 UTC", > "WatchdogTimestampMonotonic": "530035343", > "WatchdogUSec": "0", > "WorkingDirectory": "/var/lib/origin" > } >} >2018-06-12 17:08:06,164 p=5860 u=root | TASK [openshift_control_plane : Get node logs] ********************************************************************************************************************************************************************************************** >2018-06-12 17:08:06,164 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:181 >2018-06-12 17:08:06,183 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": false, > "skip_reason": "Conditional result was False", > "skipped": true >} >2018-06-12 17:08:06,192 p=5860 u=root | TASK [openshift_control_plane : debug] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:08:06,192 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:185 >2018-06-12 17:08:06,208 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "skip_reason": "Conditional result was False" >} >2018-06-12 17:08:06,218 p=5860 u=root | TASK [openshift_control_plane : fail] ******************************************************************************************************************************************************************************************************* 
>2018-06-12 17:08:06,219 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:187
>2018-06-12 17:08:06,236 p=5860 u=root | skipping: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => {
>    "changed": false,
>    "skip_reason": "Conditional result was False",
>    "skipped": true
>}
>2018-06-12 17:08:06,245 p=5860 u=root | TASK [openshift_control_plane : Wait for control plane pods to appear] **********************************************************************************************************************************************************************
>2018-06-12 17:08:06,245 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:190
>2018-06-12 17:08:06,482 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:08:06,855 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).Result was: {
>    "attempts": 1,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:08:11,856 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:08:12,203 p=5860 u=root | FAILED - RETRYING: Wait for control plane
pods to appear (59 retries left).Result was: {
>    "attempts": 2,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": "" 
>    },
>    "retries": 61
>}
>[... retry attempts 3-24 (17:08:17 through 17:10:10 UTC) elided: the same oc_obj.py task repeated every ~5 seconds, each attempt failing with "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?" ...]
} > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:15,673 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:16,025 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (36 retries left).Result was: { > "attempts": 25, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:21,025 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:21,405 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (35 retries left).Result was: { > "attempts": 26, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > 
"name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:26,404 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:26,770 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (34 retries left).Result was: { > "attempts": 27, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:31,775 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:32,160 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (33 retries left).Result was: { > "attempts": 28, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > 
"field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:37,164 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:37,520 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (32 retries left).Result was: { > "attempts": 29, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:42,520 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:42,897 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (31 retries left).Result was: { > "attempts": 30, > "changed": false, > "failed": 
true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:47,902 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:48,253 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (30 retries left).Result was: { > "attempts": 31, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:53,257 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:53,741 p=5860 u=root | FAILED - 
RETRYING: Wait for control plane pods to appear (29 retries left).Result was: { > "attempts": 32, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:10:58,742 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:10:59,095 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (28 retries left).Result was: { > "attempts": 33, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:04,100 p=5860 u=root | 
Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:04,498 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (27 retries left).Result was: { > "attempts": 34, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:09,503 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:09,859 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (26 retries left).Result was: { > "attempts": 35, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was 
refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:14,863 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:15,240 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (25 retries left).Result was: { > "attempts": 36, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:20,245 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:20,642 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (24 retries left).Result was: { > "attempts": 37, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > 
"results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:25,646 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:26,014 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (23 retries left).Result was: { > "attempts": 38, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:31,016 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:31,395 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (22 retries left).Result was: { > "attempts": 39, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > 
} > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:36,396 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:36,746 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (21 retries left).Result was: { > "attempts": 40, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:41,745 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:42,138 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (20 retries left).Result was: { > "attempts": 41, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > 
"name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:47,142 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:47,506 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (19 retries left).Result was: { > "attempts": 42, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:52,508 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:52,888 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (18 retries left).Result was: { > "attempts": 43, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > 
"field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:11:57,887 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:11:58,253 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (17 retries left).Result was: { > "attempts": 44, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:12:03,258 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:12:03,618 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (16 retries left).Result was: { > "attempts": 45, > "changed": false, > "failed": 
true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:12:08,618 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:12:08,954 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (15 retries left).Result was: { > "attempts": 46, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:12:13,959 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:12:14,321 p=5860 u=root | FAILED - 
RETRYING: Wait for control plane pods to appear (14 retries left).Result was: { > "attempts": 47, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:12:19,325 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:12:19,685 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (13 retries left).Result was: { > "attempts": 48, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:12:24,690 p=5860 u=root | 
Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:12:25,076 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (12 retries left).Result was: {
>    "attempts": 49,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:12:30,080 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:12:30,452 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (11 retries left).Result was: {
>    "attempts": 50,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:12:35,456 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:12:35,809 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (10 retries left).Result was: {
>    "attempts": 51,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:12:40,814 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:12:41,168 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (9 retries left).Result was: {
>    "attempts": 52,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:12:46,167 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:12:46,562 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (8 retries left).Result was: {
>    "attempts": 53,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:12:51,567 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:12:51,938 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (7 retries left).Result was: {
>    "attempts": 54,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:12:56,943 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:12:57,321 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (6 retries left).Result was: {
>    "attempts": 55,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:02,321 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:02,704 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (5 retries left).Result was: {
>    "attempts": 56,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:07,704 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:08,062 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (4 retries left).Result was: {
>    "attempts": 57,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:13,066 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:13,409 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (3 retries left).Result was: {
>    "attempts": 58,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:18,414 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:18,806 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (2 retries left).Result was: {
>    "attempts": 59,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:23,810 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:24,173 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (1 retries left).Result was: {
>    "attempts": 60,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:29,173 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:29,541 p=5860 u=root | The full traceback is:
>  File "/tmp/ansible_FIa_SJ/ansible_module_oc_obj.py", line 47, in <module>
>    import ruamel.yaml as yaml
>
>2018-06-12 17:13:29,542 p=5860 u=root | failed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] (item=etcd) => {
>    "attempts": 60,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-etcd-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "item": "etcd",
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-etcd-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    }
>}
>2018-06-12 17:13:29,558 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:29,939 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).Result was: {
>    "attempts": 1,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:34,944 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:35,290 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).Result was: {
>    "attempts": 2,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:40,290 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:40,658 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (58 retries left).Result was: {
>    "attempts": 3,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:45,658 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:45,980 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (57 retries left).Result was: {
>    "attempts": 4,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:50,984 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:51,362 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (56 retries left).Result was: {
>    "attempts": 5,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:13:56,367 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:13:56,733 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (55 retries left).Result was: {
>    "attempts": 6,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:01,733 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:02,126 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (54 retries left).Result was: {
>    "attempts": 7,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:07,131 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:07,505 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (53 retries left).Result was: {
>    "attempts": 8,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:12,510 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:12,858 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (52 retries left).Result was: {
>    "attempts": 9,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:17,863 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:18,248 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (51 retries left).Result was: {
>    "attempts": 10,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:23,250 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:23,614 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (50 retries left).Result was: {
>    "attempts": 11,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:28,616 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:28,992 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (49 retries left).Result was: {
>    "attempts": 12,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:33,997 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:34,335 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (48 retries left).Result was: {
>    "attempts": 13,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:39,339 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:39,683 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (47 retries left).Result was: {
>    "attempts": 14,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:44,686 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:45,093 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (46 retries left).Result was: {
>    "attempts": 15,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:50,093 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:50,477 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (45 retries left).Result was: {
>    "attempts": 16,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:14:55,482 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:14:55,835 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (44 retries left).Result was: {
>    "attempts": 17,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:00,839 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:01,226 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (43 retries left).Result was: {
>    "attempts": 18,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:06,226 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:06,603 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (42 retries left).Result was: {
>    "attempts": 19,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:11,608 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:11,934 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (41 retries left).Result was: {
>    "attempts": 20,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:16,934 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:17,327 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (40 retries left).Result was: {
>    "attempts": 21,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:22,328 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:22,672 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (39 retries left).Result was: {
>    "attempts": 22,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:27,676 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:28,030 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (38 retries left).Result was: {
>    "attempts": 23,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:33,034 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:33,377 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (37 retries left).Result was: {
>    "attempts": 24,
>    "changed": false,
>    "failed": true,
>    "invocation": {
>        "module_args": {
>            "all_namespaces": null,
>            "content": null,
>            "debug": false,
>            "delete_after": false,
>            "field_selector": null,
>            "files": null,
>            "force": false,
>            "kind": "pod",
>            "kubeconfig": "/etc/origin/master/admin.kubeconfig",
>            "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal",
>            "namespace": "kube-system",
>            "selector": null,
>            "state": "list"
>        }
>    },
>    "msg": {
>        "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
>        "results": [
>            {}
>        ],
>        "returncode": 1,
>        "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
>        "stdout": ""
>    },
>    "retries": 61
>}
>2018-06-12 17:15:38,377 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:15:38,757 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (36 retries left).Result was: {
>    "attempts": 25,
>    "changed": false,
>    "failed": true,
> "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:15:43,758 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:15:44,128 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (35 retries left).Result was: { > "attempts": 26, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:15:49,132 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:15:49,489 p=5860 u=root | FAILED - RETRYING: 
Wait for control plane pods to appear (34 retries left).Result was: { > "attempts": 27, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:15:54,489 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:15:54,849 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (33 retries left).Result was: { > "attempts": 28, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:15:59,848 p=5860 u=root | Using module 
file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:00,238 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (32 retries left).Result was: { > "attempts": 29, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:05,243 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:05,633 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (31 retries left).Result was: { > "attempts": 30, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you 
specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:10,637 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:11,017 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (30 retries left).Result was: { > "attempts": 31, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:16,017 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:16,391 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (29 retries left).Result was: { > "attempts": 32, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > 
"returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:21,395 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:21,761 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (28 retries left).Result was: { > "attempts": 33, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:26,766 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:27,108 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (27 retries left).Result was: { > "attempts": 34, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": 
"/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:32,111 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:32,488 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (26 retries left).Result was: { > "attempts": 35, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:37,493 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:37,857 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (25 retries left).Result was: { > "attempts": 36, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": 
"master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:42,862 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:43,265 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (24 retries left).Result was: { > "attempts": 37, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:48,269 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:48,660 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (23 retries left).Result was: { > "attempts": 38, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > 
"field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:53,664 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:54,049 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (22 retries left).Result was: { > "attempts": 39, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:16:59,054 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:16:59,398 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (21 retries left).Result was: { > "attempts": 40, > "changed": false, > "failed": true, 
> "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:04,398 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:04,770 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (20 retries left).Result was: { > "attempts": 41, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:09,775 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:10,153 p=5860 u=root | FAILED - RETRYING: 
Wait for control plane pods to appear (19 retries left).Result was: { > "attempts": 42, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:15,157 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:15,526 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (18 retries left).Result was: { > "attempts": 43, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:20,528 p=5860 u=root | Using module 
file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:20,872 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (17 retries left).Result was: { > "attempts": 44, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:25,877 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:26,237 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (16 retries left).Result was: { > "attempts": 45, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you 
specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:31,241 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:31,631 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (15 retries left).Result was: { > "attempts": 46, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:36,635 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:36,992 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (14 retries left).Result was: { > "attempts": 47, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > 
"returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:41,993 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:42,361 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (13 retries left).Result was: { > "attempts": 48, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:47,365 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:47,751 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (12 retries left).Result was: { > "attempts": 49, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": 
"/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:52,755 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:53,125 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (11 retries left).Result was: { > "attempts": 50, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:17:58,126 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:17:58,473 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (10 retries left).Result was: { > "attempts": 51, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": 
"master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:03,476 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:03,840 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (9 retries left).Result was: { > "attempts": 52, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:08,845 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:09,197 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (8 retries left).Result was: { > "attempts": 53, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > 
"field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:14,202 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:14,592 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (7 retries left).Result was: { > "attempts": 54, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:19,597 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:19,981 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (6 retries left).Result was: { > "attempts": 55, > "changed": false, > "failed": true, > 
"invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:24,985 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:25,365 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (5 retries left).Result was: { > "attempts": 56, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:30,369 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:30,743 p=5860 u=root | FAILED - RETRYING: 
Wait for control plane pods to appear (4 retries left).Result was: { > "attempts": 57, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:35,743 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:36,117 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (3 retries left).Result was: { > "attempts": 58, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:41,116 p=5860 u=root | Using module 
file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:41,502 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (2 retries left).Result was: { > "attempts": 59, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:46,502 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:46,855 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (1 retries left).Result was: { > "attempts": 60, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you 
specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:51,855 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:52,230 p=5860 u=root | The full traceback is: > File "/tmp/ansible_rgqHq6/ansible_module_oc_obj.py", line 47, in <module> > import ruamel.yaml as yaml > >2018-06-12 17:18:52,231 p=5860 u=root | failed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] (item=api) => { > "attempts": 60, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-api-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "item": "api", > "msg": { > "cmd": "/usr/bin/oc get pod master-api-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > } >} >2018-06-12 17:18:52,245 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:52,616 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).Result was: { > "attempts": 1, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": 
"list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:18:57,621 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:18:57,993 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).Result was: { > "attempts": 2, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:02,997 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:03,409 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (58 retries left).Result was: { > "attempts": 3, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": 
"/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:08,410 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:08,787 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (57 retries left).Result was: { > "attempts": 4, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:13,792 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:14,167 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (56 retries left).Result was: { > "attempts": 5, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, 
> "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:19,167 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:19,537 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (55 retries left).Result was: { > "attempts": 6, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:24,537 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:24,963 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to 
appear (54 retries left).Result was: { > "attempts": 7, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:29,963 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:30,350 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (53 retries left).Result was: { > "attempts": 8, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:35,355 p=5860 u=root | Using module 
file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:35,752 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (52 retries left).Result was: { > "attempts": 9, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:40,757 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:41,108 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (51 retries left).Result was: { > "attempts": 10, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server 
ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:46,112 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:46,486 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (50 retries left).Result was: { > "attempts": 11, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:51,488 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:51,859 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (49 retries left).Result was: { > "attempts": 12, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod 
master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:19:56,861 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:19:57,236 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (48 retries left).Result was: { > "attempts": 13, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:02,236 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:02,610 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (47 retries left).Result was: { > "attempts": 14, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": 
"master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:07,614 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:07,963 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (46 retries left).Result was: { > "attempts": 15, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:12,965 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:13,338 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (45 retries left).Result was: { > "attempts": 16, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > 
"delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:18,341 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:18,698 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (44 retries left).Result was: { > "attempts": 17, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:23,702 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:24,069 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (43 retries left).Result was: { 
> "attempts": 18, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:29,074 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:29,403 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (42 retries left).Result was: { > "attempts": 19, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:34,407 p=5860 u=root | Using module file 
/root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:34,764 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (41 retries left).Result was: { > "attempts": 20, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:20:39,769 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:20:40,136 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (40 retries left).Result was: { > "attempts": 21, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server 
ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:20:45,136 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:20:45,501 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (39 retries left).Result was: {
> "attempts": 22,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:20:50,506 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:20:50,873 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (38 retries left).Result was: {
> "attempts": 23,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:20:55,878 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:20:56,232 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (37 retries left).Result was: {
> "attempts": 24,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:01,232 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:01,626 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (36 retries left).Result was: {
> "attempts": 25,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:06,628 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:06,981 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (35 retries left).Result was: {
> "attempts": 26,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:11,985 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:12,379 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (34 retries left).Result was: {
> "attempts": 27,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:17,384 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:17,743 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (33 retries left).Result was: {
> "attempts": 28,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:22,746 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:23,124 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (32 retries left).Result was: {
> "attempts": 29,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:28,129 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:28,485 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (31 retries left).Result was: {
> "attempts": 30,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:33,489 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:33,864 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (30 retries left).Result was: {
> "attempts": 31,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:38,864 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:39,214 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (29 retries left).Result was: {
> "attempts": 32,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:44,216 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:44,557 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (28 retries left).Result was: {
> "attempts": 33,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:49,562 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:49,925 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (27 retries left).Result was: {
> "attempts": 34,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:21:54,926 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:21:55,298 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (26 retries left).Result was: {
> "attempts": 35,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:00,303 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:00,674 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (25 retries left).Result was: {
> "attempts": 36,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:05,675 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:06,005 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (24 retries left).Result was: {
> "attempts": 37,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:11,005 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:11,373 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (23 retries left).Result was: {
> "attempts": 38,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:16,378 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:16,759 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (22 retries left).Result was: {
> "attempts": 39,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:21,759 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:22,141 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (21 retries left).Result was: {
> "attempts": 40,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:27,145 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:27,537 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (20 retries left).Result was: {
> "attempts": 41,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:32,541 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:32,884 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (19 retries left).Result was: {
> "attempts": 42,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:37,888 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:38,245 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (18 retries left).Result was: {
> "attempts": 43,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:43,249 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:43,635 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (17 retries left).Result was: {
> "attempts": 44,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:48,639 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:49,030 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (16 retries left).Result was: {
> "attempts": 45,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:54,034 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:54,365 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (15 retries left).Result was: {
> "attempts": 46,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:22:59,370 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:22:59,741 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (14 retries left).Result was: {
> "attempts": 47,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:04,740 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:05,114 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (13 retries left).Result was: {
> "attempts": 48,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:10,117 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:10,483 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (12 retries left).Result was: {
> "attempts": 49,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:15,487 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:15,855 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (11 retries left).Result was: {
> "attempts": 50,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:20,860 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:21,206 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (10 retries left).Result was: {
> "attempts": 51,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:26,210 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:26,551 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (9 retries left).Result was: {
> "attempts": 52,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:31,556 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:31,924 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (8 retries left).Result was: {
> "attempts": 53,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:36,928 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:37,258 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (7 retries left).Result was: {
> "attempts": 54,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:42,258 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:42,633 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (6 retries left).Result was: {
> "attempts": 55,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:47,638 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:48,030 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (5 retries left).Result was: {
> "attempts": 56,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:53,034 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:53,366 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (4 retries left).Result was: {
> "attempts": 57,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:23:58,365 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:23:58,707 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (3 retries left).Result was: {
> "attempts": 58,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:24:03,711 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:24:04,177 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (2 retries left).Result was: {
> "attempts": 59,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
> "kind": "pod",
> "kubeconfig": "/etc/origin/master/admin.kubeconfig",
> "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal",
> "namespace": "kube-system",
> "selector": null,
> "state": "list"
> }
> },
> "msg": {
> "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system",
> "results": [
> {}
> ],
> "returncode": 1,
> "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n",
> "stdout": ""
> },
> "retries": 61
>}
>2018-06-12 17:24:09,181 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py
>2018-06-12 17:24:09,541 p=5860 u=root | FAILED - RETRYING: Wait for control plane pods to appear (1 retries left).Result was: {
> "attempts": 60,
> "changed": false,
> "failed": true,
> "invocation": {
> "module_args": {
> "all_namespaces": null,
> "content": null,
> "debug": false,
> "delete_after": false,
> "field_selector": null,
> "files": null,
> "force": false,
"kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > }, > "retries": 61 >} >2018-06-12 17:24:14,544 p=5860 u=root | Using module file /root/openshift-ansible/roles/lib_openshift/library/oc_obj.py >2018-06-12 17:24:14,944 p=5860 u=root | The full traceback is: > File "/tmp/ansible_cHjpaL/ansible_module_oc_obj.py", line 47, in <module> > import ruamel.yaml as yaml > >2018-06-12 17:24:14,944 p=5860 u=root | failed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] (item=controllers) => { > "attempts": 60, > "changed": false, > "failed": true, > "invocation": { > "module_args": { > "all_namespaces": null, > "content": null, > "debug": false, > "delete_after": false, > "field_selector": null, > "files": null, > "force": false, > "kind": "pod", > "kubeconfig": "/etc/origin/master/admin.kubeconfig", > "name": "master-controllers-ip-172-31-50-118.us-west-2.compute.internal", > "namespace": "kube-system", > "selector": null, > "state": "list" > } > }, > "item": "controllers", > "msg": { > "cmd": "/usr/bin/oc get pod master-controllers-ip-172-31-50-118.us-west-2.compute.internal -o json -n kube-system", > "results": [ > {} > ], > "returncode": 1, > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\n", > "stdout": "" > } >} >2018-06-12 17:24:14,946 p=5860 u=root | ...ignoring >2018-06-12 17:24:14,956 p=5860 u=root | TASK [openshift_control_plane : Check status in the kube-system 
namespace] ****************************************************************************************************************************************************************** >2018-06-12 17:24:14,956 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:211 >2018-06-12 17:24:14,989 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:24:15,297 p=5860 u=root | fatal: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com]: FAILED! => { > "changed": true, > "cmd": [ > "oc", > "status", > "--config=/etc/origin/master/admin.kubeconfig", > "-n", > "kube-system" > ], > "delta": "0:00:00.105900", > "end": "2018-06-12 17:24:15.280412", > "failed": true, > "invocation": { > "module_args": { > "_raw_params": "oc status --config=/etc/origin/master/admin.kubeconfig -n kube-system", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "msg": "non-zero return code", > "rc": 1, > "start": "2018-06-12 17:24:15.174512", > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 
was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?\nThe connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "stderr_lines": [ > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server 
ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?" 
> ], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:24:15,298 p=5860 u=root | ...ignoring >2018-06-12 17:24:15,307 p=5860 u=root | TASK [openshift_control_plane : debug] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:24:15,307 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:216 >2018-06-12 17:24:15,342 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "msg": [] >} >2018-06-12 17:24:15,350 p=5860 u=root | TASK [openshift_control_plane : Get pods in the kube-system namespace] ********************************************************************************************************************************************************************** >2018-06-12 17:24:15,350 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:218 >2018-06-12 17:24:15,382 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:24:15,713 p=5860 u=root | fatal: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com]: FAILED! 
=> { > "changed": true, > "cmd": [ > "oc", > "get", > "pods", > "--config=/etc/origin/master/admin.kubeconfig", > "-n", > "kube-system", > "-o", > "wide" > ], > "delta": "0:00:00.126694", > "end": "2018-06-12 17:24:15.696761", > "failed": true, > "invocation": { > "module_args": { > "_raw_params": "oc get pods --config=/etc/origin/master/admin.kubeconfig -n kube-system -o wide", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "msg": "non-zero return code", > "rc": 1, > "start": "2018-06-12 17:24:15.570067", > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "stderr_lines": [ > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?" > ], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:24:15,713 p=5860 u=root | ...ignoring >2018-06-12 17:24:15,722 p=5860 u=root | TASK [openshift_control_plane : debug] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:24:15,723 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:223 >2018-06-12 17:24:15,756 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "msg": [] >} >2018-06-12 17:24:15,765 p=5860 u=root | TASK [openshift_control_plane : Get events in the kube-system namespace] ******************************************************************************************************************************************************************** >2018-06-12 17:24:15,765 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:225 >2018-06-12 17:24:15,799 p=5860 u=root | Using module file 
/usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:24:16,137 p=5860 u=root | fatal: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com]: FAILED! => { > "changed": true, > "cmd": [ > "oc", > "get", > "events", > "--config=/etc/origin/master/admin.kubeconfig", > "-n", > "kube-system" > ], > "delta": "0:00:00.134605", > "end": "2018-06-12 17:24:16.121832", > "failed": true, > "invocation": { > "module_args": { > "_raw_params": "oc get events --config=/etc/origin/master/admin.kubeconfig -n kube-system", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "msg": "non-zero return code", > "rc": 1, > "start": "2018-06-12 17:24:15.987227", > "stderr": "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?", > "stderr_lines": [ > "The connection to the server ip-172-31-50-118.us-west-2.compute.internal:8443 was refused - did you specify the right host or port?" 
> ], > "stdout": "", > "stdout_lines": [] >} >2018-06-12 17:24:16,138 p=5860 u=root | ...ignoring >2018-06-12 17:24:16,147 p=5860 u=root | TASK [openshift_control_plane : debug] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:24:16,147 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:230 >2018-06-12 17:24:16,180 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "msg": [] >} >2018-06-12 17:24:16,188 p=5860 u=root | TASK [openshift_control_plane : Get node logs] ********************************************************************************************************************************************************************************************** >2018-06-12 17:24:16,188 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:232 >2018-06-12 17:24:16,220 p=5860 u=root | Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py >2018-06-12 17:24:16,557 p=5860 u=root | changed: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "changed": true, > "cmd": [ > "journalctl", > "--no-pager", > "-n", > "300", > "-u", > "atomic-openshift-node" > ], > "delta": "0:00:00.023517", > "end": "2018-06-12 17:24:16.426057", > "failed": false, > "invocation": { > "module_args": { > "_raw_params": "journalctl --no-pager -n 300 -u atomic-openshift-node", > "_uses_shell": false, > "chdir": null, > "creates": null, > "executable": null, > "removes": null, > "stdin": null, > "warn": true > } > }, > "rc": 0, > "start": "2018-06-12 17:24:16.402540", > "stderr": "", > "stderr_lines": [], > "stdout": "-- Logs begin at Tue 2018-06-12 06:31:10 UTC, end at Tue 2018-06-12 17:24:16 UTC. 
--\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018979 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018986 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018992 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025888 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025918 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025925 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025932 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.195342 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.326271 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.339606 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:47.391307 21156 container.go:507] Failed to update stats for container \"/libcontainer_23480_systemd_test_default.slice\": failed to parse memory.kmem.limit_in_bytes - read /sys/fs/cgroup/memory/libcontainer_23480_systemd_test_default.slice/memory.kmem.limit_in_bytes: no such device, continuing to push stats\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.497940 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: E0612 17:23:47.497979 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.497993 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.498043 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"CreatePodSandbox\" for \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from 
daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"\nJun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.529573 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033385 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033406 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033412 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033417 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038157 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038180 21156 kubelet_node_status.go:448] 
Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038191 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038201 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:48.051092 21156 status_manager.go:461] Failed to get status for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.196728 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.338191 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.338634 21156 kuberuntime_manager.go:403] No ready sandbox for pod 
\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" can be found. Need to start a new one\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.355974 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491896 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491927 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491937 21156 kuberuntime_manager.go:646] createPodSandbox for pod 
\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491988 21156 pod_workers.go:186] Error syncing pod 470e9f0cfe88912707039722e46eb507 (\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\"), skipping: failed to \"CreatePodSandbox\" for \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.537455 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: W0612 17:23:48.584637 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d\nJun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.584759 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized\nJun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.198291 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.357409 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.552761 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.199997 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.358762 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.564072 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.209140 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.374649 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.572262 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.215864 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.380286 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.589746 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.096650 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)\nJun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.217611 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.400469 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:53.586030 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d\nJun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.586155 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni 
config uninitialized\nJun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.608455 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.232269 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.419283 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.616766 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:55.235563 21156 
reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:55.429965 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:55.630545 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.249007 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.443394 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.636076 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.013034 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015754 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015779 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015786 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015793 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015807 21156 kubelet_node_status.go:82] Attempting to register node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.018850 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.019276 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021220 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021242 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021252 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021262 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.021294 21156 pod_container_deletor.go:77] Container \"9ea9e50f8912f839cd0c60e36604346ee02db4dce6af954244ed6609ada1f62a\" not found in pod's containers\nJun 12 17:23:57 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021336 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021334 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\", event: &pleg.PodLifecycleEvent{ID:\"470e9f0cfe88912707039722e46eb507\", Type:\"ContainerDied\", Data:\"f0e8ad0ba290d63663c96d7e30ded575e4fedb551bffde581d588443135ff741\"}\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021354 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021362 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021369 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021392 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021405 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021412 21156 kubelet_node_status.go:361] Adding node 
label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021419 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021511 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021522 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021530 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021537 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.023327 21156 kubelet_node_status.go:106] Unable to register node \"ip-172-31-50-118.us-west-2.compute.internal\" with API server: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026472 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026494 21156 kubelet_node_status.go:448] 
Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026505 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026512 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.042806 21156 status_manager.go:461] Failed to get status for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-etcd-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.263265 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.321768 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. 
Need to start a new one\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.326817 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.443887 21156 container.go:507] Failed to update stats for container \"/libcontainer_23748_systemd_test_default.slice\": open /sys/fs/cgroup/cpu,cpuacct/libcontainer_23748_systemd_test_default.slice/cpuacct.stat: no such file or directory, continuing to push stats\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.458554 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504808 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504868 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504882 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504949 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"CreatePodSandbox\" for \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting 
container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.519462 21156 container.go:507] Failed to update stats for container \"/libcontainer_23771_systemd_test_default.slice\": read /sys/fs/cgroup/cpu,cpuacct/libcontainer_23771_systemd_test_default.slice/cpuacct.usage: no such device, continuing to push stats\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525378 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525414 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525429 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = 
failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525487 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"CreatePodSandbox\" for \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"\nJun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.653537 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105603 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller 
attach/detach\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105628 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105635 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105641 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110206 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110233 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110243 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110254 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.110277 21156 pod_container_deletor.go:77] Container \"f0e8ad0ba290d63663c96d7e30ded575e4fedb551bffde581d588443135ff741\" not found in pod's containers\nJun 12 17:23:58 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110295 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\", event: &pleg.PodLifecycleEvent{ID:\"a39276703b0f3dfabe149ef43c57d6ea\", Type:\"ContainerDied\", Data:\"27daa278953e954d9f601adf852c687c5e4c7e471f98f073508733054bec8334\"}\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110347 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110358 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110367 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110373 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110438 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110456 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110462 21156 kubelet_node_status.go:361] Adding node label from cloud provider: 
failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110466 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114721 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114745 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114754 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114761 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.136018 21156 status_manager.go:461] Failed to get status for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.287633 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list 
*v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.338327 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.415182 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" can be found. Need to start a new one\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.471557 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.587438 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.587570 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592801 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox 
container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592874 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592889 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592944 21156 pod_workers.go:186] Error syncing pod 470e9f0cfe88912707039722e46eb507 (\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\"), skipping: failed to \"CreatePodSandbox\" for 
\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"\nJun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.663583 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.301482 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.494494 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: 
getsockopt: connection refused\nJun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.686294 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.305572 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.514187 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.698598 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 
17:24:01.313070 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.515505 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.719814 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.361241 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.516911 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.721142 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.118101 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)\nJun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.377193 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.534349 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: 
connection refused\nJun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:03.588855 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d\nJun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.588985 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized\nJun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.731697 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023473 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023504 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023513 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023519 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 
17:24:04.029093 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029122 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029131 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029139 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:04.030494 21156 status_manager.go:461] Failed to get status for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.329529 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. 
Need to start a new one\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.393999 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485906 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485950 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485966 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.486016 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"CreatePodSandbox\" for \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.545993 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.547730 21156 certificate_manager.go:287] Rotating certificates\nJun 12 17:24:04 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.562264 21156 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.744585 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.406543 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.555678 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.746085 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.423411 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.571903 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.747470 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.439867 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.587813 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.749407 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.110481 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.110573 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113258 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113281 21156 
kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113291 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113300 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113313 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.113322 21156 pod_container_deletor.go:77] Container \"27daa278953e954d9f601adf852c687c5e4c7e471f98f073508733054bec8334\" not found in pod's containers\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113334 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113345 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113342 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\", event: &pleg.PodLifecycleEvent{ID:\"61801a9363130db3b3a59da18389cb26\", Type:\"ContainerDied\", 
Data:\"5b3db299b2ce21041a48f76ae1fe82062a5b7329e8e432238ec5c7a800d17929\"}\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113360 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113393 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113402 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113408 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113413 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113484 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113493 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113500 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:24:08.113506 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.116910 21156 status_manager.go:461] Failed to get status for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-etcd-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118498 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118511 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118517 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118523 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118533 21156 kubelet_node_status.go:82] Attempting to register node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.123268 21156 
kubelet_node_status.go:106] Unable to register node \"ip-172-31-50-118.us-west-2.compute.internal\" with API server: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.338472 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.413736 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.417337 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\" could not be found. 
Proceed without further sandbox information.\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.418261 21156 remote_runtime.go:115] StopPodSandbox \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.418301 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\"}\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.419029 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\" could not be found. Proceed without further sandbox information.\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.419900 21156 remote_runtime.go:115] StopPodSandbox \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.419928 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\"}\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.420658 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\" could not be found. 
Proceed without further sandbox information.\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.421547 21156 remote_runtime.go:115] StopPodSandbox \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.421574 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\"}\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.422261 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\" could not be found. Proceed without further sandbox information.\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423117 21156 remote_runtime.go:115] StopPodSandbox \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423143 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\"}\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423173 21156 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to \"KillPodSandbox\" for \"a39276703b0f3dfabe149ef43c57d6ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config 
uninitialized\"\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423188 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"KillPodSandbox\" for \"a39276703b0f3dfabe149ef43c57d6ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.441238 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.590252 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.590401 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.608342 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: E0612 17:24:08.756723 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.456882 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.625918 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.769189 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.469734 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.634599 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.770502 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.477832 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.642599 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.795377 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.491314 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.646587 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.811152 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.134453 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)\nJun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.518399 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:13.591684 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d\nJun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.591832 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized\nJun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.664113 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: 
getsockopt: connection refused\nJun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.822923 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.529451 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.680437 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.827781 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 
17:24:15.123413 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123448 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123457 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123464 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129201 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129228 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129239 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129248 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.129273 21156 pod_container_deletor.go:77] Container 
\"5b3db299b2ce21041a48f76ae1fe82062a5b7329e8e432238ec5c7a800d17929\" not found in pod's containers\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129292 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\", event: &pleg.PodLifecycleEvent{ID:\"470e9f0cfe88912707039722e46eb507\", Type:\"ContainerDied\", Data:\"43f8209d57035d2d49e8dc5509d44f2226d8bf7e5a88833bbb3fbf26e8e3a31e\"}\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129351 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129362 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129370 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129377 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129580 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129592 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129599 
21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129606 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134393 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134426 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134441 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134451 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.148287 21156 status_manager.go:461] Failed to get status for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.434898 21156 kuberuntime_manager.go:403] No 
ready sandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. Need to start a new one\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.439209 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\" could not be found. Proceed without further sandbox information.\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.440308 21156 remote_runtime.go:115] StopPodSandbox \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.440336 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\"}\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.441328 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\" could not be found. 
Proceed without further sandbox information.\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.442531 21156 remote_runtime.go:115] StopPodSandbox \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.442548 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\"}\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.443604 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\" could not be found. Proceed without further sandbox information.\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.444682 21156 remote_runtime.go:115] StopPodSandbox \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.444703 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\"}\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.445659 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\" could not be found. 
Proceed without further sandbox information.\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446726 21156 remote_runtime.go:115] StopPodSandbox \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446748 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\"}\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446781 21156 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to \"KillPodSandbox\" for \"61801a9363130db3b3a59da18389cb26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446800 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"KillPodSandbox\" for \"61801a9363130db3b3a59da18389cb26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.537099 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection 
refused\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.700107 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused\nJun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.845300 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "stdout_lines": [ > "-- Logs begin at Tue 2018-06-12 06:31:10 UTC, end at Tue 2018-06-12 17:24:16 UTC. 
--", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018979 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018986 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018992 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025888 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025918 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025925 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025932 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.195342 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.326271 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.339606 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:47.391307 21156 container.go:507] Failed to update stats for container \"/libcontainer_23480_systemd_test_default.slice\": failed to parse memory.kmem.limit_in_bytes - read /sys/fs/cgroup/memory/libcontainer_23480_systemd_test_default.slice/memory.kmem.limit_in_bytes: no such device, continuing to push stats", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.497940 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: E0612 17:23:47.497979 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.497993 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.498043 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"CreatePodSandbox\" for \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response 
from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.529573 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033385 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033406 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033412 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033417 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038157 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038180 
21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038191 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038201 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:48.051092 21156 status_manager.go:461] Failed to get status for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.196728 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.338191 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.338634 21156 kuberuntime_manager.go:403] No ready sandbox 
for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" can be found. Need to start a new one", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.355974 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491896 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491927 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491937 21156 kuberuntime_manager.go:646] createPodSandbox for pod 
\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491988 21156 pod_workers.go:186] Error syncing pod 470e9f0cfe88912707039722e46eb507 (\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\"), skipping: failed to \"CreatePodSandbox\" for \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.537455 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:48.584637 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.584759 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.198291 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.357409 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.552761 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.199997 21156 
reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.358762 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.564072 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.209140 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.374649 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.572262 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.215864 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.380286 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.589746 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.096650 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.217611 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.400469 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:53.586030 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.586155 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network 
plugin is not ready: cni config uninitialized", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.608455 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.232269 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.419283 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.616766 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: E0612 17:23:55.235563 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:55.429965 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:55.630545 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.249007 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.443394 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.636076 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.013034 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015754 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015779 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015786 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015793 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015807 21156 kubelet_node_status.go:82] Attempting to register node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.018850 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.019276 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021220 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021242 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021252 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021262 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.021294 21156 pod_container_deletor.go:77] Container \"9ea9e50f8912f839cd0c60e36604346ee02db4dce6af954244ed6609ada1f62a\" not found in pod's containers", > "Jun 12 17:23:57 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021336 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021334 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\", event: &pleg.PodLifecycleEvent{ID:\"470e9f0cfe88912707039722e46eb507\", Type:\"ContainerDied\", Data:\"f0e8ad0ba290d63663c96d7e30ded575e4fedb551bffde581d588443135ff741\"}", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021354 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021362 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021369 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021392 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021405 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021412 21156 
kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021419 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021511 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021522 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021530 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021537 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.023327 21156 kubelet_node_status.go:106] Unable to register node \"ip-172-31-50-118.us-west-2.compute.internal\" with API server: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026472 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:23:57.026494 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026505 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026512 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.042806 21156 status_manager.go:461] Failed to get status for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-etcd-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.263265 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.321768 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. 
Need to start a new one", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.326817 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.443887 21156 container.go:507] Failed to update stats for container \"/libcontainer_23748_systemd_test_default.slice\": open /sys/fs/cgroup/cpu,cpuacct/libcontainer_23748_systemd_test_default.slice/cpuacct.stat: no such file or directory, continuing to push stats", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.458554 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504808 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504868 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504882 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504949 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"CreatePodSandbox\" for \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: 
starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.519462 21156 container.go:507] Failed to update stats for container \"/libcontainer_23771_systemd_test_default.slice\": read /sys/fs/cgroup/cpu,cpuacct/libcontainer_23771_systemd_test_default.slice/cpuacct.usage: no such device, continuing to push stats", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525378 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525414 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525429 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: 
code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525487 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"CreatePodSandbox\" for \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.653537 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105603 21156 kubelet_node_status.go:294] Setting node annotation to 
enable volume controller attach/detach", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105628 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105635 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105641 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110206 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110233 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110243 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110254 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.110277 21156 pod_container_deletor.go:77] Container \"f0e8ad0ba290d63663c96d7e30ded575e4fedb551bffde581d588443135ff741\" not found in 
pod's containers", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110295 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\", event: &pleg.PodLifecycleEvent{ID:\"a39276703b0f3dfabe149ef43c57d6ea\", Type:\"ContainerDied\", Data:\"27daa278953e954d9f601adf852c687c5e4c7e471f98f073508733054bec8334\"}", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110347 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110358 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110367 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110373 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110438 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110456 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110462 21156 kubelet_node_status.go:361] Adding node 
label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110466 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114721 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114745 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114754 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114761 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.136018 21156 status_manager.go:461] Failed to get status for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.287633 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.338327 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.415182 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" can be found. Need to start a new one", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.471557 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.587438 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.587570 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592801 21156 
remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592874 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592889 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592944 21156 pod_workers.go:186] Error syncing pod 470e9f0cfe88912707039722e46eb507 (\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\"), 
skipping: failed to \"CreatePodSandbox\" for \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.663583 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.301482 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.494494 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.686294 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.305572 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.514187 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.698598 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.313070 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.515505 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.719814 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.361241 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 
172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.516911 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.721142 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.118101 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.377193 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.534349 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:03.588855 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.588985 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.731697 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023473 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023504 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023513 21156 kubelet_node_status.go:361] Adding node label from cloud provider: 
failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023519 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029093 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029122 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029131 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029139 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:04.030494 21156 status_manager.go:461] Failed to get status for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.329529 21156 kuberuntime_manager.go:403] No ready sandbox for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. Need to start a new one", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.393999 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485906 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485950 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485966 21156 kuberuntime_manager.go:646] createPodSandbox for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.486016 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"CreatePodSandbox\" for \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.545993 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 
12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.547730 21156 certificate_manager.go:287] Rotating certificates", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.562264 21156 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.744585 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.406543 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.555678 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > 
"Jun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.746085 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.423411 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.571903 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.747470 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.439867 21156 
reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.587813 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.749407 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.110481 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.110573 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113258 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node 
ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113281 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113291 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113300 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113313 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.113322 21156 pod_container_deletor.go:77] Container \"27daa278953e954d9f601adf852c687c5e4c7e471f98f073508733054bec8334\" not found in pod's containers", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113334 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113345 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113342 21156 kubelet.go:1923] SyncLoop (PLEG): 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\", event: &pleg.PodLifecycleEvent{ID:\"61801a9363130db3b3a59da18389cb26\", Type:\"ContainerDied\", Data:\"5b3db299b2ce21041a48f76ae1fe82062a5b7329e8e432238ec5c7a800d17929\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113360 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113393 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113402 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113408 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113413 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113484 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113493 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:24:08.113500 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113506 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.116910 21156 status_manager.go:461] Failed to get status for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-etcd-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118498 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118511 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118517 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118523 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:24:08.118533 21156 kubelet_node_status.go:82] Attempting to register node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.123268 21156 kubelet_node_status.go:106] Unable to register node \"ip-172-31-50-118.us-west-2.compute.internal\" with API server: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.338472 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.413736 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.417337 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.418261 21156 remote_runtime.go:115] StopPodSandbox \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.418301 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.419029 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.419900 21156 remote_runtime.go:115] StopPodSandbox \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.419928 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.420658 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.421547 21156 remote_runtime.go:115] StopPodSandbox \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.421574 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.422261 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423117 21156 remote_runtime.go:115] StopPodSandbox \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423143 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423173 21156 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to \"KillPodSandbox\" for \"a39276703b0f3dfabe149ef43c57d6ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: 
cni config uninitialized\"", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423188 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"KillPodSandbox\" for \"a39276703b0f3dfabe149ef43c57d6ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.441238 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.590252 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.590401 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.608342 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.756723 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.456882 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.625918 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.769189 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.469734 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.634599 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.770502 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.477832 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.642599 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.795377 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.491314 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.646587 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.811152 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.134453 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.518399 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:13.591684 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.591832 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.664113 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 
172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.822923 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.529451 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.680437 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.827781 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:24:15.123413 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123448 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123457 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123464 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129201 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129228 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129239 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129248 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.129273 21156 
pod_container_deletor.go:77] Container \"5b3db299b2ce21041a48f76ae1fe82062a5b7329e8e432238ec5c7a800d17929\" not found in pod's containers", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129292 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\", event: &pleg.PodLifecycleEvent{ID:\"470e9f0cfe88912707039722e46eb507\", Type:\"ContainerDied\", Data:\"43f8209d57035d2d49e8dc5509d44f2226d8bf7e5a88833bbb3fbf26e8e3a31e\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129351 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129362 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129370 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129377 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129580 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129592 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:15 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129599 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129606 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134393 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134426 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134441 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134451 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.148287 21156 status_manager.go:461] Failed to get status for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:15 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.434898 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. Need to start a new one", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.439209 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.440308 21156 remote_runtime.go:115] StopPodSandbox \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.440336 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.441328 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.442531 21156 remote_runtime.go:115] StopPodSandbox \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.442548 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.443604 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.444682 21156 remote_runtime.go:115] StopPodSandbox \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.444703 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.445659 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446726 21156 remote_runtime.go:115] StopPodSandbox \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446748 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446781 21156 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to \"KillPodSandbox\" for \"61801a9363130db3b3a59da18389cb26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446800 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"KillPodSandbox\" for \"61801a9363130db3b3a59da18389cb26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.537099 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: 
getsockopt: connection refused", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.700107 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.845300 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused" > ] >} >2018-06-12 17:24:16,577 p=5860 u=root | TASK [openshift_control_plane : debug] ****************************************************************************************************************************************************************************************************** >2018-06-12 17:24:16,577 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:236 >2018-06-12 17:24:16,654 p=5860 u=root | ok: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com] => { > "msg": [ > "-- Logs begin at Tue 2018-06-12 06:31:10 UTC, end at Tue 2018-06-12 17:24:16 UTC. 
--", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018979 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018986 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.018992 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025888 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025918 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025925 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.025932 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.195342 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:47.326271 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.339606 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:47.391307 21156 container.go:507] Failed to update stats for container \"/libcontainer_23480_systemd_test_default.slice\": failed to parse memory.kmem.limit_in_bytes - read /sys/fs/cgroup/memory/libcontainer_23480_systemd_test_default.slice/memory.kmem.limit_in_bytes: no such device, continuing to push stats", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.497940 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: E0612 17:23:47.497979 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.497993 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.498043 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"CreatePodSandbox\" for \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response 
from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:47 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:47.529573 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033385 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033406 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033412 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.033417 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038157 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038180 
21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038191 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.038201 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:48.051092 21156 status_manager.go:461] Failed to get status for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.196728 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.338191 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:48.338634 21156 kuberuntime_manager.go:403] No ready sandbox 
for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" can be found. Need to start a new one", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.355974 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491896 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491927 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491937 21156 kuberuntime_manager.go:646] createPodSandbox for pod 
\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.491988 21156 pod_workers.go:186] Error syncing pod 470e9f0cfe88912707039722e46eb507 (\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\"), skipping: failed to \"CreatePodSandbox\" for \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.537455 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:48 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:48.584637 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:23:48 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:48.584759 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.198291 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.357409 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:49 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:49.552761 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.199997 21156 
reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.358762 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:50 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:50.564072 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.209140 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.374649 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:51 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:51.572262 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.215864 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.380286 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:52 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:52.589746 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.096650 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.217611 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.400469 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:53.586030 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.586155 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network 
plugin is not ready: cni config uninitialized", > "Jun 12 17:23:53 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:53.608455 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.232269 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.419283 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:54 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:54.616766 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: E0612 17:23:55.235563 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:55.429965 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:55 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:55.630545 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.249007 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.443394 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:56 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:56.636076 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.013034 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015754 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015779 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015786 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015793 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.015807 21156 kubelet_node_status.go:82] Attempting to register node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.018850 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.019276 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021220 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021242 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021252 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021262 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.021294 21156 pod_container_deletor.go:77] Container \"9ea9e50f8912f839cd0c60e36604346ee02db4dce6af954244ed6609ada1f62a\" not found in pod's containers", > "Jun 12 17:23:57 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021336 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021334 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\", event: &pleg.PodLifecycleEvent{ID:\"470e9f0cfe88912707039722e46eb507\", Type:\"ContainerDied\", Data:\"f0e8ad0ba290d63663c96d7e30ded575e4fedb551bffde581d588443135ff741\"}", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021354 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021362 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021369 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021392 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021405 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021412 21156 
kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021419 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021511 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021522 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021530 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.021537 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.023327 21156 kubelet_node_status.go:106] Unable to register node \"ip-172-31-50-118.us-west-2.compute.internal\" with API server: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026472 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:23:57.026494 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026505 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.026512 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.042806 21156 status_manager.go:461] Failed to get status for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-etcd-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.263265 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.321768 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. 
Need to start a new one", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:57.326817 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.443887 21156 container.go:507] Failed to update stats for container \"/libcontainer_23748_systemd_test_default.slice\": open /sys/fs/cgroup/cpu,cpuacct/libcontainer_23748_systemd_test_default.slice/cpuacct.stat: no such file or directory, continuing to push stats", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.458554 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504808 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504868 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504882 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.504949 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"CreatePodSandbox\" for \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: 
starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:57.519462 21156 container.go:507] Failed to update stats for container \"/libcontainer_23771_systemd_test_default.slice\": read /sys/fs/cgroup/cpu,cpuacct/libcontainer_23771_systemd_test_default.slice/cpuacct.usage: no such device, continuing to push stats", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525378 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525414 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525429 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" failed: rpc error: 
code = Unknown desc = failed to start sandbox container for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.525487 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"CreatePodSandbox\" for \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:57 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:57.653537 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105603 21156 kubelet_node_status.go:294] Setting node annotation to 
enable volume controller attach/detach", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105628 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105635 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.105641 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110206 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110233 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110243 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110254 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.110277 21156 pod_container_deletor.go:77] Container \"f0e8ad0ba290d63663c96d7e30ded575e4fedb551bffde581d588443135ff741\" not found in 
pod's containers", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110295 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\", event: &pleg.PodLifecycleEvent{ID:\"a39276703b0f3dfabe149ef43c57d6ea\", Type:\"ContainerDied\", Data:\"27daa278953e954d9f601adf852c687c5e4c7e471f98f073508733054bec8334\"}", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110347 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110358 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110367 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110373 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110438 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110456 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110462 21156 kubelet_node_status.go:361] Adding node 
label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.110466 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114721 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114745 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114754 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.114761 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.136018 21156 status_manager.go:461] Failed to get status for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-api-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.287633 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.338327 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:23:58.415182 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" can be found. Need to start a new one", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.471557 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:23:58.587438 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.587570 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592801 21156 
remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592874 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592889 21156 kuberuntime_manager.go:646] createPodSandbox for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-api-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.592944 21156 pod_workers.go:186] Error syncing pod 470e9f0cfe88912707039722e46eb507 (\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\"), 
skipping: failed to \"CreatePodSandbox\" for \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-api-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:23:58 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:58.663583 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.301482 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.494494 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:23:59 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:23:59.686294 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.305572 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.514187 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:00 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:00.698598 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.313070 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.515505 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:01 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:01.719814 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.361241 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 
172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.516911 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:02 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:02.721142 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.118101 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.377193 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.534349 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:03.588855 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.588985 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:24:03 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:03.731697 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023473 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023504 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023513 21156 kubelet_node_status.go:361] Adding node label from cloud provider: 
failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.023519 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029093 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029122 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029131 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.029139 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:04.030494 21156 status_manager.go:461] Failed to get status for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.329529 21156 kuberuntime_manager.go:403] No ready sandbox for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. Need to start a new one", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.393999 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485906 21156 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485950 21156 kuberuntime_sandbox.go:54] CreatePodSandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.485966 21156 kuberuntime_manager.go:646] createPodSandbox for pod 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:258: applying cgroup configuration for process caused \\\"No such device or address\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.486016 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"CreatePodSandbox\" for \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" with CreatePodSandboxError: \"CreatePodSandbox for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\\\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal\\\": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:258: applying cgroup configuration for process caused \\\\\\\"No such device or address\\\\\\\"\\\"\"", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.545993 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 
12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:04.547730 21156 certificate_manager.go:287] Rotating certificates", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.562264 21156 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:04 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:04.744585 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.406543 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.555678 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > 
"Jun 12 17:24:05 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:05.746085 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.423411 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.571903 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:06 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:06.747470 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.439867 21156 
reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.587813 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:07 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:07.749407 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.110481 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.110573 21156 kubelet_node_status.go:1109] Failed to set some node status fields: failed to get node address from cloud provider: Timeout after 10s", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113258 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node 
ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113281 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113291 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113300 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113313 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.113322 21156 pod_container_deletor.go:77] Container \"27daa278953e954d9f601adf852c687c5e4c7e471f98f073508733054bec8334\" not found in pod's containers", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113334 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113345 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113342 21156 kubelet.go:1923] SyncLoop (PLEG): 
\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\", event: &pleg.PodLifecycleEvent{ID:\"61801a9363130db3b3a59da18389cb26\", Type:\"ContainerDied\", Data:\"5b3db299b2ce21041a48f76ae1fe82062a5b7329e8e432238ec5c7a800d17929\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113360 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113393 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113402 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113408 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113413 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113484 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113493 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:24:08.113500 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.113506 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.116910 21156 status_manager.go:461] Failed to get status for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-etcd-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118498 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118511 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118517 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.118523 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:24:08.118533 21156 kubelet_node_status.go:82] Attempting to register node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.123268 21156 kubelet_node_status.go:106] Unable to register node \"ip-172-31-50-118.us-west-2.compute.internal\" with API server: Post https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.338472 21156 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node \"ip-172-31-50-118.us-west-2.compute.internal\" not found", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:08.413736 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\" can be found. Need to start a new one", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.417337 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.418261 21156 remote_runtime.go:115] StopPodSandbox \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.418301 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"4afcd403f088ff280172ac1d15644655be4146b65d03f354dcc02f7defd0c67b\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.419029 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.419900 21156 remote_runtime.go:115] StopPodSandbox \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.419928 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"0cf1ed3e41e68c350643b700625f9f5c48f584cb7a9a1f17bc73835f09d5dc03\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.420658 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.421547 21156 remote_runtime.go:115] StopPodSandbox \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.421574 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"24c82cb9d9bc263bf71363ce0fa80b6910cfa02d5d98c46db4d69e51d10c3f59\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.422261 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423117 21156 remote_runtime.go:115] StopPodSandbox \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423143 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"955de5c8a56d96356eb4c0ee86a30f20b8d9bd97b1f0bd0da287822ae3cd8e15\"}", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423173 21156 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to \"KillPodSandbox\" for \"a39276703b0f3dfabe149ef43c57d6ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: 
cni config uninitialized\"", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.423188 21156 pod_workers.go:186] Error syncing pod a39276703b0f3dfabe149ef43c57d6ea (\"master-etcd-ip-172-31-50-118.us-west-2.compute.internal_kube-system(a39276703b0f3dfabe149ef43c57d6ea)\"), skipping: failed to \"KillPodSandbox\" for \"a39276703b0f3dfabe149ef43c57d6ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.441238 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:08.590252 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.590401 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:24:08 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.608342 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:08 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:08.756723 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.456882 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.625918 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:09 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:09.769189 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.469734 21156 reflector.go:205] 
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.634599 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:10 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:10.770502 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.477832 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.642599 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:11 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:11.795377 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.491314 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.646587 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:12 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:12.811152 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get 
https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.134453 21156 event.go:209] Unable to write event: 'Patch https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/default/events/ip-172-31-50-118.us-west-2.compute.internal.153778adc38c11da: dial tcp 172.31.50.118:8443: getsockopt: connection refused' (may retry after sleeping)", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.518399 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:13.591684 21156 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.591832 21156 kubelet.go:2147] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.664113 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 
172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:13 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:13.822923 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.529451 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.680437 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:14 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:14.827781 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal 
atomic-openshift-node[21156]: I0612 17:24:15.123413 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123448 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123457 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.123464 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129201 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129228 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129239 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129248 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.129273 21156 
pod_container_deletor.go:77] Container \"5b3db299b2ce21041a48f76ae1fe82062a5b7329e8e432238ec5c7a800d17929\" not found in pod's containers", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129292 21156 kubelet.go:1923] SyncLoop (PLEG): \"master-api-ip-172-31-50-118.us-west-2.compute.internal_kube-system(470e9f0cfe88912707039722e46eb507)\", event: &pleg.PodLifecycleEvent{ID:\"470e9f0cfe88912707039722e46eb507\", Type:\"ContainerDied\", Data:\"43f8209d57035d2d49e8dc5509d44f2226d8bf7e5a88833bbb3fbf26e8e3a31e\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129351 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129362 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129370 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129377 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129580 21156 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129592 21156 kubelet_node_status.go:350] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m5.xlarge", > "Jun 12 17:24:15 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129599 21156 kubelet_node_status.go:361] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2b", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.129606 21156 kubelet_node_status.go:365] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134393 21156 kubelet_node_status.go:448] Recording NodeHasSufficientDisk event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134426 21156 kubelet_node_status.go:448] Recording NodeHasSufficientMemory event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134441 21156 kubelet_node_status.go:448] Recording NodeHasNoDiskPressure event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.134451 21156 kubelet_node_status.go:448] Recording NodeHasSufficientPID event message for node ip-172-31-50-118.us-west-2.compute.internal", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.148287 21156 status_manager.go:461] Failed to get status for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\": Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/namespaces/kube-system/pods/master-controllers-ip-172-31-50-118.us-west-2.compute.internal: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:15 
ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: I0612 17:24:15.434898 21156 kuberuntime_manager.go:403] No ready sandbox for pod \"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\" can be found. Need to start a new one", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.439209 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.440308 21156 remote_runtime.go:115] StopPodSandbox \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.440336 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"813d633d914ca428846d73644fc80a374420c24fdb9c500a5521250a1f362510\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.441328 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.442531 21156 remote_runtime.go:115] StopPodSandbox \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.442548 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"2c158903cbe96a73bb0ed620b5d96d5b55dc479f5bf4357bd1cc5da4cec4a529\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.443604 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\" could not be found. Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.444682 21156 remote_runtime.go:115] StopPodSandbox \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.444703 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"e3bc91fcdc460b61bb5afce442191a7bcd37506d4b8e86192ef555259cf3291a\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: W0612 17:24:15.445659 21156 docker_sandbox.go:211] Both sandbox container and checkpoint for id \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\" could not be found. 
Proceed without further sandbox information.", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446726 21156 remote_runtime.go:115] StopPodSandbox \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"_\" network: cni config uninitialized", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446748 21156 kuberuntime_manager.go:799] Failed to stop sandbox {\"docker\" \"89d97350cff60b62851af9952eace3772f20da439f838d54086a3515a9759e48\"}", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446781 21156 kuberuntime_manager.go:594] killPodWithSyncResult failed: failed to \"KillPodSandbox\" for \"61801a9363130db3b3a59da18389cb26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.446800 21156 pod_workers.go:186] Error syncing pod 61801a9363130db3b3a59da18389cb26 (\"master-controllers-ip-172-31-50-118.us-west-2.compute.internal_kube-system(61801a9363130db3b3a59da18389cb26)\"), skipping: failed to \"KillPodSandbox\" for \"61801a9363130db3b3a59da18389cb26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \\\"_\\\" network: cni config uninitialized\"", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.537099 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: 
getsockopt: connection refused", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.700107 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused", > "Jun 12 17:24:15 ip-172-31-50-118.us-west-2.compute.internal atomic-openshift-node[21156]: E0612 17:24:15.845300 21156 reflector.go:205] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://ip-172-31-50-118.us-west-2.compute.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-50-118.us-west-2.compute.internal&limit=500&resourceVersion=0: dial tcp 172.31.50.118:8443: getsockopt: connection refused" > ] >} >2018-06-12 17:24:16,672 p=5860 u=root | TASK [openshift_control_plane : Report control plane errors] ******************************************************************************************************************************************************************************** >2018-06-12 17:24:16,672 p=5860 u=root | task path: /root/openshift-ansible/roles/openshift_control_plane/tasks/main.yml:238 >2018-06-12 17:24:16,706 p=5860 u=root | fatal: [ec2-54-186-168-249.us-west-2.compute.amazonaws.com]: FAILED! 
=> { > "changed": false, > "failed": true, > "msg": "Control plane pods didn't come up" >} >2018-06-12 17:24:16,707 p=5860 u=root | NO MORE HOSTS LEFT ************************************************************************************************************************************************************************************************************************** >2018-06-12 17:24:16,708 p=5860 u=root | to retry, use: --limit @/root/openshift-ansible/playbooks/deploy_cluster.retry > >2018-06-12 17:24:16,708 p=5860 u=root | PLAY RECAP ********************************************************************************************************************************************************************************************************************************** >2018-06-12 17:24:16,709 p=5860 u=root | ec2-34-210-25-239.us-west-2.compute.amazonaws.com : ok=26 changed=1 unreachable=0 failed=0 >2018-06-12 17:24:16,709 p=5860 u=root | ec2-34-220-195-16.us-west-2.compute.amazonaws.com : ok=26 changed=1 unreachable=0 failed=0 >2018-06-12 17:24:16,709 p=5860 u=root | ec2-54-186-168-249.us-west-2.compute.amazonaws.com : ok=226 changed=29 unreachable=0 failed=1 >2018-06-12 17:24:16,709 p=5860 u=root | localhost : ok=15 changed=0 unreachable=0 failed=0 >2018-06-12 17:24:16,709 p=5860 u=root | INSTALLER STATUS **************************************************************************************************************************************************************************************************************************** >2018-06-12 17:24:16,712 p=5860 u=root | Initialization : Complete (0:00:14) >2018-06-12 17:24:16,712 p=5860 u=root | Health Check : Complete (0:00:01) >2018-06-12 17:24:16,712 p=5860 u=root | Node Preparation : Complete (0:00:00) >2018-06-12 17:24:16,712 p=5860 u=root | etcd Install : Complete (0:00:18) >2018-06-12 17:24:16,712 p=5860 u=root | Master Install : In Progress (0:17:02) >2018-06-12 17:24:16,712 p=5860 u=root | This phase can be restarted by 
running: playbooks/openshift-master/config.yml >2018-06-12 17:24:16,712 p=5860 u=root | Failure summary: > > > 1. Hosts: ec2-54-186-168-249.us-west-2.compute.amazonaws.com > Play: Configure masters > Task: Report control plane errors > Message: [0;31mControl plane pods didn't come up[0m
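The thousands of repeated lines above reduce to two recurring signatures: the kubelet cannot reach the master API on port 8443 ("connection refused"), and the CNI network config was never written ("cni config uninitialized"). A minimal triage sketch for tallying these signatures in a journal excerpt — the helper name and the sample text are illustrative, not part of openshift-ansible:

```python
import re
from collections import Counter

# Known failure signatures from this install attempt (regexes are assumptions
# based on the log excerpt above, not an official error taxonomy).
SIGNATURES = {
    "api_unreachable": re.compile(
        r"dial tcp [\d.]+:8443: getsockopt: connection refused"
    ),
    "cni_uninitialized": re.compile(r"cni config uninitialized"),
}

def tally_signatures(log_text: str) -> Counter:
    """Count how many log lines match each known failure signature."""
    counts = Counter()
    for line in log_text.splitlines():
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

# Hypothetical two-line excerpt in the shape of the journal output above.
sample = (
    "E0612 17:24:12 ... dial tcp 172.31.50.118:8443: "
    "getsockopt: connection refused\n"
    "E0612 17:24:15 ... NetworkPlugin cni failed to teardown pod: "
    "cni config uninitialized\n"
)
print(tally_signatures(sample))
```

A dominant `api_unreachable` count points at the master-api pod (the control plane pod that "didn't come up"), while `cni_uninitialized` is usually a downstream symptom: the SDN pod cannot start until the API is up, so `/etc/cni/net.d` stays empty.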