Description of problem:
When installing a containerized HA-master 3.9 cluster, the prerequisites.yml playbook didn't configure the necessary docker registry params on the lb host, so the installation failed because it couldn't pull the haproxy image.

Version-Release number of the following components:
$ git describe
openshift-ansible-3.9.0-0.10.0-54-g7d2dee3
ansible-2.4.2.0-1.el7ae.noarch

How reproducible:
Always

Steps to Reproduce:
1. Prepare the ansible inventory file, then run the following playbooks:
# ansible-playbook -v openshift-ansible/playbooks/prerequisites.yml
# ansible-playbook -v openshift-ansible/playbooks/deploy_cluster.yml

Actual results:
TASK [openshift_loadbalancer : Install haproxy] ********************************
Friday 22 December 2017  03:47:45 +0000 (0:00:00.021)       0:04:26.157 *******
skipping: [ec2-54-236-77-189.compute-1.amazonaws.com] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [openshift_loadbalancer : Pull haproxy image] *****************************
Friday 22 December 2017  03:47:45 +0000 (0:00:00.026)       0:04:26.183 *******
fatal: [ec2-54-236-77-189.compute-1.amazonaws.com]: FAILED! => {"changed": true, "cmd": ["docker", "pull", "openshift3/ose-haproxy-router:v3.9.0-0.9.0"], "delta": "0:00:01.350727", "end": "2017-12-22 03:47:47.275027", "msg": "non-zero return code", "rc": 1, "start": "2017-12-22 03:47:45.924300", "stderr": "Error: image openshift3/ose-haproxy-router:v3.9.0-0.9.0 not found", "stderr_lines": ["Error: image openshift3/ose-haproxy-router:v3.9.0-0.9.0 not found"], "stdout": "Trying to pull repository registry.access.redhat.com/openshift3/ose-haproxy-router ... \nTrying to pull repository docker.io/openshift3/ose-haproxy-router ... \nPulling repository docker.io/openshift3/ose-haproxy-router", "stdout_lines": ["Trying to pull repository registry.access.redhat.com/openshift3/ose-haproxy-router ... ", "Trying to pull repository docker.io/openshift3/ose-haproxy-router ... ", "Pulling repository docker.io/openshift3/ose-haproxy-router"]}
	to retry, use: --limit @/home/slave5/workspace/Launch-Environment-Flexy/private-openshift-ansible/playbooks/deploy_cluster.retry

On the lb host:
-bash-4.2# cat /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# Do not add registries in this file anymore. Use /etc/containers/registries.conf
# from the atomic-registries package.
#
# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest
-bash-4.2#

Expected results:
The installer should configure the registry params in /etc/sysconfig/docker first.

Additional info:
Just need to ensure that prerequisites run on lb hosts when they're containerized.
Gaoyun, Can you verify that you've set containerized=True on either the lb host or the lb group?
(In reply to Scott Dodson from comment #2)
> Gaoyun,
>
> Can you verify that you've set containerized=True on either the lb host or
> the lb group?

Scott, I'm always using a global OSEv3:vars setting containerized=true for all OSEv3 hosts when setting up a containerized cluster on RHEL.

Tried such an installation again with openshift-ansible-3.9.0-0.22.0.git.0.0e9d896.el7.noarch.rpm today, and found this bug was already fixed by recent changes. TASK [container_runtime : Set registry params] was executed on the lb host during playbooks/prerequisites.yml, and the haproxy image could be pulled successfully on the lb host.

So would you mind moving this bug to ON_QA so that QE can verify it?
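For reference, the relevant portion of such an inventory looks roughly like the following. This is a sketch: the group layout matches the standard openshift-ansible inventory conventions, and the hostname is a placeholder.

```
[OSEv3:children]
masters
nodes
lb

[OSEv3:vars]
# Global variable applied to all OSEv3 hosts, including the lb group
containerized=true

[lb]
lb.example.com   # placeholder load-balancer hostname
```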
Fixed in: https://github.com/openshift/openshift-ansible/pull/6546
Moving this bug to VERIFIED according to Comment 3. It's fixed in openshift-ansible-3.9.0-0.22.0.git.0.0e9d896.el7.noarch.rpm.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0489