Red Hat Bugzilla – Bug 1499358
Installer does not support docker HTTP_PROXY
Last modified: 2018-04-05 05:30:13 EDT
Description of problem:
When installing behind a firewall, using Satellite as a source for rpms and a proxy for docker to pull images, the installation always fails on the required images even though the following commands run successfully at the command line:

docker pull openshift3/ose-deployer:v3.6.173.0.21
docker pull openshift3/ose-docker-registry:v3.6.173.0.21
docker pull openshift3/ose-haproxy-router:v3.6.173.0.21
docker pull openshift3/ose-pod:v3.6.173.0.21
docker pull registry.access.redhat.com/openshift3/registry-console

This was using a custom installation.

Version-Release number of selected component (if applicable): 3.6.x

How reproducible: always

Steps to Reproduce:
1. Configure a custom installation host file.
2. Run the byo playbook.

Actual results: See below.

Expected results: Installation proceeds as expected.
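For context on how the docker pulls above succeed at the command line behind the firewall: on RHEL hosts the docker daemon is commonly given its proxy via /etc/sysconfig/docker. A typical fragment (the proxy host/port and NO_PROXY entries here are placeholders, not taken from this environment) looks like:

```
# /etc/sysconfig/docker -- proxy settings for the docker daemon
# (proxy.example.com:3128 is a placeholder; substitute your proxy)
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=.example.com,satellite.example.com
```

The bug is that the installer's image-availability check does not pick up these settings, even though docker itself does.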
CHECK [memory_availability : ocpmaster.gsic-rh-poc.net] ******************************************************************************************************
fatal: [ocpmaster.gsic-rh-poc.net]: FAILED! => {
    "changed": false,
    "checks": {
        "disk_availability": {},
        "docker_image_availability": { "changed": false, "failed": true, "msg": "One or more required Docker images are not available:\n openshift3/ose-deployer:v3.6.173.0.21,\n openshift3/ose-docker-registry:v3.6.173.0.21,\n openshift3/ose-haproxy-router:v3.6.173.0.21,\n openshift3/ose-pod:v3.6.173.0.21,\n registry.access.redhat.com/openshift3/registry-console\nConfigured registries: registry.access.redhat.com" },
        "docker_storage": { "changed": false, "data_pct_used": 8.452515187776143, "data_threshold": 90.0, "data_total": "6.09 GB", "data_total_bytes": 6539087708.16, "data_used": "587.7 MB", "data_used_bytes": 616248115.2, "metadata_pct_used": 0.014288849325815464, "metadata_threshold": 90.0, "metadata_total": "67.11 MB", "metadata_total_bytes": 70369935.36, "metadata_used": "114.7 kB", "metadata_used_bytes": 117452.8, "msg": "Thinpool usage is within thresholds.", "vg_free": "0.70g", "vg_free_bytes": 751619276.8 },
        "memory_availability": {},
        "package_availability": { "changed": false, "invocation": { "module_args": { "packages": [ "PyYAML", "atomic-openshift", "atomic-openshift-clients", "atomic-openshift-master", "atomic-openshift-node", "atomic-openshift-sdn-ovs", "bash-completion", "bind", "ceph-common", "cockpit-bridge", "cockpit-docker", "cockpit-system", "cockpit-ws", "dnsmasq", "docker", "etcd", "firewalld", "flannel", "glusterfs-fuse", "httpd-tools", "iptables", "iptables-services", "iscsi-initiator-utils", "libselinux-python", "nfs-utils", "ntp", "openssl", "pyparted", "python-httplib2", "yum-utils" ] } } },
        "package_version": { "changed": false, "invocation": { "module_args": { "package_list": [ { "check_multi": false, "name": "openvswitch", "version": [ "2.6", "2.7" ] }, { "check_multi": false, "name": "docker", "version": "1.12" }, { "check_multi": true, "name": "atomic-openshift", "version": "" }, { "check_multi": true, "name": "atomic-openshift-master", "version": "" }, { "check_multi": true, "name": "atomic-openshift-node", "version": "" } ] } } }
    },
    "failed": true,
    "playbook_context": "install"
}
MSG: One or more checks failed

CHECK [memory_availability : ocpnode1.gsic-rh-poc.net] *******************************************************************************************************
fatal: [ocpnode1.gsic-rh-poc.net]: FAILED! => {
    "changed": false,
    "checks": {
        "disk_availability": {},
        "docker_image_availability": { "changed": false, "failed": true, "msg": "One or more required Docker images are not available:\n openshift3/ose-deployer:v3.6.173.0.21,\n openshift3/ose-docker-registry:v3.6.173.0.21,\n openshift3/ose-haproxy-router:v3.6.173.0.21,\n openshift3/ose-pod:v3.6.173.0.21,\n registry.access.redhat.com/openshift3/registry-console\nConfigured registries: registry.access.redhat.com" },
        "docker_storage": { "changed": false, "data_pct_used": 1.1022314404604174, "data_threshold": 90.0, "data_total": "9.982 GB", "data_total_bytes": 10718090887.168, "data_used": "272.6 MB", "data_used_bytes": 285841817.6, "metadata_pct_used": 0.0004935098574206447, "metadata_threshold": 90.0, "metadata_total": "79.69 MB", "metadata_total_bytes": 83561021.44, "metadata_used": "73.73 kB", "metadata_used_bytes": 75499.52, "msg": "Thinpool usage is within thresholds.", "vg_free": "14.17g", "vg_free_bytes": 15214921646.08 },
        "memory_availability": {},
        "package_availability": { "changed": false, "invocation": { "module_args": { "packages": [ "PyYAML", "atomic-openshift", "atomic-openshift-node", "atomic-openshift-sdn-ovs", "bind", "ceph-common", "dnsmasq", "docker", "firewalld", "flannel", "glusterfs-fuse", "iptables", "iptables-services", "iscsi-initiator-utils", "libselinux-python", "nfs-utils", "ntp", "openssl", "pyparted", "python-httplib2", "yum-utils" ] } } },
        "package_version": { "changed": false, "invocation": { "module_args": { "package_list": [ { "check_multi": false, "name": "openvswitch", "version": [ "2.6", "2.7" ] }, { "check_multi": false, "name": "docker", "version": "1.12" }, { "check_multi": true, "name": "atomic-openshift", "version": "" }, { "check_multi": true, "name": "atomic-openshift-master", "version": "" }, { "check_multi": true, "name": "atomic-openshift-node", "version": "" } ] } } }
    },
    "failed": true,
    "playbook_context": "install"
}
MSG: One or more checks failed

CHECK [memory_availability : ocpnode2.gsic-rh-poc.net] *******************************************************************************************************
fatal: [ocpnode2.gsic-rh-poc.net]: FAILED! => {
    "changed": false,
    "checks": {
        "disk_availability": {},
        "docker_image_availability": { "changed": false, "failed": true, "msg": "One or more required Docker images are not available:\n openshift3/ose-deployer:v3.6.173.0.21,\n openshift3/ose-docker-registry:v3.6.173.0.21,\n openshift3/ose-haproxy-router:v3.6.173.0.21,\n openshift3/ose-pod:v3.6.173.0.21,\n registry.access.redhat.com/openshift3/registry-console\nConfigured registries: registry.access.redhat.com" },
        "docker_storage": { "changed": false, "data_pct_used": 1.1022314404604174, "data_threshold": 90.0, "data_total": "9.982 GB", "data_total_bytes": 10718090887.168, "data_used": "272.6 MB", "data_used_bytes": 285841817.6, "metadata_pct_used": 0.0004935098574206447, "metadata_threshold": 90.0, "metadata_total": "79.69 MB", "metadata_total_bytes": 83561021.44, "metadata_used": "73.73 kB", "metadata_used_bytes": 75499.52, "msg": "Thinpool usage is within thresholds.", "vg_free": "14.17g", "vg_free_bytes": 15214921646.08 },
        "memory_availability": {},
        "package_availability": { "changed": false, "invocation": { "module_args": { "packages": [ "PyYAML", "atomic-openshift", "atomic-openshift-node", "atomic-openshift-sdn-ovs", "bind", "ceph-common", "dnsmasq", "docker", "firewalld", "flannel", "glusterfs-fuse", "iptables", "iptables-services", "iscsi-initiator-utils", "libselinux-python", "nfs-utils", "ntp", "openssl", "pyparted", "python-httplib2", "yum-utils" ] } } },
        "package_version": { "changed": false, "invocation": { "module_args": { "package_list": [ { "check_multi": false, "name": "openvswitch", "version": [ "2.6", "2.7" ] }, { "check_multi": false, "name": "docker", "version": "1.12" }, { "check_multi": true, "name": "atomic-openshift", "version": "" }, { "check_multi": true, "name": "atomic-openshift-master", "version": "" }, { "check_multi": true, "name": "atomic-openshift-node", "version": "" } ] } } }
    },
    "failed": true,
    "playbook_context": "install"
}
MSG: One or more checks failed

	to retry, use: --limit @/usr/share/ansible/openshift-ansible/playbooks/byo/config.retry

PLAY RECAP ***************************************************************************************************************************************************
localhost                  : ok=8    changed=0    unreachable=0    failed=0
ocpmaster.gsic-rh-poc.net  : ok=82   changed=2    unreachable=0    failed=1
ocpnode1.gsic-rh-poc.net   : ok=80   changed=2    unreachable=0    failed=1
ocpnode2.gsic-rh-poc.net   : ok=80   changed=2    unreachable=0    failed=1

Failure summary:

1. Host:     ocpmaster.gsic-rh-poc.net
   Play:     Verify Requirements
   Task:     openshift_health_check
   Message:  One or more checks failed
   Details:  check "docker_image_availability":
             One or more required Docker images are not available:
             openshift3/ose-deployer:v3.6.173.0.21,
             openshift3/ose-docker-registry:v3.6.173.0.21,
             openshift3/ose-haproxy-router:v3.6.173.0.21,
             openshift3/ose-pod:v3.6.173.0.21,
             registry.access.redhat.com/openshift3/registry-console
             Configured registries: registry.access.redhat.com

2. Host:     ocpnode1.gsic-rh-poc.net
   Play:     Verify Requirements
   Task:     openshift_health_check
   Message:  One or more checks failed
   Details:  check "docker_image_availability":
             One or more required Docker images are not available:
             openshift3/ose-deployer:v3.6.173.0.21,
             openshift3/ose-docker-registry:v3.6.173.0.21,
             openshift3/ose-haproxy-router:v3.6.173.0.21,
             openshift3/ose-pod:v3.6.173.0.21,
             registry.access.redhat.com/openshift3/registry-console
             Configured registries: registry.access.redhat.com

3. Host:     ocpnode2.gsic-rh-poc.net
   Play:     Verify Requirements
   Task:     openshift_health_check
   Message:  One or more checks failed
   Details:  check "docker_image_availability":
             One or more required Docker images are not available:
             openshift3/ose-deployer:v3.6.173.0.21,
             openshift3/ose-docker-registry:v3.6.173.0.21,
             openshift3/ose-haproxy-router:v3.6.173.0.21,
             openshift3/ose-pod:v3.6.173.0.21,
             registry.access.redhat.com/openshift3/registry-console
             Configured registries: registry.access.redhat.com

The execution of "/usr/share/ansible/openshift-ansible/playbooks/byo/config.yml" includes checks designed to fail early if the requirements of the playbook are not met. One or more of these checks failed. To disregard these results, you may choose to disable failing checks by setting an Ansible variable:

   openshift_disable_check=docker_image_availability

Failing check names are shown in the failure details above. Some checks may be configurable by variables if your requirements are different from the defaults; consult check documentation. Variables can be set in the inventory or passed on the command line using the -e flag to ansible-playbook.
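As the installer output above notes, the failing check can be bypassed (at your own risk, since the underlying pull problem remains) by setting the named variable. A typical invocation would look something like the following; the inventory path is an assumption and should be adjusted to the environment:

```
# Skip only the docker_image_availability check; all other checks still run.
ansible-playbook -i /etc/ansible/hosts \
    /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml \
    -e openshift_disable_check=docker_image_availability
```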
The workaround of downloading the images locally prior to the run was not successful. Same result.
This check runs in two parts. First, it checks on each host whether docker already has the images in its local index. There is a fix coming for this part with https://github.com/openshift/openshift-ansible/pull/5393/commits/a9c493389f2007048c25744da0b6e4314afa3f39 (not yet released, though) which may be relevant here.

Second, for any images not locally present, it uses skopeo on the target host to determine whether it will be able to pull them when needed (docker provides no way to check this other than actually pulling the image, which we don't want to wait for here). The skopeo part of the check is not yet aware of proxies, which is a known limitation to be fixed. Improvements for handling registries that can't actually be reached, for disconnected scenarios, are also pending release.

However, I'm curious why the first part failed when you pre-pulled the images. I would have expected at least the registry.access.redhat.com/openshift3/registry-console image to be found. The other, unqualified images are likely pulled as e.g. registry.access.redhat.com/openshift3/ose-pod:v3.6.173.0.21 and failing a strict match against openshift3/ose-pod:v3.6.173.0.21 (which the above fix addresses). The -vvv output starting from when the checks began running would show module invocations indicating what the check is looking for. If you have that output on hand, it would help (though of course I can try to reproduce this scenario too).
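The strict-match problem described above can be seen with a toy shell sketch (an illustration, not openshift-ansible's actual code): docker's local index stores the registry-qualified name, so an exact comparison against the unqualified name from the install config fails, while prefixing the configured registry matches:

```shell
# Hypothetical illustration of the name-matching issue, not installer code.
required="openshift3/ose-pod:v3.6.173.0.21"                                # name the check looks for
local_image="registry.access.redhat.com/openshift3/ose-pod:v3.6.173.0.21"  # what docker stored after the pull
registry="registry.access.redhat.com"

# Strict match (old behavior): reports missing even though the image is present.
[ "$required" = "$local_image" ] && echo "strict: found" || echo "strict: missing"

# Registry-qualified match (what the linked fix adds): succeeds.
[ "$registry/$required" = "$local_image" ] && echo "qualified: found" || echo "qualified: missing"
```

This is why pre-pulling the unqualified images did not satisfy the first part of the check.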
Hi Luke, I am sorry, I don't have the -vvv output. This was in a POC environment that is now up and running and I cannot create a reproducer at this time. Looking forward to the update. Cheers, Paul
Hi team, a customer faced the same problem during the installation. They were running the ansible playbook installer and got an error during the docker_image_availability check. We checked the configuration and set the proxy settings on both Docker and OpenShift; however, none of the proxy variables were used by skopeo, as described by Luke in Comment #3. As a workaround, the customer had to download the needed images manually and was then able to proceed. I am linking the SFDC case to this BZ.
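For reference, "proxy settings on OpenShift" here means the cluster-wide proxy variables that openshift-ansible accepts in the inventory. A fragment along these lines (values are placeholders, not from the customer environment) is what was set, though at the time of this report the docker_image_availability check's skopeo call did not honor these:

```
# [OSEv3:vars] section of the inventory (placeholder values)
openshift_http_proxy=http://proxy.example.com:3128
openshift_https_proxy=http://proxy.example.com:3128
openshift_no_proxy=.example.com
```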
Created attachment 1339678 [details] /etc/ansible/hosts
Created attachment 1339679 [details] output error from installation
*** Bug 1528401 has been marked as a duplicate of this bug. ***
https://github.com/openshift/openshift-ansible/pull/6716 is intended to enable skopeo to use proxy settings for this check.
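One way to confirm proxy-awareness by hand is to export the proxy variables into the environment of the skopeo call, which is essentially what a fix like this arranges for the check's subprocess. Sketch below; the proxy address is a placeholder, and the assumption that skopeo honors these variables comes from Go's standard proxy-from-environment behavior:

```
# Placeholder proxy; skopeo, like most Go tools, reads these variables.
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=.example.com

# With the variables exported, the check's probe can be reproduced manually:
timeout 10 skopeo inspect --tls-verify=false \
    docker://registry.access.redhat.com/openshift3/ose-pod:v3.6.173.0.21
```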
That's merged for 3.9 and https://github.com/openshift/openshift-ansible/pull/6838 merged for 3.7 (do we need a separate bug to track that into an errata? Or should this bug just get target release 3.7.z?)
*** Bug 1491570 has been marked as a duplicate of this bug. ***
There is an error in the attached kb article:

"
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/registry-console
timeout 10 skopeo inspect --tls-verify=false docker://openshift3/ose-deployer:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/ose-pod:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/registry-console
"

The second line is missing the registry.access.redhat.com URL, and registry-console is checked twice.

Greetings,
Klaas
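Applying the two corrections described above (qualify the second line, list registry-console once), the fixed command list would presumably read:

```
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/ose-deployer:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/ose-docker-registry:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/ose-haproxy-router:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/ose-pod:v3.6.173.0.5
timeout 10 skopeo inspect --tls-verify=false docker://registry.access.redhat.com/openshift3/registry-console
```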
Thanks, updated.
Verified in openshift-ansible-3.7.31-1.git.0.08008d0.el7.noarch.rpm
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0636