Bug 1630375

Summary: health check failed for docker_image_availability in disconnected environment
Product: OpenShift Container Platform
Component: Cluster Version Operator
Version: 3.10.0
Target Release: 3.10.z
Target Milestone: ---
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Status: CLOSED DUPLICATE
Type: Bug
Regression: ---
Reporter: Sudarshan Chaudhari <suchaudh>
Assignee: Scott Dodson <sdodson>
QA Contact: Johnny Liu <jialiu>
CC: aos-bugs, jokerman, mkim, mmccomas, msomasun, openshift-bugs-escalate, pkanthal, rhowe, sdodson, suchaudh, syangsao
Last Closed: 2018-11-27 16:21:54 UTC

Description Sudarshan Chaudhari 2018-09-18 13:23:33 UTC
Description of problem:
A customer is upgrading from OCP 3.9 to OCP 3.10 in a disconnected environment, using a Satellite server to provide the required images.

The inventory is properly configured with oreg_url.

Error from the Ansible logs:

~~~
CHECK [memory_availability : master01.ocp-cluster.internal] *****************************************************************************************************************************************************************************************************************
fatal: [master01.ocp-cluster.internal]: FAILED! => {"changed": false, "checks": {"disk_availability": {}, "docker_image_availability": {"failed": true, "failures": [["OpenShiftCheckException", "One or more required container images are not available:\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-deployer:v3.10.14,\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-docker-registry:v3.10.14,\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-haproxy-router:v3.10.14,\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-registry-console:v3.10\nChecked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>\nDefault registries searched: registry.access.redhat.com\nFailed connecting to: registry.access.redhat.com\n"]], "msg": "One or more required container images are not available:\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-deployer:v3.10.14,\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-docker-registry:v3.10.14,\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-haproxy-router:v3.10.14,\n    satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-registry-console:v3.10\nChecked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>\nDefault registries searched: registry.access.redhat.com\nFailed connecting to: registry.access.redhat.com\n"}, "docker_storage": {}, "memory_availability": {}, "package_availability": {"changed": false, "invocation": {"module_args": {"packages": ["PyYAML", "atomic-openshift", "atomic-openshift-clients", "atomic-openshift-master", "atomic-openshift-node", "atomic-openshift-sdn-ovs", "bash-completion", "bind", "ceph-common", "cockpit-bridge", "cockpit-docker", "cockpit-system", "cockpit-ws", "dnsmasq", "docker", "etcd", "firewalld", "flannel", "glusterfs-fuse", "httpd-tools", "iptables", "iptables-services", "iscsi-initiator-utils", "libselinux-python", "nfs-utils", "ntp", "openssl", "pyparted", "python-httplib2", "yum-utils"]}}}, "package_update": {"changed": false, "invocation": {"module_args": {"packages": []}}}, "package_version": {"changed": false, "invocation": {"module_args": {"package_list": [{"check_multi": true, "name": "atomic-openshift", "version": ""}, {"check_multi": true, "name": "atomic-openshift-master", "version": ""}, {"check_multi": true, "name": "atomic-openshift-node", "version": ""}], "package_mgr": "yum"}}}}, "msg": "One or more checks failed", "playbook_context": "pre-install"}
~~~
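
For reference, the failing check can be reproduced by hand with the skopeo command named in the "Checked with:" line of the error above. A minimal sketch using one of the image pull specs from the log; --tls-verify=false and --creds only apply if the Satellite registry needs them, and <user>:<pass> are placeholders:

~~~
# Inspect one of the required images directly against the Satellite registry.
skopeo inspect --tls-verify=false \
    --creds=<user>:<pass> \
    docker://satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-deployer:v3.10.14
~~~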

Actual results:
The playbook fails on this task. As a workaround, the variable openshift_disable_check: "docker_image_availability" was added to the inventory.

Expected results:
The task should not fail, since a proper oreg_url is specified.
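
For illustration, a minimal inventory excerpt combining the two variables mentioned above. This is a sketch that assumes the oreg_url format guessed in comment 1; the check is disabled only as a temporary workaround:

~~~
[OSEv3:vars]
# Pull images from the disconnected Satellite registry
# (format as suggested in comment 1; adjust to the actual path).
oreg_url=satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-${component}:${version}

# Temporary workaround only: skip the failing pre-install check.
openshift_disable_check=docker_image_availability
~~~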



Comment 1 Scott Dodson 2018-09-18 13:56:50 UTC
Do those images not exist at the path specified? If they exist at a different path, what is that path?

What's the value for oreg_url? I assume it's:

oreg_url=satellite.ocp-cluster.internal:5000/ocp-cluster-red_hat_container_catalog-ose-${component}:${version}

Comment 25 Ryan Howe 2018-11-27 16:11:27 UTC
The issue is that the registry URL is being duplicated; this commit should resolve the issue seen in this bug.

https://github.com/openshift/openshift-ansible/commit/f4ad0aad0ba06e4f265c24b7ab278725b413b531#diff-f6d4c415edd5332159aa1c77eb72b757L43

With this commit, the skopeo command will no longer prepend {registry} to the images, which avoids the duplicated registry URL. None of the openshift-ansible versions attached to this bug include the commit.
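
For illustration only, a minimal Python sketch of the behavior change described above. This is hypothetical code, not the actual openshift-ansible implementation, and the function names are made up:

~~~
# Hypothetical sketch of the fix described in this comment, not the
# actual openshift-ansible code. Before the commit, the check combined
# the default registries with every required image, duplicating the
# registry for images that already carry one (as when oreg_url points
# at the Satellite registry).

def has_registry(image):
    """A pull spec pins a registry when its first path component
    contains a dot or a port, e.g. satellite.example.com:5000/foo:v1."""
    first_component = image.split("/", 1)[0]
    return "." in first_component or ":" in first_component

def candidate_pull_specs(image, default_registries):
    if has_registry(image):
        # Post-commit behavior: inspect the image exactly as given.
        return [image]
    # Bare image names still fall back to the default registries.
    return ["%s/%s" % (reg, image) for reg in default_registries]

print(candidate_pull_specs(
    "satellite.ocp-cluster.internal:5000/"
    "ocp-cluster-red_hat_container_catalog-ose-deployer:v3.10.14",
    ["registry.access.redhat.com"]))
# Prints the Satellite pull spec unmodified, with no registry prepended.
~~~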

This commit is available in the latest openshift-ansible rpms. 

Confirmed it's present in these versions, which are the latest at the time of writing:
openshift-ansible-3.11.43-1
openshift-ansible-3.10.73-1

Similar bug: 
https://bugzilla.redhat.com/show_bug.cgi?id=1613100

Comment 26 Scott Dodson 2018-11-27 16:21:54 UTC
Per comment 25, this is fixed in the latest 3.10 and 3.11 builds. Anyone using releases prior to those should simply disable the check as a workaround.

*** This bug has been marked as a duplicate of bug 1613100 ***