Bug 1570479 - docker_image_availability should be updated due to image name change.
Summary: docker_image_availability should be updated due to image name change.
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 3.10.0
Assignee: Luke Meyer
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-04-23 03:32 UTC by Johnny Liu
Modified: 2018-07-23 13:09 UTC
CC List: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-05-30 13:44:14 UTC
Target Upstream Version:
Embargoed:


Attachments
inventory file (5.07 KB, text/plain)
2018-04-23 03:32 UTC, Johnny Liu

Description Johnny Liu 2018-04-23 03:32:11 UTC
Created attachment 1425495 [details]
inventory file

Description of problem:
Since 3.10, the master components run as static pods and openvswitch runs as a daemonset pod.
The "ose" image is renamed to "ose-control-plane", and the "node" image is renamed to "ose-node".

docker_image_availability should be updated accordingly.
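
For reference, a minimal sketch of the rename, using the enterprise names quoted above and the origin names given in comment 2 below; anything not quoted in this report is an assumption, and this is not the check's actual code:

    # Sketch only: old-vs-new image short names per this report (enterprise)
    # and comment 2 below (origin).
    IMAGE_RENAMES = {
        "openshift-enterprise": {
            "ose": "ose-control-plane",   # master now runs as static pods
            "node": "ose-node",
            "openvswitch": None,          # folded into the node daemonset image
        },
        "origin": {
            "origin": "origin-control-plane",
            "node": "origin-node",
            "openvswitch": None,
        },
    }

    def required_images(deployment_type, tag):
        """Image references a 3.10 availability check would need to probe."""
        ns = "openshift3" if deployment_type == "openshift-enterprise" else "openshift"
        new_names = IMAGE_RENAMES[deployment_type].values()
        return sorted("%s/%s:%s" % (ns, n, tag) for n in new_names if n)

    print(required_images("openshift-enterprise", "v3.10.0-0.27.0"))
    # ['openshift3/ose-control-plane:v3.10.0-0.27.0', 'openshift3/ose-node:v3.10.0-0.27.0']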

Version-Release number of the following components:
openshift-ansible-3.10.0-0.27.0.git.0.abed3b7.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Trigger an installation with openshift_image_tag and openshift_release defined (see the inventory fragment below).
openshift_image_tag=v3.10.0-0.27.0
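
A minimal [OSEv3:vars] fragment for the variables above (the deployment type is an assumption inferred from the openshift3/ose image names in this report; the openshift_release value matches the one used during verification in comment 7; the full inventory is attached):

    [OSEv3:vars]
    openshift_deployment_type=openshift-enterprise
    openshift_image_tag=v3.10.0-0.27.0
    openshift_release=v3.10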

Actual results:
Failure summary:


  1. Hosts:    qe-smoke310-master-registry-router-1.0422-jp8.qe.rhcloud.com
     Play:     OpenShift Health Checks
     Task:     Run health checks (install) - EL
     Message:  One or more checks failed
     Details:  check "docker_image_availability":
               One or more required container images are not available:
                   openshift3/node:v3.10.0-0.27.0,
                   openshift3/openvswitch:v3.10.0-0.27.0,
                   openshift3/ose:v3.10.0-0.27.0
               Checked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>
               Default registries searched: registry.reg-aws.openshift.com:443, registry.access.redhat.com
               Blocked registries: registry.hacker.com
               

The execution of "/home/slave2/workspace/Launch Environment Flexy/private-openshift-ansible/playbooks/deploy_cluster.yml" includes checks designed to fail early if the requirements of the playbook are not met. One or more of these checks failed. To disregard these results, explicitly disable checks by setting an Ansible variable:
   openshift_disable_check=docker_image_availability
Failing check names are shown in the failure details above. Some checks may be configurable by variables if your requirements are different from the defaults; consult check documentation.
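
To reproduce the probe from the "Checked with:" line by hand, a minimal sketch (assumes skopeo is installed; the --tls-verify and --creds handling mirrors the bracketed options in the log above):

    # Sketch of the probe the check reports:
    #   skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>
    import subprocess

    def image_available(registry, image, creds=None, tls_verify=True):
        cmd = ["skopeo", "inspect"]
        if not tls_verify:
            cmd.append("--tls-verify=false")
        if creds:
            cmd.append("--creds=%s" % creds)  # "<user>:<pass>"
        cmd.append("docker://%s/%s" % (registry, image))
        # skopeo exits non-zero when the image cannot be inspected
        return subprocess.call(cmd, stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0

    # The registries searched per the failure details above:
    for reg in ("registry.reg-aws.openshift.com:443", "registry.access.redhat.com"):
        print(reg, image_available(reg, "openshift3/ose-node:v3.10.0-0.27.0"))

If the check must be skipped entirely, the output above already names the knob: openshift_disable_check=docker_image_availability.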

Expected results:
The docker_image_availability check should pass.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag

Comment 1 Scott Dodson 2018-04-26 13:54:19 UTC
We need to only be checking 'registry.access.redhat.com/openshift3/ose-control-plane:v3.10.0-0.27.0' and 'registry.access.redhat.com/openshift3/ose-node:v3.10.0-0.27.0'

It's also been observed that, due to the changes in installation flow, the health checks may not run as early in the process as they used to.

Comment 2 Luke Meyer 2018-05-02 15:05:14 UTC
Obviously I have not been keeping up. I assume the changes are similar for origin -- openshift/origin-control-plane and openshift/origin-node. Is openvswitch no longer required on nodes or just renamed?

Comment 5 openshift-github-bot 2018-05-03 10:26:48 UTC
Commit pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/3b1b4c11d14df9493660a1f44d510dbf1ed95f40
docker_image_availability: bz 1570479

fixes bug 1570479
https://bugzilla.redhat.com/show_bug.cgi?id=1570479

With OCP 3.10, openvswitch is no longer separate from the node, and the
node container name has switched to {origin|ose}-node.
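
Roughly, per the commit message, the required core image set for 3.10 drops openvswitch and uses the renamed images. An illustrative sketch (not the code from the linked commit; the function and parameter names here are hypothetical):

    # Illustrative only; see the linked commit for the actual change.
    def core_image_names(deployment_type, host_is_master):
        prefix = "ose" if deployment_type == "openshift-enterprise" else "origin"
        names = ["%s-node" % prefix]  # openvswitch is now part of the node image
        if host_is_master:
            names.append("%s-control-plane" % prefix)  # master static pods
        return names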

Comment 7 Johnny Liu 2018-05-17 03:46:29 UTC
Verified this bug with openshift-ansible-3.10.0-0.47.0.git.0.c018c8f.el7.noarch, and it passed.

Setting the following options:
openshift_image_tag=v3.10.0-0.47.0
openshift_release=v3.10


The health checks passed:
<--snip-->
TASK [Run health checks (install) - EL] ****************************************
Wednesday 16 May 2018  23:27:25 -0400 (0:00:00.722)       0:00:16.537 ********* 

CHECK [docker_storage : host-8-246-43.host.centralci.eng.rdu2.redhat.com] ******

CHECK [docker_storage : host-8-243-83.host.centralci.eng.rdu2.redhat.com] ******

CHECK [disk_availability : host-8-243-83.host.centralci.eng.rdu2.redhat.com] ***

CHECK [package_availability : host-8-243-83.host.centralci.eng.rdu2.redhat.com] ***

CHECK [package_version : host-8-243-83.host.centralci.eng.rdu2.redhat.com] *****

CHECK [docker_image_availability : host-8-243-83.host.centralci.eng.rdu2.redhat.com] ***

CHECK [disk_availability : host-8-246-43.host.centralci.eng.rdu2.redhat.com] ***

CHECK [package_availability : host-8-246-43.host.centralci.eng.rdu2.redhat.com] ***

CHECK [package_version : host-8-246-43.host.centralci.eng.rdu2.redhat.com] *****

CHECK [docker_image_availability : host-8-246-43.host.centralci.eng.rdu2.redhat.com] ***

CHECK [memory_availability : host-8-243-83.host.centralci.eng.rdu2.redhat.com] ***
changed: [host-8-243-83.host.centralci.eng.rdu2.redhat.com] => {"changed": true, "checks": {"disk_availability": {}, "docker_image_availability": {"changed": true}, "docker_storage": {}, "memory_availability": {}, "package_availability": {"skipped": true, "skipped_reason": "Not active for this host"}, "package_version": {"skipped": true, "skipped_reason": "Not active for this host"}}, "failed": false, "playbook_context": "install"}

CHECK [memory_availability : host-8-246-43.host.centralci.eng.rdu2.redhat.com] ***
changed: [host-8-246-43.host.centralci.eng.rdu2.redhat.com] => {"changed": true, "checks": {"disk_availability": {}, "docker_image_availability": {"changed": true}, "docker_storage": {}, "memory_availability": {}, "package_availability": {"skipped": true, "skipped_reason": "Not active for this host"}, "package_version": {"skipped": true, "skipped_reason": "Not active for this host"}}, "failed": false, "playbook_context": "install"}
<--snip-->


Also set openshift_image_tag to a non-existent tag; the run failed, which is the expected behavior. This run was only to verify the image names, and the reported image names are correct:
  1. Hosts:    host-8-241-103.host.centralci.eng.rdu2.redhat.com
     Play:     OpenShift Health Checks
     Task:     Run health checks (install) - EL
     Message:  One or more checks failed
     Details:  check "docker_image_availability":
               One or more required container images are not available:
                   openshift3/ose-node:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-pod:v3.10.0-0.98.0
               Checked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>
               Default registries searched: registry.reg-aws.openshift.com:443, registry.access.redhat.com
               Blocked registries: registry.hacker.com
               

  2. Hosts:    host-8-248-250.host.centralci.eng.rdu2.redhat.com
     Play:     OpenShift Health Checks
     Task:     Run health checks (install) - EL
     Message:  One or more checks failed
     Details:  check "docker_image_availability":
               One or more required container images are not available:
                   openshift3/ose-control-plane:v3.10.0-0.98.0,
                   openshift3/ose-node:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-deployer:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-docker-registry:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-haproxy-router:v3.10.0-0.98.0,
                   registry.reg-aws.openshift.com:443/openshift3/ose-pod:v3.10.0-0.98.0
               Checked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>
               Default registries searched: registry.reg-aws.openshift.com:443, registry.access.redhat.com
               Blocked registries: registry.hacker.com

Comment 8 Luke Meyer 2018-05-30 13:44:14 UTC
Bug was never released, no need to track further.

