Description of problem:
- When we run the installation with the EFS provisioner enabled, the ansible playbook completes, but the efs-provisioner image cannot be pulled.

Version-Release number of the following components:
# rpm -qa | grep -ie openshift -ie ansible
openshift-ansible-docs-3.7.23-1.git.0.bc406aa.el7.noarch
openshift-ansible-callback-plugins-3.7.23-1.git.0.bc406aa.el7.noarch
openshift-ansible-filter-plugins-3.7.23-1.git.0.bc406aa.el7.noarch
openshift-ansible-lookup-plugins-3.7.23-1.git.0.bc406aa.el7.noarch
openshift-ansible-playbooks-3.7.23-1.git.0.bc406aa.el7.noarch
openshift-ansible-3.7.23-1.git.0.bc406aa.el7.noarch
atomic-openshift-excluder-3.7.23-1.git.5.83efd71.el7.noarch
ansible-2.4.2.0-2.el7.noarch
openshift-ansible-roles-3.7.23-1.git.0.bc406aa.el7.noarch
atomic-openshift-utils-3.7.23-1.git.0.bc406aa.el7.noarch
atomic-openshift-docker-excluder-3.7.23-1.git.5.83efd71.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Run the installation with the following inventory settings:

#EFS Provisioner
openshift_provisioners_efs=True
openshift_provisioners_efs_fsid=xxx
openshift_provisioners_efs_region=xxx
openshift_provisioners_efs_aws_access_key_id=xxx
openshift_provisioners_efs_aws_secret_access_key=xxx

Actual results:
- The ansible playbook completes, but the efs-provisioner pod fails with the following events:

1h 1h 1 provisioners-efs-1-qzn82 Pod Normal SuccessfulMountVolume kubelet, xx.xx.xx.xx MountVolume.SetUp succeeded for volume "provisioners-efs"
1h 1h 2 provisioners-efs-1-qzn82 Pod spec.containers{efs-provisioner} Normal Pulling kubelet, xx.xx.xx.xx pulling image "registry.access.redhat.com/openshift3/efs-provisioner:v3.7.23"
1h 1h 2 provisioners-efs-1-qzn82 Pod spec.containers{efs-provisioner} Warning Failed kubelet, xx.xx.xx.xx Failed to pull image "registry.access.redhat.com/openshift3/efs-provisioner:v3.7.23": rpc error: code = 2 desc = error parsing HTTP 404 response body: invalid character 'F' looking for beginning of value: "File not found.\""

Expected results:
- The image openshift3/efs-provisioner:v3.7.23 is pulled successfully.

Additional info:
- A workaround is setting "openshift_provisioners_image_version=latest".
- As you can see at https://access.redhat.com/containers/?tab=tags#/registry.access.redhat.com/openshift3/efs-provisioner, there are no v3.7 or v3.7.23 tags.
- Proposed patch: https://github.com/openshift/openshift-ansible/pull/7932
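The missing tag can also be confirmed against the registry's tag list. A minimal sketch, assuming registry.access.redhat.com exposes the standard Docker Registry v2 `tags/list` endpoint; the JSON here is a stand-in response for illustration, not actual registry output:

```shell
# Stand-in for:
#   curl -s https://registry.access.redhat.com/v2/openshift3/efs-provisioner/tags/list
# (hypothetical response body; the real tag set may differ)
response='{"name":"openshift3/efs-provisioner","tags":["latest","v0.0.1"]}'

# Extract the tag list and check whether the tag the installer wants exists
echo "${response}" \
  | python3 -c 'import json,sys; print("\n".join(json.load(sys.stdin)["tags"]))' \
  | grep -qx 'v3.7.23' || echo "tag v3.7.23 not found"
# prints: tag v3.7.23 not found
```

If the tag is absent from the list, any pull of that reference will fail exactly as in the kubelet events above.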
Thanks for the PR!
Verified this bug with the latest release-3.7 branch (openshift-ansible-3.7.44-1-15-gd91bc1c), using the following openshift-ansible options:

openshift_provisioners_efs=True
openshift_provisioners_efs_fsid=xx
openshift_provisioners_efs_region=xx
openshift_provisioners_efs_aws_access_key_id=xx
openshift_provisioners_efs_aws_secret_access_key=xx
openshift_provisioners_efs_path=/

Deploy the efs-provisioner:
ansible-playbook -i host/host openshift-ansible/playbooks/byo/openshift-cluster/openshift-provisioners.yml

After the playbook finishes, check the pod status:
[root@ip-172-18-5-119 ~]# oc get pod -n openshift-infra
NAME                       READY     STATUS    RESTARTS   AGE
provisioners-efs-1-q6wp5   1/1       Running   0          4m
[root@ip-172-18-5-119 ~]# oc describe pod provisioners-efs-1-q6wp5 -n openshift-infra | grep Image:
    Image:          registry.access.redhat.com/openshift3/efs-provisioner:latest

Will move this bug to VERIFIED once the PR is merged into the 3.7 openshift-ansible rpm package.
Verified this bug with openshift-ansible-3.7.46-1.git.0.37f607e.el7.noarch; the efs-provisioner deployment now uses the "latest" tag instead of openshift_image_tag.
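The behavior change can be illustrated by how the image reference is assembled. A minimal sketch (the variable names here are illustrative, not the role's actual internals): before the fix the tag was derived from the OpenShift version with a "v" prefix, producing a nonexistent tag; after the fix, "latest" is used:

```shell
registry="registry.access.redhat.com/openshift3"

# Before the fix (assumption based on this report): the tag is derived
# from the OpenShift version, yielding a tag that does not exist for
# the efs-provisioner image.
version="3.7.23"
broken_image="${registry}/efs-provisioner:v${version}"
echo "${broken_image}"   # prints: registry.access.redhat.com/openshift3/efs-provisioner:v3.7.23

# After the fix: the efs-provisioner deployment pins the "latest" tag,
# matching what `oc describe pod` shows above.
fixed_image="${registry}/efs-provisioner:latest"
echo "${fixed_image}"    # prints: registry.access.redhat.com/openshift3/efs-provisioner:latest
```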
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1576