Description of problem:

Version-Release number of the following components:

rpm -q openshift-ansible
[fedora@ip-172-31-33-174 openshift-ansible]$ git log --oneline -1
78f029e37 (HEAD -> release-3.7, origin/release-3.7) Merge pull request #6216 from sosiouxme/20171121-registry-console-3.7

rpm -q ansible
ansible-2.3.2.0-1.fc26.noarch

ansible --version
ansible 2.3.2.0

How reproducible:

Steps to Reproduce:
1. ansible-playbook -i inv.txt openshift-ansible/playbooks/byo/openshift-glusterfs/config.yml
2.
3.

Actual results:
Please include the entire output from the last TASK line through the end of output if an error is generated

Expected results:

Additional info:
Please attach logs from ansible-playbook with the -vvv flag

TASK [openshift_storage_glusterfs : Copy initial glusterblock provisioner resource file] *************************************
failed: [ec2-34-216-71-225.us-west-2.compute.amazonaws.com] (item=glusterblock-template.yml) => {"failed": true, "item": "glusterblock-template.yml", "msg": "Unable to find 'v3.7/glusterblock-template.yml' in expected paths."}
	to retry, use: --limit @/home/fedora/openshift-ansible/playbooks/byo/openshift-glusterfs/config.retry

PLAY RECAP *******************************************************************************************************************
ec2-34-209-241-179.us-west-2.compute.amazonaws.com : ok=47   changed=6    unreachable=0    failed=0
ec2-34-212-31-189.us-west-2.compute.amazonaws.com  : ok=41   changed=2    unreachable=0    failed=0
ec2-34-216-71-225.us-west-2.compute.amazonaws.com  : ok=106  changed=38   unreachable=0    failed=1
ec2-54-149-126-175.us-west-2.compute.amazonaws.com : ok=47   changed=6    unreachable=0    failed=0
ec2-54-186-184-225.us-west-2.compute.amazonaws.com : ok=47   changed=6    unreachable=0    failed=0
ec2-54-190-54-141.us-west-2.compute.amazonaws.com  : ok=41   changed=2    unreachable=0    failed=0
ec2-54-202-9-93.us-west-2.compute.amazonaws.com    : ok=41   changed=2    unreachable=0    failed=0
localhost                                          : ok=12   changed=0    unreachable=0    failed=0
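For context, the failing task follows the usual openshift_storage_glusterfs pattern of copying a versioned resource file out of the role. A minimal sketch of that pattern (task layout and variable names here are illustrative, not the exact role source); the error above is what ansible reports when files/v3.7/glusterblock-template.yml is absent from the role's search paths:

```yaml
# Hypothetical sketch of the failing task; the real task lives in
# roles/openshift_storage_glusterfs in openshift-ansible.
- name: Copy initial glusterblock provisioner resource file
  copy:
    # the version prefix would resolve to "v3.7" on this branch, so the
    # lookup is for "v3.7/glusterblock-template.yml" relative to files/
    src: "{{ openshift_storage_glusterfs_version }}/{{ item }}"
    dest: "{{ mktemp.stdout }}/{{ item }}"
  with_items:
  - glusterblock-template.yml
```

If the file was never added to the release-3.7 branch, the `copy` module's path search fails with exactly the "Unable to find ... in expected paths" message shown above.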
INSTALLER STATUS *************************************************************************************************************
Initialization      : Complete
GlusterFS Install   : In Progress
	This phase can be restarted by running: playbooks/byo/openshift-glusterfs/config.yml
https://github.com/openshift/openshift-ansible/pull/6211
$ git log --oneline -1
9f3ef22c0 (HEAD -> release-3.7, origin/release-3.7) Merge pull request #6211 from openshift-cherrypick-robot/cherry-pick-6150-to-release-3.7

# yum list installed | grep atomic-openshift.x86_64
atomic-openshift.x86_64    3.7.9-1.git.0.7c71a2d.el7

# oc get pod -o wide -n glusterfs
NAME                                          READY     STATUS    RESTARTS   AGE       IP              NODE
glusterblock-storage-provisioner-dc-1-cs2ng   1/1       Running   0          27s       172.22.0.6      ip-172-31-60-69.us-west-2.compute.internal
glusterfs-storage-7nxzd                       1/1       Running   0          4m        172.31.31.252   ip-172-31-31-252.us-west-2.compute.internal
glusterfs-storage-fp887                       1/1       Running   0          4m        172.31.28.79    ip-172-31-28-79.us-west-2.compute.internal
glusterfs-storage-v86ds                       1/1       Running   0          4m        172.31.47.169   ip-172-31-47-169.us-west-2.compute.internal
heketi-storage-1-74l66                        1/1       Running   0          1m        172.20.0.3      ip-172-31-28-79.us-west-2.compute.internal

# oc get pod -o yaml -n glusterfs | grep "image:"
      image: gluster/glusterblock-provisioner:latest
      image: docker.io/gluster/glusterblock-provisioner:latest
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:3.3.0-362

It seems that the following vars are not taken into account:
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_block_version=3.3.0-362
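For reference, the two variables were set in the inventory in the usual way. A sketch of the relevant excerpt (section placement is the standard convention; the inventory file itself is not attached here):

```ini
# Hypothetical [OSEv3:vars] excerpt. These two variables are expected to
# override the default gluster/glusterblock-provisioner:latest image, yet
# the provisioner pod above still runs the upstream image.
[OSEv3:vars]
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_block_version=3.3.0-362
```

The rhgs-server and rhgs-volmanager overrides clearly do take effect in the same run, which points at the block-provisioner template specifically ignoring its image/version variables.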
(In reply to Hongkai Liu from comment #4)

Can we split that off into a separate bug?
https://github.com/openshift/openshift-ansible/pull/6273 fixes it though
(In reply to Scott Dodson from comment #6)
> https://github.com/openshift/openshift-ansible/pull/6273 fixes it though

That is quick. Let me verify the PR now. Thanks, Scott.
[fedora@ip-172-31-33-174 openshift-ansible]$ git log --oneline -1
35d202921 (HEAD -> aaa) GlusterFS: Remove extraneous line from glusterblock template

$ ansible --version
ansible 2.4.1.0

# oc get pod -o wide -n glusterfs
NAME                                          READY     STATUS    RESTARTS   AGE       IP              NODE
glusterblock-storage-provisioner-dc-1-x7b6d   1/1       Running   0          3m        172.21.2.5      ip-172-31-35-161.us-west-2.compute.internal
glusterfs-storage-cdg9m                       1/1       Running   0          7m        172.31.47.169   ip-172-31-47-169.us-west-2.compute.internal
glusterfs-storage-j72qr                       1/1       Running   0          7m        172.31.28.79    ip-172-31-28-79.us-west-2.compute.internal
glusterfs-storage-mkrj6                       1/1       Running   0          7m        172.31.31.252   ip-172-31-31-252.us-west-2.compute.internal
heketi-storage-1-8kfvn                        1/1       Running   0          4m        172.22.0.7      ip-172-31-60-69.us-west-2.compute.internal

# oc get pod -o yaml -n glusterfs | grep "image:"
      image: registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-server-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:3.3.0-362
      image: registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7:3.3.0-362
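The image references can also be checked mechanically rather than by eye. A small sketch using POSIX parameter expansion to split a reference into repository and tag (the sample value is taken from the output above; in practice the input would come from `oc get pod -o yaml -n glusterfs | grep "image:"`):

```shell
#!/bin/sh
# Split an image reference into repository and tag, to confirm that both
# the configured block-provisioner image and version reached the pod spec.
image="registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7:3.3.0-362"
repo="${image%:*}"    # strip the shortest ':...' suffix -> repository
tag="${image##*:}"    # strip the longest '...:' prefix -> tag
printf '%s\n%s\n' "$repo" "$tag"
```

Comparing `repo` and `tag` against openshift_storage_glusterfs_block_image and openshift_storage_glusterfs_block_version is then a plain string equality check.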
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3464