Bug 1581622 - [CNS][3.10] Installation failed due to GlusterFS pods try to pull image from docker.io
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Jose A. Rivera
QA Contact: Wenkai Shi
URL:
Whiteboard:
Depends On: 1583500
Blocks: 1583148
 
Reported: 2018-05-23 09:00 UTC by Wenkai Shi
Modified: 2018-07-30 19:16 UTC
CC List: 11 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Cloned To: 1583148
Environment:
Last Closed: 2018-07-30 19:16:18 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2018:1816 (last updated 2018-07-30 19:16:38 UTC)

Description Wenkai Shi 2018-05-23 09:00:47 UTC
Description of problem:
Installation failed because GlusterFS pods try to pull rhgs3/rhgs-server-rhel7:latest from docker.io.

Version-Release number of the following components:
atomic-openshift-3.10.0-0.50.0.git.0.db6dfd6.el7
openshift-ansible-3.10.0-0.50.0.git.0.bd68ade.el7
ansible-2.4.4.0-1.el7ae.noarch

How reproducible:
100%

Steps to Reproduce:
1. Install OCP with CNS (see the minimal inventory sketch after this list)
2.
3.
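
A minimal inventory sketch for step 1 (illustrative only; hostnames, device paths, and variable values are assumptions, not taken from the actual inventory used in this bug):

[OSEv3:children]
masters
nodes
etcd
glusterfs

[OSEv3:vars]
openshift_deployment_type=openshift-enterprise
openshift_release=v3.10
# Note: no openshift_storage_glusterfs_*_image overrides are set, matching the failing run

# [masters], [etcd] and [nodes] sections omitted for brevity

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'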

Actual results:
# ansible-playbook -i inventory -vv /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
...
TASK [openshift_storage_glusterfs : Wait for GlusterFS pods] *********************************************************************************************************************************
task path: /usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/glusterfs_deploy.yml:102
Wednesday 23 May 2018  16:47:00 +0800 (0:00:02.212)       0:11:48.194 ********* 
FAILED - RETRYING: Wait for GlusterFS pods (30 retries left).
...
FAILED - RETRYING: Wait for GlusterFS pods (1 retries left).
fatal: [qe-weshi-cns-master-etcd-1.0523-wz1.qe.rhcloud.com]: FAILED! => {"attempts": 30, "changed": false, "failed": true, "results": {"cmd": "/usr/bin/oc get pod --selector=glusterfs=storage-pod -o json -n glusterfs", "results": [{"apiVersion": "v1", "items": [{"apiVersion": "v1", "kind": "Pod", "metadata": {"annotations": {"openshift.io/scc": "privileged"}, "creationTimestamp": "2018-05-23T08:47:07Z", "generateName": "glusterfs-storage-", "labels": {"controller-revision-hash": "2444950576", "glusterfs": "storage-pod", "glusterfs-node": "pod", "pod-template-generation": "1"}, "name": "glusterfs-storage-8hs2s", "namespace": "glusterfs", "ownerReferences": [{"apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "DaemonSet", "name": "glusterfs-storage", "uid": "e0007487-5e65-11e8-bd1b-42010af00002"}], "resourceVersion": "2597", "selfLink": "/api/v1/namespaces/glusterfs/pods/glusterfs-storage-8hs2s", "uid": "e00a887e-5e65-11e8-bd1b-42010af00002"}, "spec": {"containers": [{"env": [{"name": "GB_GLFS_LRU_COUNT", "value": "15"}, {"name": "TCMU_LOGDIR", "value": "/var/log/glusterfs/gluster-block"}, {"name": "GB_LOGDIR", "value": "/var/log/glusterfs/gluster-block"}], "image": "rhgs3/rhgs-server-rhel7:latest", "imagePullPolicy": "IfNotPresent", "livenessProbe": {"exec": {"command": ["/bin/bash", "-c", "systemctl status glusterd.service"]}, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3}, "name": "glusterfs", "readinessProbe": {"exec": {"command": ["/bin/bash", "-c", "systemctl status glusterd.service"]}, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3}, "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}}, "securityContext": {"capabilities": {}, "privileged": true}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/var/lib/heketi", "name": "glusterfs-heketi"}, {"mountPath": "/run", "name": "glusterfs-run"}, {"mountPath": "/run/lvm", "name": "glusterfs-lvm"}, {"mountPath": "/etc/glusterfs", "name": "glusterfs-etc"}, {"mountPath": "/var/log/glusterfs", "name": "glusterfs-logs"}, {"mountPath": "/var/lib/glusterd", "name": "glusterfs-config"}, {"mountPath": "/dev", "name": "glusterfs-dev"}, {"mountPath": "/var/lib/misc/glusterfsd", "name": "glusterfs-misc"}, {"mountPath": "/sys/fs/cgroup", "name": "glusterfs-cgroup", "readOnly": true}, {"mountPath": "/etc/ssl", "name": "glusterfs-ssl", "readOnly": true}, {"mountPath": "/usr/lib/modules", "name": "kernel-modules", "readOnly": true}, {"mountPath": "/etc/target", "name": "glusterfs-target"}, {"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-9662s", "readOnly": true}]}], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "imagePullSecrets": [{"name": "default-dockercfg-fgqb5"}], "nodeName": "qe-weshi-cns-glusterfs-node-1", "nodeSelector": {"glusterfs": "storage-host"}, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [{"effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure", "operator": "Exists"}, {"effect": "NoSchedule", "key": "node.kubernetes.io/disk-pressure", "operator": "Exists"}, {"effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists"}, {"effect": "NoExecute", "key": 
"node.kubernetes.io/unreachable", "operator": "Exists"}], "volumes": [{"hostPath": {"path": "/var/lib/heketi", "type": ""}, "name": "glusterfs-heketi"}, {"emptyDir": {}, "name": "glusterfs-run"}, {"hostPath": {"path": "/run/lvm", "type": ""}, "name": "glusterfs-lvm"}, {"hostPath": {"path": "/etc/glusterfs", "type": ""}, "name": "glusterfs-etc"}, {"hostPath": {"path": "/var/log/glusterfs", "type": ""}, "name": "glusterfs-logs"}, {"hostPath": {"path": "/var/lib/glusterd", "type": ""}, "name": "glusterfs-config"}, {"hostPath": {"path": "/dev", "type": ""}, "name": "glusterfs-dev"}, {"hostPath": {"path": "/var/lib/misc/glusterfsd", "type": ""}, "name": "glusterfs-misc"}, {"hostPath": {"path": "/sys/fs/cgroup", "type": ""}, "name": "glusterfs-cgroup"}, {"hostPath": {"path": "/etc/ssl", "type": ""}, "name": "glusterfs-ssl"}, {"hostPath": {"path": "/usr/lib/modules", "type": ""}, "name": "kernel-modules"}, {"hostPath": {"path": "/etc/target", "type": ""}, "name": "glusterfs-target"}, {"name": "default-token-9662s", "secret": {"defaultMode": 420, "secretName": "default-token-9662s"}}]}, "status": {"conditions": [{"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:08Z", "status": "True", "type": "Initialized"}, {"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:08Z", "message": "containers with unready status: [glusterfs]", "reason": "ContainersNotReady", "status": "False", "type": "Ready"}, {"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:08Z", "status": "True", "type": "PodScheduled"}], "containerStatuses": [{"image": "rhgs3/rhgs-server-rhel7:latest", "imageID": "", "lastState": {}, "name": "glusterfs", "ready": false, "restartCount": 0, "state": {"waiting": {"message": "Back-off pulling image \"rhgs3/rhgs-server-rhel7:latest\"", "reason": "ImagePullBackOff"}}}], "hostIP": "10.240.0.4", "phase": "Pending", "podIP": "10.240.0.4", "qosClass": "Burstable", "startTime": "2018-05-23T08:47:08Z"}}, {"apiVersion": "v1", "kind": "Pod", "metadata": {"annotations": {"openshift.io/scc": "privileged"}, "creationTimestamp": "2018-05-23T08:47:07Z", "generateName": "glusterfs-storage-", "labels": {"controller-revision-hash": "2444950576", "glusterfs": "storage-pod", "glusterfs-node": "pod", "pod-template-generation": "1"}, "name": "glusterfs-storage-9ls5f", "namespace": "glusterfs", "ownerReferences": [{"apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "DaemonSet", "name": "glusterfs-storage", "uid": "e0007487-5e65-11e8-bd1b-42010af00002"}], "resourceVersion": "2589", "selfLink": "/api/v1/namespaces/glusterfs/pods/glusterfs-storage-9ls5f", "uid": "e00a2698-5e65-11e8-bd1b-42010af00002"}, "spec": {"containers": [{"env": [{"name": "GB_GLFS_LRU_COUNT", "value": "15"}, {"name": "TCMU_LOGDIR", "value": "/var/log/glusterfs/gluster-block"}, {"name": "GB_LOGDIR", "value": "/var/log/glusterfs/gluster-block"}], "image": "rhgs3/rhgs-server-rhel7:latest", "imagePullPolicy": "IfNotPresent", "livenessProbe": {"exec": {"command": ["/bin/bash", "-c", "systemctl status glusterd.service"]}, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3}, "name": "glusterfs", "readinessProbe": {"exec": {"command": ["/bin/bash", "-c", "systemctl status glusterd.service"]}, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3}, "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}}, "securityContext": {"capabilities": {}, "privileged": true}, 
"terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/var/lib/heketi", "name": "glusterfs-heketi"}, {"mountPath": "/run", "name": "glusterfs-run"}, {"mountPath": "/run/lvm", "name": "glusterfs-lvm"}, {"mountPath": "/etc/glusterfs", "name": "glusterfs-etc"}, {"mountPath": "/var/log/glusterfs", "name": "glusterfs-logs"}, {"mountPath": "/var/lib/glusterd", "name": "glusterfs-config"}, {"mountPath": "/dev", "name": "glusterfs-dev"}, {"mountPath": "/var/lib/misc/glusterfsd", "name": "glusterfs-misc"}, {"mountPath": "/sys/fs/cgroup", "name": "glusterfs-cgroup", "readOnly": true}, {"mountPath": "/etc/ssl", "name": "glusterfs-ssl", "readOnly": true}, {"mountPath": "/usr/lib/modules", "name": "kernel-modules", "readOnly": true}, {"mountPath": "/etc/target", "name": "glusterfs-target"}, {"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-9662s", "readOnly": true}]}], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "imagePullSecrets": [{"name": "default-dockercfg-fgqb5"}], "nodeName": "qe-weshi-cns-glusterfs-node-2", "nodeSelector": {"glusterfs": "storage-host"}, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [{"effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure", "operator": "Exists"}, {"effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists"}, {"effect": "NoExecute", "key": "node.kubernetes.io/unreachable", "operator": "Exists"}, {"effect": "NoSchedule", "key": "node.kubernetes.io/disk-pressure", "operator": "Exists"}], "volumes": [{"hostPath": {"path": "/var/lib/heketi", "type": ""}, "name": "glusterfs-heketi"}, {"emptyDir": {}, "name": "glusterfs-run"}, {"hostPath": {"path": "/run/lvm", "type": ""}, "name": "glusterfs-lvm"}, {"hostPath": {"path": "/etc/glusterfs", "type": ""}, "name": "glusterfs-etc"}, {"hostPath": {"path": "/var/log/glusterfs", "type": ""}, "name": "glusterfs-logs"}, {"hostPath": {"path": "/var/lib/glusterd", "type": ""}, "name": "glusterfs-config"}, {"hostPath": {"path": "/dev", "type": ""}, "name": "glusterfs-dev"}, {"hostPath": {"path": "/var/lib/misc/glusterfsd", "type": ""}, "name": "glusterfs-misc"}, {"hostPath": {"path": "/sys/fs/cgroup", "type": ""}, "name": "glusterfs-cgroup"}, {"hostPath": {"path": "/etc/ssl", "type": ""}, "name": "glusterfs-ssl"}, {"hostPath": {"path": "/usr/lib/modules", "type": ""}, "name": "kernel-modules"}, {"hostPath": {"path": "/etc/target", "type": ""}, "name": "glusterfs-target"}, {"name": "default-token-9662s", "secret": {"defaultMode": 420, "secretName": "default-token-9662s"}}]}, "status": {"conditions": [{"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:07Z", "status": "True", "type": "Initialized"}, {"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:07Z", "message": "containers with unready status: [glusterfs]", "reason": "ContainersNotReady", "status": "False", "type": "Ready"}, {"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:07Z", "status": "True", "type": "PodScheduled"}], "containerStatuses": [{"image": "rhgs3/rhgs-server-rhel7:latest", "imageID": "", "lastState": {}, "name": "glusterfs", "ready": false, "restartCount": 0, "state": {"waiting": {"message": "Back-off pulling image \"rhgs3/rhgs-server-rhel7:latest\"", "reason": "ImagePullBackOff"}}}], "hostIP": "10.240.0.5", "phase": "Pending", "podIP": 
"10.240.0.5", "qosClass": "Burstable", "startTime": "2018-05-23T08:47:07Z"}}, {"apiVersion": "v1", "kind": "Pod", "metadata": {"annotations": {"openshift.io/scc": "privileged"}, "creationTimestamp": "2018-05-23T08:47:07Z", "generateName": "glusterfs-storage-", "labels": {"controller-revision-hash": "2444950576", "glusterfs": "storage-pod", "glusterfs-node": "pod", "pod-template-generation": "1"}, "name": "glusterfs-storage-qnbrr", "namespace": "glusterfs", "ownerReferences": [{"apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "DaemonSet", "name": "glusterfs-storage", "uid": "e0007487-5e65-11e8-bd1b-42010af00002"}], "resourceVersion": "2593", "selfLink": "/api/v1/namespaces/glusterfs/pods/glusterfs-storage-qnbrr", "uid": "e007793e-5e65-11e8-bd1b-42010af00002"}, "spec": {"containers": [{"env": [{"name": "GB_GLFS_LRU_COUNT", "value": "15"}, {"name": "TCMU_LOGDIR", "value": "/var/log/glusterfs/gluster-block"}, {"name": "GB_LOGDIR", "value": "/var/log/glusterfs/gluster-block"}], "image": "rhgs3/rhgs-server-rhel7:latest", "imagePullPolicy": "IfNotPresent", "livenessProbe": {"exec": {"command": ["/bin/bash", "-c", "systemctl status glusterd.service"]}, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3}, "name": "glusterfs", "readinessProbe": {"exec": {"command": ["/bin/bash", "-c", "systemctl status glusterd.service"]}, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3}, "resources": {"requests": {"cpu": "100m", "memory": "100Mi"}}, "securityContext": {"capabilities": {}, "privileged": true}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/var/lib/heketi", "name": "glusterfs-heketi"}, {"mountPath": "/run", "name": "glusterfs-run"}, {"mountPath": "/run/lvm", "name": "glusterfs-lvm"}, {"mountPath": "/etc/glusterfs", "name": "glusterfs-etc"}, {"mountPath": "/var/log/glusterfs", "name": "glusterfs-logs"}, {"mountPath": "/var/lib/glusterd", "name": "glusterfs-config"}, {"mountPath": "/dev", "name": "glusterfs-dev"}, {"mountPath": "/var/lib/misc/glusterfsd", "name": "glusterfs-misc"}, {"mountPath": "/sys/fs/cgroup", "name": "glusterfs-cgroup", "readOnly": true}, {"mountPath": "/etc/ssl", "name": "glusterfs-ssl", "readOnly": true}, {"mountPath": "/usr/lib/modules", "name": "kernel-modules", "readOnly": true}, {"mountPath": "/etc/target", "name": "glusterfs-target"}, {"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-9662s", "readOnly": true}]}], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "imagePullSecrets": [{"name": "default-dockercfg-fgqb5"}], "nodeName": "qe-weshi-cns-glusterfs-node-3", "nodeSelector": {"glusterfs": "storage-host"}, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [{"effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure", "operator": "Exists"}, {"effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists"}, {"effect": "NoExecute", "key": "node.kubernetes.io/unreachable", "operator": "Exists"}, {"effect": "NoSchedule", "key": "node.kubernetes.io/disk-pressure", "operator": "Exists"}], "volumes": [{"hostPath": {"path": "/var/lib/heketi", "type": ""}, "name": "glusterfs-heketi"}, {"emptyDir": {}, "name": "glusterfs-run"}, {"hostPath": 
{"path": "/run/lvm", "type": ""}, "name": "glusterfs-lvm"}, {"hostPath": {"path": "/etc/glusterfs", "type": ""}, "name": "glusterfs-etc"}, {"hostPath": {"path": "/var/log/glusterfs", "type": ""}, "name": "glusterfs-logs"}, {"hostPath": {"path": "/var/lib/glusterd", "type": ""}, "name": "glusterfs-config"}, {"hostPath": {"path": "/dev", "type": ""}, "name": "glusterfs-dev"}, {"hostPath": {"path": "/var/lib/misc/glusterfsd", "type": ""}, "name": "glusterfs-misc"}, {"hostPath": {"path": "/sys/fs/cgroup", "type": ""}, "name": "glusterfs-cgroup"}, {"hostPath": {"path": "/etc/ssl", "type": ""}, "name": "glusterfs-ssl"}, {"hostPath": {"path": "/usr/lib/modules", "type": ""}, "name": "kernel-modules"}, {"hostPath": {"path": "/etc/target", "type": ""}, "name": "glusterfs-target"}, {"name": "default-token-9662s", "secret": {"defaultMode": 420, "secretName": "default-token-9662s"}}]}, "status": {"conditions": [{"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:07Z", "status": "True", "type": "Initialized"}, {"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:07Z", "message": "containers with unready status: [glusterfs]", "reason": "ContainersNotReady", "status": "False", "type": "Ready"}, {"lastProbeTime": null, "lastTransitionTime": "2018-05-23T08:47:07Z", "status": "True", "type": "PodScheduled"}], "containerStatuses": [{"image": "rhgs3/rhgs-server-rhel7:latest", "imageID": "", "lastState": {}, "name": "glusterfs", "ready": false, "restartCount": 0, "state": {"waiting": {"message": "Back-off pulling image \"rhgs3/rhgs-server-rhel7:latest\"", "reason": "ImagePullBackOff"}}}], "hostIP": "10.240.0.6", "phase": "Pending", "podIP": "10.240.0.6", "qosClass": "Burstable", "startTime": "2018-05-23T08:47:07Z"}}], "kind": "List", "metadata": {"resourceVersion": "", "selfLink": ""}}], "returncode": 0}, "state": "list"}

# rpm -q atomic-openshift
atomic-openshift-3.10.0-0.50.0.git.0.db6dfd6.el7.x86_64
# oc describe po glusterfs-storage-8hs2s
...
    Image:          rhgs3/rhgs-server-rhel7:latest
    Image ID:       
...
Events:
  Type     Reason   Age              From                                    Message
  ----     ------   ----             ----                                    -------
  Normal   Pulling  5m               kubelet, qe-weshi-cns-glusterfs-node-1  pulling image "rhgs3/rhgs-server-rhel7:latest"
  Warning  Failed   5m               kubelet, qe-weshi-cns-glusterfs-node-1  Failed to pull image "rhgs3/rhgs-server-rhel7:latest": rpc error: code = Unknown desc = repository docker.io/rhgs3/rhgs-server-rhel7 not found: does not exist or no pull access
  Warning  Failed   5m               kubelet, qe-weshi-cns-glusterfs-node-1  Error: ErrImagePull
  Normal   BackOff  5m               kubelet, qe-weshi-cns-glusterfs-node-1  Back-off pulling image "rhgs3/rhgs-server-rhel7:latest"
  Warning  Failed   5m               kubelet, qe-weshi-cns-glusterfs-node-1  Error: ImagePullBackOff
  Normal   Pulling  3m (x4 over 4m)  kubelet, qe-weshi-cns-glusterfs-node-1  pulling image "rhgs3/rhgs-server-rhel7:latest"
  Warning  Failed   3m (x4 over 4m)  kubelet, qe-weshi-cns-glusterfs-node-1  Failed to pull image "rhgs3/rhgs-server-rhel7:latest": rpc error: code = Unknown desc = repository docker.io/rhgs3/rhgs-server-rhel7 not found: does not exist or no pull access
  Warning  Failed   3m (x4 over 4m)  kubelet, qe-weshi-cns-glusterfs-node-1  Error: ErrImagePull
  Warning  Failed   2m (x6 over 4m)  kubelet, qe-weshi-cns-glusterfs-node-1  Error: ImagePullBackOff
  Normal   BackOff  2m (x7 over 4m)  kubelet, qe-weshi-cns-glusterfs-node-1  Back-off pulling image "rhgs3/rhgs-server-rhel7:latest"

Expected results:
GlusterFS pods shouldn't try to pull rhgs3/rhgs-server-rhel7:latest from docker.io.

Additional info:
It only appears with atomic-openshift-3.10.0-0.50.0.git.0.db6dfd6.el7; when deploying with atomic-openshift-3.10.0-0.47.0.git.0.2fffa04.el7, the image is pulled from registry.access.redhat.com:

# rpm -q atomic-openshift
atomic-openshift-3.10.0-0.47.0.git.0.2fffa04.el7
# oc describe po glusterfs-storage-2bwct -n glusterfs
...
    Image:          rhgs3/rhgs-server-rhel7:latest
    Image ID:       docker-pullable://registry.access.redhat.com/rhgs3/rhgs-server-rhel7@sha256:42dc831e5452cf3371fd16cfb96944a5d3d4a16e2f5833d69ee2805c92f3c8a2
...

Also, it works well in 3.9, so this looks like a regression.
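
For reference, the image reference in the failing pod spec is unqualified ("rhgs3/rhgs-server-rhel7:latest"), so the node's container runtime decides which registry to search. One way to check the registry search order on an affected node (a diagnostic sketch; the output shown is an example of a healthy RHEL configuration, not captured from this environment):

# grep -i add_registry /etc/sysconfig/docker
ADD_REGISTRY='--add-registry registry.access.redhat.com'
# docker info 2>/dev/null | grep -i registries
Registries: registry.access.redhat.com (secure), docker.io (secure)

If registry.access.redhat.com is missing from that search list, an unqualified rhgs3/* reference falls through to docker.io, which would match the ImagePullBackOff events above.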

Comment 3 Jose A. Rivera 2018-05-24 14:22:22 UTC
We haven't changed anything about this from 0.47 to 0.50. Did something change with regard to how the default registry is configured?

Comment 4 Scott Dodson 2018-05-24 19:45:45 UTC
Wenkai,

Any chance that openshift_storage_glusterfs_image was set in your previous run but not these? I agree that there haven't been any changes to this code in quite some time.

Comment 5 Wenkai Shi 2018-05-25 02:46:53 UTC
(In reply to Scott Dodson from comment #4)
> Wenkai,
> 
> Any chance that openshift_storage_glusterfs_image was set in your previous
> run but not these? I agree that there haven't been any changes to this code
> in quite some time.

Hi Scott,
I didn't set openshift_storage_glusterfs_image in any of this bug's installation.

As I mentioned, it may be related to atomic-openshift rather than openshift-ansible. I tried deploying atomic-openshift-3.10.0-0.47.0.git.0.2fffa04.el7 with openshift-ansible-3.10.0-0.50.0.git.0.bd68ade.el7, and the issue doesn't appear.

So far the issue appears with atomic-openshift-3.10.0-0.50.0.git.0.db6dfd6.el7. I also gave it a shot with atomic-openshift-3.10.0-0.51.0.git.0.8bcf033.el7, and it still appears.

Comment 8 Brenton Leanhardt 2018-05-30 19:29:50 UTC
Jose, have you been able to reproduce this bug by any chance? It would be very helpful to try Wenkai's reproducer and confirm it did indeed start happening after 3.10.0-0.47.0.

Comment 9 Jose A. Rivera 2018-05-30 20:03:42 UTC
I have not been able to reproduce this; I don't have a reproducible downstream environment to test on. Moving this back to MODIFIED pending further results from Wenkai. I'll try to get a downstream environment running somewhere.

Comment 10 Wenkai Shi 2018-05-31 09:57:17 UTC
It can be worked around by setting the following parameters:

openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_s3_image=registry.access.redhat.com/rhgs3/rhgs-gluster-s3-server-rhel7
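
To confirm the workaround took effect after the deployment, something like the following should show the fully qualified image (a sketch; the exact Image and Image ID values will differ):

# oc describe pod -l glusterfs=storage-pod -n glusterfs | grep -E 'Image:|Image ID:'
    Image:          registry.access.redhat.com/rhgs3/rhgs-server-rhel7:latest
    Image ID:       docker-pullable://registry.access.redhat.com/rhgs3/rhgs-server-rhel7@sha256:...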

Comment 14 Wenkai Shi 2018-06-11 03:22:36 UTC
It's related to:

https://github.com/openshift/openshift-ansible/pull/8585

Comment 15 Brenton Leanhardt 2018-06-11 12:27:29 UTC
Seth, is this related to https://bugzilla.redhat.com/show_bug.cgi?id=1583500 ?

Comment 16 Scott Dodson 2018-06-11 12:54:29 UTC
Wenkai,

This problem should no longer exist in openshift-ansible-3.10.0-0.57.0 as long as oreg_url is set. Have you tested this version?


Neha, Brenton,

Yes, but by default the installer will set fully qualified names; the inventory you're using bypasses this. I'd suggest setting fully qualified names for these variables you're setting:

openshift_storage_glusterfs_heketi_image='brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-volmanager-rhel7'
openshift_storage_glusterfs_block_image='brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-gluster-block-prov-rhel7'
openshift_storage_glusterfs_image='brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7'

Relying on the additional registries functionality is something we should put a stop to in 3.10, at least in installer code. For Origin it may be wise to preserve existing behavior while making it clear that it's deprecated.
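
For context, oreg_url in a 3.10 inventory typically looks like the line below (a sketch using the default Red Hat registry; substitute whichever registry the cluster should pull OpenShift images from):

oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}

With oreg_url set, the installer can derive fully qualified defaults for the rhgs3/* images from the same registry, avoiding the unqualified-name fallback to docker.io described above.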

Comment 18 Wenkai Shi 2018-06-12 02:15:30 UTC
(In reply to Scott Dodson from comment #16)
> Wenkai,
> 
> This problem should no longer exist in openshift-ansible-3.10.0-0.57.0 as
> long as oreg_url is set, have you tested this version?
> 

Yes, I've tested this version. I can verify it once move to ON_QA.

Comment 19 Scott Dodson 2018-06-12 12:55:25 UTC
Moving ON_QA

Comment 20 Wenkai Shi 2018-06-13 02:27:58 UTC
Verified with version openshift-ansible-3.10.0-0.64.0.git.20.48df973.el7; it can pull the docker image from the registry defined in oreg_url.

Comment 21 Takayoshi Tanaka 2018-06-16 08:41:10 UTC
The customer is facing a similar issue when installing CNS with OpenShift 3.9. Could you tell me whether setting the following parameters works as a workaround?

openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/rhgs-volmanager-rhel7
openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-server-rhel7
openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/rhgs-gluster-block-prov-rhel7
openshift_storage_glusterfs_s3_image=registry.access.redhat.com/rhgs3/rhgs-gluster-s3-server-rhel7

After setting these parameters, the customer now faces the following error:
>                exception: 'glusterfs_heketi_url' is undefined

Comment 22 Scott Dodson 2018-06-18 20:29:14 UTC
Takayoshi,

The problems in 3.9 are not the same as what's being discussed in this bug, or at least do not have the same fix as they will in 3.10.

Can you please open a new bug with as much log output as possible and the inventory file you're using when you encounter the error on 3.9? Those configuration variables should be fine.

Comment 24 Takayoshi Tanaka 2018-06-19 00:24:03 UTC
Scott,

Thank you for the reply.
We'll check the case again and create a new BZ if needed.

Comment 25 Wenkai Shi 2018-06-20 02:29:10 UTC
(In reply to Takayoshi Tanaka from comment #21)
> The customer is facing the similar issue when installing CNS with OpenShift
> 3.9. Could you tell me setting the following parameters can be worked around?
> 
> openshift_storage_glusterfs_heketi_image=registry.access.redhat.com/rhgs3/
> rhgs-volmanager-rhel7
> openshift_storage_glusterfs_image=registry.access.redhat.com/rhgs3/rhgs-
> server-rhel7
> openshift_storage_glusterfs_block_image=registry.access.redhat.com/rhgs3/
> rhgs-gluster-block-prov-rhel7
> openshift_storage_glusterfs_s3_image=registry.access.redhat.com/rhgs3/rhgs-
> gluster-s3-server-rhel7
> 
> After setting these parameteres, the customer now faces the following error.
> >                exception: 'glusterfs_heketi_url' is undefined

We have BZ #1583148 to track this issue in 3.9.

The error "'glusterfs_heketi_url' is undefined" looks like another issue.

Comment 27 errata-xmlrpc 2018-07-30 19:16:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

