Bug 1615982
| Summary: | Installer fails on "Wait for GlusterFS pods" | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Matt Bruzek <mbruzek> |
| Component: | Installer | Assignee: | Jose A. Rivera <jarrpa> |
| Status: | CLOSED ERRATA | QA Contact: | Wenkai Shi <weshi> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.11.0 | CC: | aos-bugs, jokerman, mmccomas, scuppett, weshi, wsun, xtian |
| Target Milestone: | --- | Keywords: | TestBlocker |
| Target Release: | 3.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-11 07:24:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1615264 | | |
I added some more debug statements after the task (and set `failed_when: False` so the task would pass); the output is here:
TASK [openshift_storage_glusterfs : Print the status conditions] ***************
task path: /home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/wait_for_pods.yml:32
Tuesday 14 August 2018 15:24:53 -0400 (0:00:00.130) 0:25:21.497 ********
ok: [master-0.scale-ci.example.com] => {
"glusterfs_pods_wait.results.results[0]['items'] | lib_utils_oo_collect(attribute='status.conditions')": [
[
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:19:41Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:20:56Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": null,
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:19:41Z",
"status": "True",
"type": "PodScheduled"
}
],
[
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:19:41Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:20:34Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": null,
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:19:41Z",
"status": "True",
"type": "PodScheduled"
}
],
[
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:19:41Z",
"status": "True",
"type": "Initialized"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:20:56Z",
"status": "True",
"type": "Ready"
},
{
"lastProbeTime": null,
"lastTransitionTime": null,
"status": "True",
"type": "ContainersReady"
},
{
"lastProbeTime": null,
"lastTransitionTime": "2018-08-14T19:19:41Z",
"status": "True",
"type": "PodScheduled"
}
]
]
}
TASK [openshift_storage_glusterfs : Print the pods in ready state.] ************
task path: /home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/wait_for_pods.yml:36
Tuesday 14 August 2018 15:24:53 -0400 (0:00:00.083) 0:25:21.580 ********
ok: [master-0.scale-ci.example.com] => {
"glusterfs_pods_wait.results.results[0]['items'] | lib_utils_oo_collect(attribute='status.conditions') | lib_utils_oo_collect(attribute='status', filters={'type': 'Ready'})": [
"True",
"True",
"True"
]
}
*** Bug 1617949 has been marked as a duplicate of this bug. ***

Verified with version openshift-ansible-3.11.0-0.25.0.git.0.7497e69.el7: the installer no longer fails on "Wait for GlusterFS pods".

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2652
Description of problem:
While running the openshift-ansible installer I got a failure waiting for GlusterFS pods. I believe the pods are up and running and the logic for the wait task is incorrect. Before the install failed I was able to query the glusterfs pods on the master, and they are all running.

root@master-0: /home/openshift # oc get pods -n glusterfs
NAME                      READY     STATUS    RESTARTS   AGE
glusterfs-storage-5nvhv   1/1       Running   0          4m
glusterfs-storage-bjkbr   1/1       Running   0          4m
glusterfs-storage-tslcd   1/1       Running   0          4m

Version-Release number of the following components:
ansible 2.6.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/cloud-user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Jul 16 2018, 19:52:45) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]

How reproducible:
100% of the time

Steps to Reproduce:
1. Install OpenShift on OpenStack using glusterfs.
2. Manually approve all pending CSRs.
3. Notice the install fails waiting for GlusterFS pods.

Actual results:
The wait task retries several times to check for the glusterfs pods being Ready; the pods appear to be in the Ready state, but the task retries until failure.

Failure summary:

  1. Hosts:    master-0.scale-ci.example.com
     Play:     Configure GlusterFS
     Task:     Wait for GlusterFS pods
     Message:  Failed without returning a message.

Additional info:
I believe the problem lies in the wait_for_pods.yml file:
https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_storage_glusterfs/tasks/wait_for_pods.yml#L12

According to the log output from the failing task, there are 3 glusterfs pods with status.conditions -> status=True, type=Ready in the final output. This should have passed the when condition and moved on.

fatal: [master-1.scale-ci.example.com]: FAILED!
=> { "attempts": 30, "changed": false, "invocation": { "module_args": { "all_namespaces": null, "content": null, "debug": false, "delete_after": false, "field_selector": null, "files": null, "force": false, "kind": "pod", "kubeconfig": "/etc/origin/master/admin.kubeconfig", "name": null, "namespace": "glusterfs", "selector": "glusterfs=storage-pod", "state": "list" } }, "results": { "cmd": "/bin/oc get pod --selector=glusterfs=storage-pod -o json -n glusterfs", "results": [ { "apiVersion": "v1", "items": [ { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "openshift.io/scc": "privileged" }, "creationTimestamp": "2018-08-14T16:02:23Z", "generateName": "glusterfs-storage-", "labels": { "controller-revision-hash": "3129677946", "glusterfs": "storage-pod", "glusterfs-node": "pod", "pod-template-generation": "1" }, "name": "glusterfs-storage-5nvhv", "namespace": "glusterfs", "ownerReferences": [ { "apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "DaemonSet", "name": "glusterfs-storage", "uid": "6eacc84d-9fdb-11e8-a9a4-fa163e0c8137" } ], "resourceVersion": "2514", "selfLink": "/api/v1/namespaces/glusterfs/pods/glusterfs-storage-5nvhv", "uid": "6eb155e9-9fdb-11e8-b9ef-fa163e1c2665" }, "spec": { "containers": [ { "env": [ { "name": "GB_GLFS_LRU_COUNT", "value": "15" }, { "name": "TCMU_LOGDIR", "value": "/var/log/glusterfs/gluster-block" }, { "name": "GB_LOGDIR", "value": "/var/log/glusterfs/gluster-block" } ], "image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7:3.3.1-28", "imagePullPolicy": "IfNotPresent", "livenessProbe": { "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3 }, "name": "glusterfs", "readinessProbe": { "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }, "failureThreshold": 50, "initialDelaySeconds": 
40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3 }, "resources": { "requests": { "cpu": "100m", "memory": "100Mi" } }, "securityContext": { "capabilities": {}, "privileged": true }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/lib/heketi", "name": "glusterfs-heketi" }, { "mountPath": "/run", "name": "glusterfs-run" }, { "mountPath": "/run/lvm", "name": "glusterfs-lvm" }, { "mountPath": "/etc/glusterfs", "name": "glusterfs-etc" }, { "mountPath": "/var/log/glusterfs", "name": "glusterfs-logs" }, { "mountPath": "/var/lib/glusterd", "name": "glusterfs-config" }, { "mountPath": "/dev", "name": "glusterfs-dev" }, { "mountPath": "/var/lib/misc/glusterfsd", "name": "glusterfs-misc" }, { "mountPath": "/sys/fs/cgroup", "name": "glusterfs-cgroup", "readOnly": true }, { "mountPath": "/etc/ssl", "name": "glusterfs-ssl", "readOnly": true }, { "mountPath": "/usr/lib/modules", "name": "kernel-modules", "readOnly": true }, { "mountPath": "/etc/target", "name": "glusterfs-target" }, { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-hv78z", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "imagePullSecrets": [ { "name": "default-dockercfg-9qgs7" } ], "nodeName": "cns-0.scale-ci.example.com", "nodeSelector": { "glusterfs": "storage-host" }, "priority": 0, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure", "operator": "Exists" }, { "effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists" }, { "effect": "NoExecute", "key": "node.kubernetes.io/unreachable", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node.kubernetes.io/disk-pressure", "operator": "Exists" } ], 
"volumes": [ { "hostPath": { "path": "/var/lib/heketi", "type": "" }, "name": "glusterfs-heketi" }, { "emptyDir": {}, "name": "glusterfs-run" }, { "hostPath": { "path": "/run/lvm", "type": "" }, "name": "glusterfs-lvm" }, { "hostPath": { "path": "/etc/glusterfs", "type": "" }, "name": "glusterfs-etc" }, { "hostPath": { "path": "/var/log/glusterfs", "type": "" }, "name": "glusterfs-logs" }, { "hostPath": { "path": "/var/lib/glusterd", "type": "" }, "name": "glusterfs-config" }, { "hostPath": { "path": "/dev", "type": "" }, "name": "glusterfs-dev" }, { "hostPath": { "path": "/var/lib/misc/glusterfsd", "type": "" }, "name": "glusterfs-misc" }, { "hostPath": { "path": "/sys/fs/cgroup", "type": "" }, "name": "glusterfs-cgroup" }, { "hostPath": { "path": "/etc/ssl", "type": "" }, "name": "glusterfs-ssl" }, { "hostPath": { "path": "/usr/lib/modules", "type": "" }, "name": "kernel-modules" }, { "hostPath": { "path": "/etc/target", "type": "" }, "name": "glusterfs-target" }, { "name": "default-token-hv78z", "secret": { "defaultMode": 420, "secretName": "default-token-hv78z" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:02:23Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:03:23Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": null, "status": "True", "type": "ContainersReady" }, { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:02:23Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://b48888b60b81e25244091abfdfaca52102735b5398191929283374763072d020", "image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7:3.3.1-28", "imageID": "docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7@sha256:fab3497895ccccb17e62fefaefbf8fe3b4cc5b30e10cd7d6bb018754a19e3d51", "lastState": {}, "name": "glusterfs", 
"ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-08-14T16:02:35Z" } } } ], "hostIP": "192.168.0.15", "phase": "Running", "podIP": "192.168.0.15", "qosClass": "Burstable", "startTime": "2018-08-14T16:02:23Z" } }, { "apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "openshift.io/scc": "privileged" }, "creationTimestamp": "2018-08-14T16:02:23Z", "generateName": "glusterfs-storage-", "labels": { "controller-revision-hash": "3129677946", "glusterfs": "storage-pod", "glusterfs-node": "pod", "pod-template-generation": "1" }, "name": "glusterfs-storage-bjkbr", "namespace": "glusterfs", "ownerReferences": [ { "apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "DaemonSet", "name": "glusterfs-storage", "uid": "6eacc84d-9fdb-11e8-a9a4-fa163e0c8137" } ], "resourceVersion": "2525", "selfLink": "/api/v1/namespaces/glusterfs/pods/glusterfs-storage-bjkbr", "uid": "6eafac36-9fdb-11e8-b9ef-fa163e1c2665" }, "spec": { "containers": [ { "env": [ { "name": "GB_GLFS_LRU_COUNT", "value": "15" }, { "name": "TCMU_LOGDIR", "value": "/var/log/glusterfs/gluster-block" }, { "name": "GB_LOGDIR", "value": "/var/log/glusterfs/gluster-block" } ], "image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7:3.3.1-28", "imagePullPolicy": "IfNotPresent", "livenessProbe": { "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3 }, "name": "glusterfs", "readinessProbe": { "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3 }, "resources": { "requests": { "cpu": "100m", "memory": "100Mi" } }, "securityContext": { "capabilities": {}, "privileged": true }, "terminationMessagePath": "/dev/termination-log", 
"terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/lib/heketi", "name": "glusterfs-heketi" }, { "mountPath": "/run", "name": "glusterfs-run" }, { "mountPath": "/run/lvm", "name": "glusterfs-lvm" }, { "mountPath": "/etc/glusterfs", "name": "glusterfs-etc" }, { "mountPath": "/var/log/glusterfs", "name": "glusterfs-logs" }, { "mountPath": "/var/lib/glusterd", "name": "glusterfs-config" }, { "mountPath": "/dev", "name": "glusterfs-dev" }, { "mountPath": "/var/lib/misc/glusterfsd", "name": "glusterfs-misc" }, { "mountPath": "/sys/fs/cgroup", "name": "glusterfs-cgroup", "readOnly": true }, { "mountPath": "/etc/ssl", "name": "glusterfs-ssl", "readOnly": true }, { "mountPath": "/usr/lib/modules", "name": "kernel-modules", "readOnly": true }, { "mountPath": "/etc/target", "name": "glusterfs-target" }, { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-hv78z", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "imagePullSecrets": [ { "name": "default-dockercfg-9qgs7" } ], "nodeName": "cns-1.scale-ci.example.com", "nodeSelector": { "glusterfs": "storage-host" }, "priority": 0, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure", "operator": "Exists" }, { "effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists" }, { "effect": "NoExecute", "key": "node.kubernetes.io/unreachable", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node.kubernetes.io/disk-pressure", "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/var/lib/heketi", "type": "" }, "name": "glusterfs-heketi" }, { "emptyDir": {}, "name": "glusterfs-run" }, { "hostPath": { "path": "/run/lvm", "type": "" }, "name": "glusterfs-lvm" }, { "hostPath": { "path": 
"/etc/glusterfs", "type": "" }, "name": "glusterfs-etc" }, { "hostPath": { "path": "/var/log/glusterfs", "type": "" }, "name": "glusterfs-logs" }, { "hostPath": { "path": "/var/lib/glusterd", "type": "" }, "name": "glusterfs-config" }, { "hostPath": { "path": "/dev", "type": "" }, "name": "glusterfs-dev" }, { "hostPath": { "path": "/var/lib/misc/glusterfsd", "type": "" }, "name": "glusterfs-misc" }, { "hostPath": { "path": "/sys/fs/cgroup", "type": "" }, "name": "glusterfs-cgroup" }, { "hostPath": { "path": "/etc/ssl", "type": "" }, "name": "glusterfs-ssl" }, { "hostPath": { "path": "/usr/lib/modules", "type": "" }, "name": "kernel-modules" }, { "hostPath": { "path": "/etc/target", "type": "" }, "name": "glusterfs-target" }, { "name": "default-token-hv78z", "secret": { "defaultMode": 420, "secretName": "default-token-hv78z" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:02:23Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:03:28Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": null, "status": "True", "type": "ContainersReady" }, { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:02:23Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://2cf9df05d3cce42658adebbe8d1de67501e6b05c3463e3373dabadcc751bae04", "image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7:3.3.1-28", "imageID": "docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7@sha256:fab3497895ccccb17e62fefaefbf8fe3b4cc5b30e10cd7d6bb018754a19e3d51", "lastState": {}, "name": "glusterfs", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-08-14T16:02:35Z" } } } ], "hostIP": "192.168.0.17", "phase": "Running", "podIP": "192.168.0.17", "qosClass": "Burstable", "startTime": "2018-08-14T16:02:23Z" } }, { 
"apiVersion": "v1", "kind": "Pod", "metadata": { "annotations": { "openshift.io/scc": "privileged" }, "creationTimestamp": "2018-08-14T16:02:23Z", "generateName": "glusterfs-storage-", "labels": { "controller-revision-hash": "3129677946", "glusterfs": "storage-pod", "glusterfs-node": "pod", "pod-template-generation": "1" }, "name": "glusterfs-storage-tslcd", "namespace": "glusterfs", "ownerReferences": [ { "apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "DaemonSet", "name": "glusterfs-storage", "uid": "6eacc84d-9fdb-11e8-a9a4-fa163e0c8137" } ], "resourceVersion": "2523", "selfLink": "/api/v1/namespaces/glusterfs/pods/glusterfs-storage-tslcd", "uid": "6eb12d1d-9fdb-11e8-b9ef-fa163e1c2665" }, "spec": { "containers": [ { "env": [ { "name": "GB_GLFS_LRU_COUNT", "value": "15" }, { "name": "TCMU_LOGDIR", "value": "/var/log/glusterfs/gluster-block" }, { "name": "GB_LOGDIR", "value": "/var/log/glusterfs/gluster-block" } ], "image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7:3.3.1-28", "imagePullPolicy": "IfNotPresent", "livenessProbe": { "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3 }, "name": "glusterfs", "readinessProbe": { "exec": { "command": [ "/bin/bash", "-c", "systemctl status glusterd.service" ] }, "failureThreshold": 50, "initialDelaySeconds": 40, "periodSeconds": 25, "successThreshold": 1, "timeoutSeconds": 3 }, "resources": { "requests": { "cpu": "100m", "memory": "100Mi" } }, "securityContext": { "capabilities": {}, "privileged": true }, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [ { "mountPath": "/var/lib/heketi", "name": "glusterfs-heketi" }, { "mountPath": "/run", "name": "glusterfs-run" }, { "mountPath": "/run/lvm", "name": "glusterfs-lvm" }, { "mountPath": "/etc/glusterfs", "name": 
"glusterfs-etc" }, { "mountPath": "/var/log/glusterfs", "name": "glusterfs-logs" }, { "mountPath": "/var/lib/glusterd", "name": "glusterfs-config" }, { "mountPath": "/dev", "name": "glusterfs-dev" }, { "mountPath": "/var/lib/misc/glusterfsd", "name": "glusterfs-misc" }, { "mountPath": "/sys/fs/cgroup", "name": "glusterfs-cgroup", "readOnly": true }, { "mountPath": "/etc/ssl", "name": "glusterfs-ssl", "readOnly": true }, { "mountPath": "/usr/lib/modules", "name": "kernel-modules", "readOnly": true }, { "mountPath": "/etc/target", "name": "glusterfs-target" }, { "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-hv78z", "readOnly": true } ] } ], "dnsPolicy": "ClusterFirst", "hostNetwork": true, "imagePullSecrets": [ { "name": "default-dockercfg-9qgs7" } ], "nodeName": "cns-2.scale-ci.example.com", "nodeSelector": { "glusterfs": "storage-host" }, "priority": 0, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [ { "effect": "NoSchedule", "key": "node.kubernetes.io/memory-pressure", "operator": "Exists" }, { "effect": "NoSchedule", "key": "node.kubernetes.io/disk-pressure", "operator": "Exists" }, { "effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists" }, { "effect": "NoExecute", "key": "node.kubernetes.io/unreachable", "operator": "Exists" } ], "volumes": [ { "hostPath": { "path": "/var/lib/heketi", "type": "" }, "name": "glusterfs-heketi" }, { "emptyDir": {}, "name": "glusterfs-run" }, { "hostPath": { "path": "/run/lvm", "type": "" }, "name": "glusterfs-lvm" }, { "hostPath": { "path": "/etc/glusterfs", "type": "" }, "name": "glusterfs-etc" }, { "hostPath": { "path": "/var/log/glusterfs", "type": "" }, "name": "glusterfs-logs" }, { "hostPath": { "path": "/var/lib/glusterd", "type": "" }, "name": "glusterfs-config" }, { "hostPath": { "path": "/dev", "type": 
"" }, "name": "glusterfs-dev" }, { "hostPath": { "path": "/var/lib/misc/glusterfsd", "type": "" }, "name": "glusterfs-misc" }, { "hostPath": { "path": "/sys/fs/cgroup", "type": "" }, "name": "glusterfs-cgroup" }, { "hostPath": { "path": "/etc/ssl", "type": "" }, "name": "glusterfs-ssl" }, { "hostPath": { "path": "/usr/lib/modules", "type": "" }, "name": "kernel-modules" }, { "hostPath": { "path": "/etc/target", "type": "" }, "name": "glusterfs-target" }, { "name": "default-token-hv78z", "secret": { "defaultMode": 420, "secretName": "default-token-hv78z" } } ] }, "status": { "conditions": [ { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:02:23Z", "status": "True", "type": "Initialized" }, { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:03:27Z", "status": "True", "type": "Ready" }, { "lastProbeTime": null, "lastTransitionTime": null, "status": "True", "type": "ContainersReady" }, { "lastProbeTime": null, "lastTransitionTime": "2018-08-14T16:02:23Z", "status": "True", "type": "PodScheduled" } ], "containerStatuses": [ { "containerID": "docker://229097acde9ef73afa0716199b46056c57ed4b7bdce74d122e7f5e8bf5f68e65", "image": "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7:3.3.1-28", "imageID": "docker-pullable://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhgs3/rhgs-server-rhel7@sha256:fab3497895ccccb17e62fefaefbf8fe3b4cc5b30e10cd7d6bb018754a19e3d51", "lastState": {}, "name": "glusterfs", "ready": true, "restartCount": 0, "state": { "running": { "startedAt": "2018-08-14T16:02:35Z" } } } ], "hostIP": "192.168.0.21", "phase": "Running", "podIP": "192.168.0.21", "qosClass": "Burstable", "startTime": "2018-08-14T16:02:23Z" } } ], "kind": "List", "metadata": { "resourceVersion": "", "selfLink": "" } } ], "returncode": 0 }, "state": "list" } I was asked to put some debug statements in the code before the wait task to print out the value of `glusterfs_count` (if defined) and the glusterfs_nodes TASK 
[openshift_storage_glusterfs : debug] *************************************
task path: /home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/wait_for_pods.yml:2
Tuesday 14 August 2018 12:02:23 -0400 (0:00:01.438) 0:19:13.581 ********
skipping: [master-1.scale-ci.example.com] => {}

TASK [openshift_storage_glusterfs : debug] *************************************
task path: /home/cloud-user/openshift-ansible/roles/openshift_storage_glusterfs/tasks/wait_for_pods.yml:6
Tuesday 14 August 2018 12:02:23 -0400 (0:00:00.054) 0:19:13.635 ********
ok: [master-1.scale-ci.example.com] => {
    "glusterfs_nodes": [
        "cns-0.scale-ci.example.com",
        "cns-1.scale-ci.example.com",
        "cns-2.scale-ci.example.com"
    ]
}
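Since `glusterfs_count` is undefined here (the first debug task is skipped) while `glusterfs_nodes` lists three hosts, a robust check would derive the expected pod count up front and compare it against the number of Ready pods. The sketch below is an assumption about the shape of a fix, not the actual patch that shipped; `glusterfs_pods_expected` is a hypothetical variable introduced for illustration:

```yaml
# Hypothetical sketch of a more robust readiness check, assuming the
# oc_obj module and lib_utils_oo_collect filter shown in the logs.
- name: Set the expected GlusterFS pod count
  set_fact:
    glusterfs_pods_expected: "{{ glusterfs_count | default(glusterfs_nodes | count) }}"

- name: Wait for GlusterFS pods
  oc_obj:
    namespace: glusterfs
    kind: pod
    state: list
    selector: "glusterfs=storage-pod"
  register: glusterfs_pods_wait
  # Count only Ready conditions whose status is the string "True" and
  # require exactly as many as there are expected pods.
  until: >-
    glusterfs_pods_wait.results.results[0]['items']
    | lib_utils_oo_collect(attribute='status.conditions')
    | lib_utils_oo_collect(attribute='status', filters={'type': 'Ready'})
    | select('equalto', 'True') | list | count
    == glusterfs_pods_expected | int
  delay: 10
  retries: 30
```

Anchoring the comparison to a count computed before the loop avoids depending on a variable (`glusterfs_count`) that may be undefined at the time the wait task runs.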