| Summary: | [platformmanagement_public_596] No pod is created when creating a daemonset | | |
|---|---|---|---|
| Product: | OKD | Reporter: | DeShuai Ma <dma> |
| Component: | Pod | Assignee: | Paul Weil <pweil> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | DeShuai Ma <dma> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.x | CC: | aos-bugs, mmccomas |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-05-12 17:12:01 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
DeShuai Ma
2016-02-02 09:57:04 UTC
Were there any errors in the logs or events? I will retest to try to reproduce; I was able to launch a daemonset in the vagrant environment and get pods. Please check the events and logs for errors. Here is what I used to test:
[vagrant@openshiftdev daemonset]$ oc create -f simple-ds.json
daemonset "hello-daemonset" created
[vagrant@openshiftdev daemonset]$ oc describe ds hello-daemonset
Name: hello-daemonset
Image(s): openshift/hello-openshift
Selector:
Node-Selector: <none>
Labels: name=hello-daemonset
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Misscheduled: 0
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
6s 6s 1 {daemon-set } Normal SuccessfulCreate Created pod: hello-daemonset-8nnz1
[vagrant@openshiftdev daemonset]$ oc get pods
NAME READY STATUS RESTARTS AGE
hello-daemonset-8nnz1 1/1 Running 0 13s
[vagrant@openshiftdev daemonset]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da0377928396 openshift/hello-openshift "/hello-openshift" 14 seconds ago Up 13 seconds k8s_registry.9a2c9770_hello-daemonset-8nnz1_default_812deb2c-c9c6-11e5-aa2d-080027c5bfa9_cffdfb34
0d8a8f5982cf openshift/origin-pod:latest "/pod" 16 seconds ago Up 15 seconds k8s_POD.7ab2fe82_hello-daemonset-8nnz1_default_812deb2c-c9c6-11e5-aa2d-080027c5bfa9_f5b26443
[vagrant@openshiftdev daemonset]$ cat simple-ds.json
{
    "kind": "DaemonSet",
    "apiVersion": "extensions/v1beta1",
    "metadata": {
        "name": "hello-daemonset"
    },
    "spec": {
        "selector": {
            "name": "hello-daemonset"
        },
        "template": {
            "metadata": {
                "labels": {
                    "name": "hello-daemonset"
                }
            },
            "spec": {
                "serviceAccountName": "default",
                "containers": [
                    {
                        "name": "registry",
                        "image": "openshift/hello-openshift",
                        "ports": [
                            {
                                "containerPort": 80
                            }
                        ]
                    }
                ]
            }
        }
    }
}
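An aside on the selector format: the oc describe output above reports an empty Selector even though simple-ds.json sets one, which suggests the flat map form was not parsed by the server. In extensions/v1beta1 the selector is commonly expressed as a LabelSelector; the matchLabels variant below is a sketch worth trying if pods fail to appear, with the caveat that which form this particular server build expects is an assumption:

"selector": {
    "matchLabels": {
        "name": "hello-daemonset"
    }
},

If spec.selector is omitted entirely, the controller typically defaults it to the pod template's labels, which may explain why the example above still schedules pods despite the empty Selector in describe.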
Verified this bug on the latest env:

[root@ip-172-18-9-176 ~]# openshift version
openshift v1.1.1-385-g2fa2261-dirty
kubernetes v1.2.0-alpha.4-851-g4a65fa1
etcd 2.2.2
[root@ip-172-18-9-176 ~]# oc get node
NAME                           LABELS                                                            STATUS    AGE
ip-172-18-9-176.ec2.internal   daemon=yes,kubernetes.io/hostname=ip-172-18-9-176.ec2.internal    Ready     54m
[root@ip-172-18-9-176 ~]# oc get daemonset -n dma
NAME              CONTAINER(S)   IMAGE(S)                    SELECTOR   NODE-SELECTOR
hello-daemonset   registry       openshift/hello-openshift              <none>
[root@ip-172-18-9-176 ~]# oc get pod -n dma
NAME                    READY     STATUS    RESTARTS   AGE
hello-daemonset-8ayu1   1/1       Running   0          42m
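For anyone who still hits the original symptom (daemonset created but no pods), a minimal triage sequence along these lines should surface the cause; the dma namespace, hello-daemonset name, and pod name are taken from the verification above:

oc describe ds hello-daemonset -n dma   # desired/current counts plus any FailedCreate events
oc get events -n dma                    # controller and scheduler errors for the namespace
oc get node                             # confirm nodes are Ready and carry the labels any node-selector expects
oc logs hello-daemonset-8ayu1 -n dma    # container logs once a pod exists

If pods never appear at all, FailedCreate events from the daemon-set controller are the usual first clue.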