Bug 1303871 - [platformmanagement_public_596] No pods are created when creating a daemonset
Summary: [platformmanagement_public_596] No pods are created when creating a daemonset
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OKD
Classification: Red Hat
Component: Pod
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Paul Weil
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-02 09:57 UTC by DeShuai Ma
Modified: 2016-05-12 17:12 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-12 17:12:01 UTC
Target Upstream Version:
Embargoed:



Description DeShuai Ma 2016-02-02 09:57:04 UTC
Description of problem:
When creating a daemonset, no pods are created.

Version-Release number of selected component (if applicable):
openshift v1.1.1-21-gbc1a879
kubernetes v1.1.0-origin-1107-g4c8e6f4
etcd 2.2.2

How reproducible:
Always

Steps to Reproduce:
1. Get the nodes
[root@ip-172-18-5-142 fedora]# oc get node
NAME              LABELS                                              STATUS    AGE
ip-172-18-4-254   daemon=yes,kubernetes.io/hostname=ip-172-18-4-254   Ready     2h
ip-172-18-5-142   daemon=yes,kubernetes.io/hostname=ip-172-18-5-142   Ready     2h

2. Create a daemonset
$ oc create -f https://raw.githubusercontent.com/mdshuai/v3-testfiles/master/daemon/daemonset_node_selector.yaml
[root@ip-172-18-5-142 fedora]# oc get daemonset
NAME                       CONTAINER(S)   IMAGE(S)          SELECTOR                        NODE-SELECTOR
prometheus-node-exporter   c              prom/prometheus   name=prometheus-node-exporter   daemon=yes

3. Check the pods
$ oc get pod

Actual results:
3. No pods are created.

Expected results:
3. Two pods should be created, one on every node labeled "daemon=yes".

Additional info:
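For context, a minimal sketch of a DaemonSet of the shape used in step 2. The exact contents of daemonset_node_selector.yaml are not reproduced here, so everything beyond the names visible in the oc get daemonset output above (container "c", image prom/prometheus, pod label name=prometheus-node-exporter, node selector daemon=yes) is an assumption:

{
  "kind": "DaemonSet",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "prometheus-node-exporter"
  },
  "spec": {
    "template": {
      "metadata": {
        "labels": {
          "name": "prometheus-node-exporter"
        }
      },
      "spec": {
        "nodeSelector": {
          "daemon": "yes"
        },
        "containers": [
          {
            "name": "c",
            "image": "prom/prometheus"
          }
        ]
      }
    }
  }
}

With nodeSelector set in the pod template, the daemon set controller should create one pod on each node carrying the daemon=yes label, i.e. two pods in this environment.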

Comment 1 Paul Weil 2016-02-02 14:49:30 UTC
Were there any errors in the logs or events? I will retest to try to reproduce.
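
For reference, a few commands that can surface such errors (a sketch; the daemonset name comes from the report above, and the journalctl unit names are assumptions that depend on how the master/node services were installed):

$ oc get events                               # look for FailedCreate or scheduling errors
$ oc describe ds prometheus-node-exporter     # desired/current node counts and recent events
$ oc get ds prometheus-node-exporter -o yaml  # full object as stored, including status
$ journalctl -u origin-master --since "1 hour ago"   # controller logs (unit name is an assumption)
$ journalctl -u origin-node --since "1 hour ago"     # node/kubelet logs (unit name is an assumption)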

Comment 2 Paul Weil 2016-02-02 16:05:39 UTC
I was able to launch a daemonset in the vagrant environment and get pods.  Please check for errors in the events or logs.  Here is what I used to test:

[vagrant@openshiftdev daemonset]$ oc create -f simple-ds.json 
daemonset "hello-daemonset" created

[vagrant@openshiftdev daemonset]$ oc describe ds hello-daemonset
Name:		hello-daemonset
Image(s):	openshift/hello-openshift
Selector:	
Node-Selector:	<none>
Labels:		name=hello-daemonset
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Misscheduled: 0
Pods Status:	1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Events:
  FirstSeen	LastSeen	Count	From		SubobjectPath	Type		Reason			Message
  ---------	--------	-----	----		-------------	--------	------			-------
  6s		6s		1	{daemon-set }			Normal		SuccessfulCreate	Created pod: hello-daemonset-8nnz1


[vagrant@openshiftdev daemonset]$ oc get pods
NAME                    READY     STATUS    RESTARTS   AGE
hello-daemonset-8nnz1   1/1       Running   0          13s


[vagrant@openshiftdev daemonset]$ docker ps
CONTAINER ID        IMAGE                         COMMAND              CREATED             STATUS              PORTS               NAMES
da0377928396        openshift/hello-openshift     "/hello-openshift"   14 seconds ago      Up 13 seconds                           k8s_registry.9a2c9770_hello-daemonset-8nnz1_default_812deb2c-c9c6-11e5-aa2d-080027c5bfa9_cffdfb34
0d8a8f5982cf        openshift/origin-pod:latest   "/pod"               16 seconds ago      Up 15 seconds                           k8s_POD.7ab2fe82_hello-daemonset-8nnz1_default_812deb2c-c9c6-11e5-aa2d-080027c5bfa9_f5b26443


[vagrant@openshiftdev daemonset]$ cat simple-ds.json 
{
      "kind": "DaemonSet",
      "apiVersion": "extensions/v1beta1",
      "metadata": {
        "name": "hello-daemonset"
      },
      "spec": {
        "selector": {
          "name": "hello-daemonset"
        },
        "template": {
          "metadata": {
            "labels": {
              "name": "hello-daemonset"
            }
          },
          "spec": {
            "serviceAccountName": "default",
            "containers": [
              {
                "name": "registry",
                "image": "openshift/hello-openshift",
                "ports": [
                  {
                    "containerPort": 80
                  }
                ]
              }
            ]
          }
        }
      }
    }
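
One detail worth noting about the spec above: depending on the API level, the daemonset selector is a LabelSelector rather than a plain label map, which may be why the describe output shows an empty Selector; if the selector is omitted it is defaulted from the template labels. A hedged variant of just the selector fragment:

  "selector": {
    "matchLabels": {
      "name": "hello-daemonset"
    }
  },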

Comment 3 DeShuai Ma 2016-02-03 10:06:13 UTC
Verified this bug on the latest env.
[root@ip-172-18-9-176 ~]# openshift version
openshift v1.1.1-385-g2fa2261-dirty
kubernetes v1.2.0-alpha.4-851-g4a65fa1
etcd 2.2.2

[root@ip-172-18-9-176 ~]# oc get node
NAME                           LABELS                                                           STATUS    AGE
ip-172-18-9-176.ec2.internal   daemon=yes,kubernetes.io/hostname=ip-172-18-9-176.ec2.internal   Ready     54m
[root@ip-172-18-9-176 ~]# oc get daemonset -n dma
NAME              CONTAINER(S)   IMAGE(S)                    SELECTOR   NODE-SELECTOR
hello-daemonset   registry       openshift/hello-openshift              <none>
[root@ip-172-18-9-176 ~]# oc get pod -n dma
NAME                    READY     STATUS    RESTARTS   AGE
hello-daemonset-8ayu1   1/1       Running   0          42m
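
For completeness, a sketch of how placement can be double-checked once pods appear (the -n dma namespace is taken from the commands above; the label query applies to the original node-selector scenario):

$ oc get pods -o wide -n dma     # -o wide shows the node each daemonset pod was scheduled to
$ oc get nodes -l daemon=yes     # nodes that the node-selector daemonset should each carry one pod on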

