Bug 1412428 - ScheduledJob pod doesn't take over the label
Summary: ScheduledJob pod doesn't take over the label
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: openshift-controller-manager
Version: 3.3.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Maciej Szulik
QA Contact: Chuan Yu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-12 02:05 UTC by Kenjiro Nakayama
Modified: 2017-03-08 18:26 UTC (History)
4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
This is not a bug.
Clone Of:
Environment:
Last Closed: 2017-02-16 21:07:42 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2867101 0 None None None 2017-01-17 05:24:54 UTC

Description Kenjiro Nakayama 2017-01-12 02:05:24 UTC
Description of problem:

- ScheduledJob pod doesn't take over the label

Version-Release number of selected component (if applicable):

- OCP 3.3

How reproducible:
Steps to Reproduce:
1. Create ScheduledJob as [1]. (oc create -f hello.yaml)
2. Check label in the pods. (oc get pod -o yaml)

Actual results:
- The pods don't have the label.

Expected results:
- The pods run by the ScheduledJob inherit the label (in this case job=test).

Additional info:
- Without the label, it is not possible to delete the "Completed" pods by selector, and oc get pod shows tons of Completed pods[2].

[1]---
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
  labels:
    job: test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

[2]---
[joe@knakayam-ose33-master1 ~]$ oc get pod
NAME                     READY     STATUS      RESTARTS   AGE
hello-1160671760-tahw1   0/1       Completed   0          6h
hello-1160737296-ywvge   0/1       Completed   0          3h
hello-1160802832-3ciew   0/1       Completed   0          23h
hello-1160868368-j7em6   0/1       Completed   0          23h
hello-1160933904-ryoi3   0/1       Completed   0          23h
hello-1160999440-lo5b2   0/1       Completed   0          23h
hello-1161064976-ncelk   0/1       Completed   0          23h
hello-1161130512-ogl33   0/1       Completed   0          23h
hello-1376088595-kvvy4   0/1       Completed   0          23h
....
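For reference, once pods created by the job actually carry a label such as job=test, the accumulated Completed pods can be removed with a label selector instead of deleting them one by one. A minimal sketch (the project name "myproject" is illustrative, not from this report):

```shell
# Delete all pods in the project that carry the label job=test.
# Assumes the label was propagated to the pods via the pod template;
# without the label there is no selector to match on.
oc delete pod -l job=test -n myproject
```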

Comment 1 Maciej Szulik 2017-01-16 14:08:57 UTC
This is not a bug. First, to have labels on the pods created by the CronJob, you should put them on the pod template spec. The same applies to jobs: if you want labels on the jobs generated by a CronJob, set them on the jobTemplate.

Secondly, pod removal will be addressed by implementing cascading deletion of the jobs and pods generated by a CronJob.
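To illustrate where the labels from the comment above would go: labels intended for the generated Jobs belong on jobTemplate.metadata, and labels intended for the Pods belong on the pod template's metadata. A sketch against the batch/v2alpha1 API used in this report (the duplicated label is deliberate, to cover both levels):

```yaml
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    metadata:
      labels:
        job: test        # applied to each generated Job
    spec:
      template:
        metadata:
          labels:
            job: test    # applied to each generated Pod
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"]
          restartPolicy: OnFailure
```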

Comment 2 Chuan Yu 2017-01-17 04:41:50 UTC
Verified with latest OSE3.3 puddle:
# openshift version
openshift v3.3.1.9
kubernetes v1.3.0+52492b4
etcd 2.3.0+git

After putting the label on the pod template spec, the pods created by the ScheduledJob carry the label set in the resource file[1].

[1]----
apiVersion: batch/v2alpha1
kind: ScheduledJob
metadata:
  name: hello
  labels:
    job: test
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            job: test
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Comment 3 Troy Dawson 2017-02-16 21:07:42 UTC
This bug was fixed with the latest OCP 3.3.1 that is already released.

