Bug 1293973 - label not passed on to pods from template
Status: CLOSED NOTABUG
Product: OpenShift Origin
Classification: Red Hat
Component: Deployments
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Assigned To: Dan Mace
QA Contact: zhou ying
Reported: 2015-12-23 15:10 EST by Aleksandar Kostadinov
Modified: 2016-02-25 13:33 EST (History)

Doc Type: Bug Fix
Last Closed: 2016-01-29 08:24:02 EST
Type: Bug

Attachments: None
Description Aleksandar Kostadinov 2015-12-23 15:10:29 EST
Description of problem:
Processing a template with labels results in pods that do not themselves carry those labels. I have seen reports that this was not the case in the past.

Version-Release number of selected component (if applicable):
openshift v3.1.1.0
kubernetes v1.1.0-origin-1107-g4c8e6f4
etcd 2.1.2

How reproducible:
always

Steps to Reproduce:
> oc process -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/build/ruby20rhel7-template-sti.json -l redhat\=rocks | oc create -f -

> oc create -f - --config=/home/avalon/workdir/fedora20-avalon/ose_akostadi@redhat.com.kubeconfig
>      service "frontend" created
>      route "route-edge" created
>      imagestream "origin-ruby-sample" created
>      imagestream "ruby-20-rhel7" created
>      buildconfig "ruby-sample-build" created
>      deploymentconfig "frontend" created
>      service "database" created
>      deploymentconfig "database" created

> oc get services -l redhat\=rocks
>      NAME       CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR        AGE
>      database   172.30.221.137   <none>        5434/TCP   name=database   4s
>      frontend   172.30.213.106   <none>        5432/TCP   name=frontend   7s

> oc get pods
> NAME                        READY     STATUS      RESTARTS   AGE
> database-1-k6vl3            1/1       Running     0          5m
> frontend-1-0karx            1/1       Running     0          1m
> frontend-1-2zqb6            1/1       Running     0          49s
> ruby-sample-build-1-build   0/1       Completed   0          6m

> oc get pods -l redhat\=rocks
> NAME                        READY     STATUS      RESTARTS   AGE
<nothing more>

while expected would be:
> NAME                        READY     STATUS      RESTARTS   AGE
> database-1-k6vl3            1/1       Running     0          5m
> frontend-1-0karx            1/1       Running     0          1m
> frontend-1-2zqb6            1/1       Running     0          49s
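
A quick way to inspect the label directly (a sketch; the -L flag adds a column for the given label key):

> oc get pods -L redhat

The REDHAT column should be empty for the running pods, consistent with the label selector matching nothing.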
Comment 2 Ben Parees 2016-01-01 16:15:10 EST
oc process will set any labels defined in the Template on the objects created from the template, which in this case would mean the DeploymentConfig.

If the pods created by that deploymentconfig do not have the labels that are on the DeploymentConfig itself, then I'm guessing that's due to a change in deployment behavior in terms of which labels it applies to the pod template.
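
One way to confirm where the label ended up (a sketch, reusing the object names from the reproduction above):

> oc get dc frontend -o yaml

In the output, metadata.labels should carry redhat=rocks (set by oc process -l), while spec.template.metadata.labels should carry only name=frontend, which would explain why the pod query matches nothing.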

Dan, has this behavior changed at some point?
Comment 3 Dan Mace 2016-01-04 10:05:17 EST
Ben,

Labels from the deploymentConfig itself don't propagate to pods managed by the resulting replicationController. Labels which need to be propagated to the replicationController pods should be set in deploymentConfig.spec.template.labels.
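
For illustration, a minimal fragment (a sketch only; in the v1 API the pod template's labels live at spec.template.metadata.labels, which is what the shorthand above refers to):

  apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: frontend
    labels:
      redhat: rocks           # set by `oc process -l`; stays on the DC
  spec:
    replicas: 1
    selector:
      name: frontend
    template:
      metadata:
        labels:
          name: frontend
          redhat: rocks       # only labels here end up on the pods
      spec:
        containers:
        - name: ruby-helloworld          # hypothetical container,
          image: origin-ruby-sample      # named after the sample template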
Comment 4 Aleksandar Kostadinov 2016-01-04 10:10:44 EST
Should the `-l` process command option set `deploymentConfig.spec.template.labels` then?
Comment 5 Ben Parees 2016-01-04 13:30:54 EST
Debatable. The template isn't creating the pods, so it should not necessarily be adding labels to them.

And from a management perspective, you can delete the RC (which does have the label) to delete the pods.
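
For example (a sketch, assuming the RC carries the label as described):

> oc delete rc -l redhat\=rocks

Deleting the replication controllers also tears down the pods they manage.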

I'll leave this open, but it's a low-priority RFE rather than a bug.
Comment 7 Ben Parees 2016-01-28 21:19:38 EST
If something has changed, it's in how deploymentconfigs propagate their labels (i.e. they used to propagate their own labels to their pods, and now that is handled via the explicit spec.template.labels field instead). Handing off to Dan, but I suspect this gets closed as working as designed.
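
In the meantime, the label can be added to an existing deploymentconfig's pod template directly (a sketch using a strategic merge patch; with a ConfigChange trigger, the pods from the next deployment pick it up):

> oc patch dc frontend -p '{"spec":{"template":{"metadata":{"labels":{"redhat":"rocks"}}}}}'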
Comment 8 Dan Mace 2016-01-29 08:24:02 EST
This is working as designed (i.e. labels on the DC do not propagate to the RC). If there's a case to be made for propagating labels from the DC to the RC, please start a discussion on GitHub as a feature request.
Comment 9 Ben Parees 2016-02-25 13:33:51 EST
*** Bug 1311945 has been marked as a duplicate of this bug. ***
