Description of problem:
Processing a template with labels results in pods that don't themselves have the labels. I see from logs that this has not been the case in the past.

Version-Release number of selected component (if applicable):
openshift v3.1.1.0
kubernetes v1.1.0-origin-1107-g4c8e6f4
etcd 2.1.2

How reproducible:
always

Steps to Reproduce:
> oc process -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/build/ruby20rhel7-template-sti.json -l redhat\=rocks | oc create -f - --config=/home/avalon/workdir/fedora20-avalon/ose_akostadi.kubeconfig
> service "frontend" created
> route "route-edge" created
> imagestream "origin-ruby-sample" created
> imagestream "ruby-20-rhel7" created
> buildconfig "ruby-sample-build" created
> deploymentconfig "frontend" created
> service "database" created
> deploymentconfig "database" created

> oc get services -l redhat\=rocks
> NAME       CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR        AGE
> database   172.30.221.137   <none>        5434/TCP   name=database   4s
> frontend   172.30.213.106   <none>        5432/TCP   name=frontend   7s

> oc get pods
> NAME                        READY   STATUS      RESTARTS   AGE
> database-1-k6vl3            1/1     Running     0          5m
> frontend-1-0karx            1/1     Running     0          1m
> frontend-1-2zqb6            1/1     Running     0          49s
> ruby-sample-build-1-build   0/1     Completed   0          6m

Actual results:
> oc get pods -l redhat\=rocks
> NAME   READY   STATUS   RESTARTS   AGE
(nothing more)

Expected results:
> NAME               READY   STATUS    RESTARTS   AGE
> database-1-k6vl3   1/1     Running   0          5m
> frontend-1-0karx   1/1     Running   0          1m
> frontend-1-2zqb6   1/1     Running   0          49s
oc process sets any labels defined in the Template on the objects created from it, which in this case would mean the DeploymentConfig. If the pods created by that DeploymentConfig do not have the labels that are on the DeploymentConfig itself, then I'm guessing that's due to a change in deployment behavior in terms of which labels it applies to the pod template. Dan, has this behavior changed at some point?
Ben, labels on the deploymentConfig itself don't propagate to pods managed by the resulting replicationController. Labels that need to be propagated to the replicationController's pods should be set in deploymentConfig.spec.template.labels.
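As an illustration of the workaround described above, a hedged sketch using the "frontend" DeploymentConfig from the reproduction steps (the label key/value `redhat=rocks` comes from the report; the exact `oc patch` invocation is an assumption, not something run against this bug's cluster):

```shell
# Add the label to the pod template of the existing DeploymentConfig,
# so pods created by future deployments carry it themselves.
oc patch dc frontend -p '{"spec":{"template":{"metadata":{"labels":{"redhat":"rocks"}}}}}'

# Trigger a new deployment so the updated pod template takes effect.
oc deploy frontend --latest

# Pods from the new deployment should now match the label selector.
oc get pods -l redhat=rocks
```

Alternatively, the label can be placed under the pod template (spec.template.metadata.labels) of each DeploymentConfig in the template JSON before processing it.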
Should the `-l` option of `oc process` set `deploymentConfig.spec.template.labels` then?
Debatable. The template isn't creating the pod, so it should not necessarily be adding labels to pods. And from a management perspective, you can delete the RC (which does have the label) to delete the pods. I'll leave this open, but it's a low-priority RFE rather than a bug.
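To illustrate the management point above (the RC created by the deployment does carry the template-applied label, per this thread, so the label can still be used to manage workloads), a sketch using the label from the report:

```shell
# The replicationControllers created from the processed template carry
# the label, so they can be listed and deleted by selector; deleting an
# RC also deletes the pods it manages.
oc get rc -l redhat=rocks
oc delete rc -l redhat=rocks
```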
If something has changed, it's in how deploymentConfigs propagate their labels (i.e. they used to propagate their own labels to their pods, and now that is handled via the explicit spec.template.labels field instead), so handing off to Dan; but I suspect this gets closed as working as designed.
This is working as designed (i.e. labels on the DC do not propagate to the RC). If there's a case to be made for propagating labels from the DC to the RC, please start a discussion on GitHub as a feature request.
*** Bug 1311945 has been marked as a duplicate of this bug. ***