Description of problem:
oc logs dc/<DC_NAME> should apparently only collect the logs from the deployer pod that was used to deploy that application, but currently it collects the logs from the active pod associated with that DC.

Version-Release number of selected component (if applicable):
> oc version
oc v3.3.0.34
kubernetes v1.3.0+52492b4
openshift v3.3.1.3

Steps to Reproduce:
1. deploy an application that runs two containers
2. oc logs dc/<DC_NAME> -c <CONTAINER>

Actual results:
$ oc logs dc/myconfig -c mycontainer
Error from server: a container name must be specified for pod myconfig-1-abcde, choose one of: [mycontainer mysecondcontainer]

Expected results:
Provides logs from the container in the pod
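A possible workaround until this is fixed (a sketch; myconfig/mycontainer are the placeholder names from the reproducer above, and the deploymentconfig=<name> label is assumed to be the standard label the deployer puts on pods it creates):

  # Ask for the deployer logs of a specific revision instead of the live pod:
  oc logs --version=1 dc/myconfig

  # Or target the running pod directly and name the container explicitly:
  POD=$(oc get pods -l deploymentconfig=myconfig -o name | head -n 1)
  oc logs "$POD" -c mycontainer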
History:
Existing GH issue: https://github.com/openshift/origin/issues/10186
Initial attempt at fix in 1.3: https://github.com/openshift/origin/pull/10377
Rollback due to backwards compat: https://github.com/openshift/origin/issues/10598
Rollback PR for 1.3: https://github.com/openshift/origin/pull/10609

Current status is that oc logs dc/with-2-containers will fail trying to retrieve logs for the running, deployed pod with multiple containers and give "Error from server (BadRequest): a container name must be specified for pod hello-1-35kpz, choose one of: [hello hello2]" because it does not pass a container name for a multi-container pod.

Reproducer DC:

{
    "apiVersion": "v1",
    "kind": "DeploymentConfig",
    "metadata": {
        "creationTimestamp": "2017-02-10T19:46:50Z",
        "generation": 2,
        "labels": {
            "run": "hello"
        },
        "name": "hello",
        "namespace": "default",
        "resourceVersion": "1498",
        "selfLink": "/oapi/v1/namespaces/default/deploymentconfigs/hello",
        "uid": "aa55118f-efc9-11e6-826b-54ee752009cb"
    },
    "spec": {
        "replicas": 1,
        "selector": {
            "run": "hello"
        },
        "strategy": {
            "activeDeadlineSeconds": 21600,
            "resources": {},
            "rollingParams": {
                "intervalSeconds": 1,
                "maxSurge": "25%",
                "maxUnavailable": "25%",
                "timeoutSeconds": 600,
                "updatePeriodSeconds": 1
            },
            "type": "Rolling"
        },
        "template": {
            "metadata": {
                "creationTimestamp": null,
                "labels": {
                    "run": "hello"
                }
            },
            "spec": {
                "containers": [
                    {
                        "args": [
                            "/bin/sh",
                            "-c",
                            "while true; do echo 1; sleep 1; done"
                        ],
                        "image": "busybox:latest",
                        "imagePullPolicy": "Always",
                        "name": "hello",
                        "resources": {},
                        "terminationMessagePath": "/dev/termination-log"
                    },
                    {
                        "args": [
                            "/bin/sh",
                            "-c",
                            "while true; do echo 2; sleep 1; done"
                        ],
                        "image": "busybox:latest",
                        "imagePullPolicy": "Always",
                        "name": "hello2",
                        "resources": {},
                        "terminationMessagePath": "/dev/termination-log"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "restartPolicy": "Always",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            }
        },
        "test": false,
        "triggers": [
            {
                "type": "ConfigChange"
            }
        ]
    }
}
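To reproduce with the DC above (a sketch; assumes the JSON is saved locally as hello-dc.json, a hypothetical filename):

  oc create -f hello-dc.json
  # wait for the hello-1 deployment to roll out, then:
  oc logs dc/hello
  # => Error from server (BadRequest): a container name must be specified
  #    for pod hello-1-35kpz, choose one of: [hello hello2]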
Upstream seems to be moving toward a solution where oc logs will always pick the "first" container instead of erroring. I don't think we will get that (for upstream deployments) in 1.5, but I believe we will pick this fix up for 1.6. I would rather wait so that we have consistent behavior for retrieving logs from deployments, cronjobs, replicasets, etc.
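For illustration, the upstream "first container" behavior would make oc logs dc/hello roughly equivalent to the following (a sketch of the intended semantics, not the actual client code; pod selection via the run=hello label from the reproducer is an assumption):

  POD=$(oc get pods -l run=hello -o name | head -n 1)
  FIRST=$(oc get "$POD" -o jsonpath='{.spec.containers[0].name}')
  oc logs "$POD" -c "$FIRST"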
Testing this on various versions, I can see that the issue still exists in 3.6 but has been fixed in 3.9. Currently spinning up a 3.7 cluster to test there as well.
It appears this is an issue in 3.7 as well, meaning the fix likely made it into 1.8 or 1.9. As this bug was opened against OCP 3.3 (i.e. kube 1.3), is there any hope of getting the fix backported to any earlier version?
I'm pretty sure this is already working on master and we won't backport to 3.7.