Bug 1421264 - oc logs dc/<DC_NAME> -c <CONTAINER> errors stating you need to specify a container name for the pod
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: openshift-controller-manager
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Michal Fojtik
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-10 19:07 UTC by Eric Jones
Modified: 2019-08-23 12:51 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-23 12:51:37 UTC
Target Upstream Version:
Embargoed:



Description Eric Jones 2017-02-10 19:07:25 UTC
Description of problem:
oc logs dc/<DC_NAME> should apparently collect only the logs from the deployer pod that was used to deploy the application, but it currently collects the logs from the active pod associated with that DC.

Version-Release number of selected component (if applicable):
> oc version
oc v3.3.0.34
kubernetes v1.3.0+52492b4
openshift v3.3.1.3

Steps to Reproduce:
1. deploy an application that runs two containers
2. oc logs dc/<DC_NAME> -c <CONTAINER>

Actual results:
$ oc logs dc/myconfig -c mycontainer
Error from server: a container name must be specified for pod myconfig-1-abcde, choose one of: [mycontainer mysecondcontainer]

Expected results:
Provides logs from the container in the pod
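
A workaround, until dc/ log resolution honors -c, is to address the pod directly (a sketch; the deploymentconfig pod label and the pod name are assumptions based on the example above):

   $ oc get pods -l deploymentconfig=myconfig
   $ oc logs myconfig-1-abcde -c mycontainer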

Comment 1 Paul Weil 2017-02-10 20:06:52 UTC
History:

Existing GH issue: 
   https://github.com/openshift/origin/issues/10186
Initial attempt at fix in 1.3: 
   https://github.com/openshift/origin/pull/10377
Rollback due to backwards compat:
   https://github.com/openshift/origin/issues/10598
Rollback PR for 1.3:
   https://github.com/openshift/origin/pull/10609

Current status: oc logs dc/with-2-containers fails when retrieving logs for the running, deployed pod with multiple containers, returning "Error from server (BadRequest): a container name must be specified for pod hello-1-35kpz, choose one of: [hello hello2]", because the container name is not passed through for a multi-container pod.
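
To make the failure mode concrete (a sketch; dc/single is a hypothetical one-container DC added for contrast):

   $ oc logs dc/hello              # fails with the BadRequest error above
   $ oc logs dc/hello -c hello2    # also fails; -c is not propagated through the dc/ resolution
   $ oc logs dc/single             # a one-container DC works, since no container name is needed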


Reproducer DC:

{
    "apiVersion": "v1",
    "kind": "DeploymentConfig",
    "metadata": {
        "creationTimestamp": "2017-02-10T19:46:50Z",
        "generation": 2,
        "labels": {
            "run": "hello"
        },
        "name": "hello",
        "namespace": "default",
        "resourceVersion": "1498",
        "selfLink": "/oapi/v1/namespaces/default/deploymentconfigs/hello",
        "uid": "aa55118f-efc9-11e6-826b-54ee752009cb"
    },
    "spec": {
        "replicas": 1,
        "selector": {
            "run": "hello"
        },
        "strategy": {
            "activeDeadlineSeconds": 21600,
            "resources": {},
            "rollingParams": {
                "intervalSeconds": 1,
                "maxSurge": "25%",
                "maxUnavailable": "25%",
                "timeoutSeconds": 600,
                "updatePeriodSeconds": 1
            },
            "type": "Rolling"
        },
        "template": {
            "metadata": {
                "creationTimestamp": null,
                "labels": {
                    "run": "hello"
                }
            },
            "spec": {
                "containers": [
                    {
                        "args": [
                            "/bin/sh",
                            "-c",
                            "while true; do echo 1; sleep 1; done"
                        ],
                        "image": "busybox:latest",
                        "imagePullPolicy": "Always",
                        "name": "hello",
                        "resources": {},
                        "terminationMessagePath": "/dev/termination-log"
                    },
                    {
                        "args": [
                            "/bin/sh",
                            "-c",
                            "while true; do echo 2; sleep 1; done"
                        ],
                        "image": "busybox:latest",
                        "imagePullPolicy": "Always",
                        "name": "hello2",
                        "resources": {},
                        "terminationMessagePath": "/dev/termination-log"
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "restartPolicy": "Always",
                "securityContext": {},
                "terminationGracePeriodSeconds": 30
            }
        },
        "test": false,
        "triggers": [
            {
                "type": "ConfigChange"
            }
        ]
    }
}
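
To exercise the reproducer (a sketch; assumes the JSON above is saved locally as hello-dc.json):

   $ oc create -f hello-dc.json
   $ oc get pods -l deploymentconfig=hello   # wait until hello-1-<suffix> is Running
   $ oc logs dc/hello -c hello               # reproduces the BadRequest error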

Comment 2 Michal Fojtik 2017-02-13 13:58:52 UTC
Upstream seems to be moving toward a solution where oc logs will always pick the "first" container instead of erroring.

I don't think we will get that (for upstream deployments) for 1.5, but I believe we will pick this fix up for 1.6.

I would rather wait to have consistent behavior for retrieving logs for deployments, cronjobs, replicasets, etc.
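
For reference, the upstream direction described above would make the following work (a sketch of the intended semantics, not verified output):

   $ oc logs dc/hello             # would default to the first container, "hello", instead of erroring
   $ oc logs dc/hello -c hello2   # -c would select a specific container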

Comment 5 Eric Jones 2018-05-15 14:46:01 UTC
Testing this on various versions, I can see that the issue still exists in 3.6 but has been fixed in 3.9.

Currently spinning up a 3.7 cluster to test there as well.
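
The per-version check is essentially (a sketch; dc and container names taken from the reproducer above):

   $ oc version | head -1
   $ oc logs dc/hello -c hello    # BadRequest error => still broken; container logs => fixed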

Comment 6 Eric Jones 2018-05-15 19:08:51 UTC
It appears this is an issue in 3.7 as well.

That means the fix likely landed in Kubernetes 1.8 or 1.9.

As this bug was opened against OCP 3.3 (i.e. Kubernetes 1.3), is there any hope of getting the fix backported to any earlier version?

Comment 7 Michal Fojtik 2019-08-23 12:51:37 UTC
I'm pretty sure this is already working on master and we won't backport to 3.7.

