Bug 1889595 - Some of the application logs cannot be shown in the kibana console
Summary: Some of the application logs cannot be shown in the kibana console
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.7.z
Assignee: Vitalii Parfonov
QA Contact: Giriyamma
URL:
Whiteboard: logging-core
Depends On:
Blocks:
 
Reported: 2020-10-20 06:54 UTC by Meng Bo
Modified: 2022-06-15 10:06 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-06-15 10:06:05 UTC
Target Upstream Version:
Embargoed:


Attachments
kibana-logs (166.91 KB, image/png)
2020-10-20 06:54 UTC, Meng Bo


Links
System ID: Github openshift cluster-logging-operator pull 883
Status: closed
Summary: Bug 1889595: Fix regex for correct formatting if log message contains 'stderr|stdout'
Last Updated: 2021-02-19 14:08:44 UTC

Description Meng Bo 2020-10-20 06:54:54 UTC
Created attachment 1722777 [details]
kibana-logs

Description of problem:
On our clusters, we are seeing that the logs fetched via `oc logs <pod>` are different from what we can see in the Kibana console.

Part of the application log is not shown in the console.

The attachment is a screenshot of the Kibana console.

The following is the application log:
$ oc logs assisted-service-5788d597db-dlbt8 | grep 'T08:34:53'
time="2020-10-19T08:34:53Z" level=error msg="Exit code is 255 " func="github.com/openshift/assisted-service/internal/bminventory.(*bareMetalInventory).PostStepReply" file="/go/src/github.com/openshift/origin/internal/bminventory/inventory.go:1537" cluster_id=f8574bc1-7a9b-466b-a87f-d3169e81d641 error="Received step reply dhcp-lease-allocate-409973f7 from cluster f8574bc1-7a9b-466b-a87f-d3169e81d641 host ad6d1703-bf35-6516-531a-9e610de1e3b7  exit-code 255 stdout  stderr dhclient existed with non-zero exit code 124: Internet Systems Consortium DHCP Client 4.3.6\nCopyright 2004-2017 Internet Systems Consortium.\nAll rights reserved.\nFor info, please visit <https://www.isc.org/software/dhcp/\n\nListening on LPF/api/00:1a:4a:51:ec:a0\nSending on   LPF/api/00:1a:4a:51:ec:a0\nSending on   Socket/fallback\nNot enough data in /etc/machine-id\nCreated duid \"\\000\\001\\000\\001' \\014(\\000\\032JQ\\354\\240\".\nDHCPDISCOVER on api to 255.255.255.255 port 67 interval 5 (xid=0x5f0a2850)\n>" go-id=406466655 host_id=ad6d1703-bf35-6516-531a-9e610de1e3b7 pkg=Inventory request_id=c7218a89-5884-4077-b236-2fb0901084c4


We can see that the log content before "exit-code 255 stdout  stderr" is not visible in the Kibana console.
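
For illustration only, a minimal sketch of how this kind of truncation can happen (this is not the actual collector code; the CRI-O line layout and the pattern below are assumptions based on the symptom):

import re

# A CRI-O-style container log line (assumed layout: "<time> <stream> <tag> <message>")
# whose application message itself contains the words "stdout" and "stderr".
line = ('2020-10-19T08:34:53.000000000Z stdout F '
        'time="2020-10-19T08:34:53Z" level=error msg="Exit code is 255 " '
        'error="... exit-code 255 stdout  stderr dhclient exited with non-zero exit code 124: ..."')

# Buggy sketch: a greedy scan for the literal tokens lets the *last*
# "stdout"/"stderr" inside the message win, so everything before it is dropped.
buggy = re.match(r'.*(stdout|stderr)\s+(?P<message>.*)$', line)
print(buggy.group('message'))
# -> 'dhclient exited with non-zero exit code 124: ..."' (truncated, similar to the Kibana entry)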


Version-Release number of selected component (if applicable):
OCP 4.5.11
cluster logging: 4.4.0-202009161309.p0
elasticsearch: 4.4.0-202009161309.p0

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
Only part of the application logs is shown in the Kibana console.

Expected results:
Kibana should show the full logs, matching what we can see in the application.

Additional info:

Comment 1 Jeff Cantrill 2020-10-20 13:51:03 UTC
Do you find the beginning part of the message in another record? There may be a cap in the code that concatenates the message as emitted by the runtime.

Comment 2 Meng Bo 2020-10-27 03:17:37 UTC
Hi Jeff,

Sorry for the late response.

I cannot find the missing part from anywhere.

For the latest example, I can see only one entry in the Kibana console with the same timestamp as the pod log.


> From the pod log:
$ oc logs assisted-service-5788d597db-dlbt8 | grep "2020-10-27T02:38:31Z"
time="2020-10-27T02:38:31Z" level=error msg="Exit code is <125> " func="github.com/openshift/assisted-service/internal/bminventory.(*bareMetalInventory).PostStepReply" file="/go/src/github.com/openshift/origin/internal/bminventory/inventory.go:1537" cluster_id=0be3da79-1549-42e8-9801-a91658556f10 error="Received step reply <free-network-addresses-b9f3b5d1> from cluster <0be3da79-1549-42e8-9801-a91658556f10> host <a13c2742-7137-f709-dda3-6f3748c7e60b>  exit-code <125> stdout <> stderr <Error: error creating container storage: the container name \"free_addresses_scanner\" is already in use by \"ebf654859ed2a3b5de3803440a82e17334183d2708585e413fd76222f498eb07\". You have to remove that container to be able to reuse that name.: that name is already in use\n>" go-id=679234947 host_id=a13c2742-7137-f709-dda3-6f3748c7e60b pkg=Inventory request_id=b82147fe-aece-4d20-bc8d-796e6d3b1633


> The entry from the Kibana console:
{
  "_index": "project.assisted-installer-production.1eca5db0-0cc2-4e15-bbdf-9bdb587cba76.2020.10.27",
  "_type": "com.redhat.viaq.common",
  "_id": "YmVhZjAxMWEtOWQ4OC00NzlmLTk5M2QtZjNhMzVhYzg0ZjQ0",
  "_version": 1,
  "_score": null,
  "_source": {
    "docker": {
      "container_id": "ed44ebc083facee87bbb9d9b4b4e6bca481fd8da3162ca2cf367d303325eb391"
    },
    "kubernetes": {
      "container_name": "assisted-service",
      "namespace_name": "assisted-installer-production",
      "pod_name": "assisted-service-5788d597db-dlbt8",
      "container_image": "quay.io/app-sre/assisted-service:b85e2e9",
      "container_image_id": "quay.io/app-sre/assisted-service@sha256:90a1db3bc722880f03008f2756c1d40cea0b2565d7c418004466392356746e2b",
      "pod_id": "130e209f-d5ea-4fb1-ae99-fdbab62e5236",
      "host": "ip-10-119-137-29.ec2.internal",
      "labels": {
        "app": "assisted-service",
        "pod-template-hash": "5788d597db"
      },
      "master_url": "https://kubernetes.default.svc",
      "namespace_id": "1eca5db0-0cc2-4e15-bbdf-9bdb587cba76",
      "namespace_labels": {
        "name": "assisted-installer-production",
        "openshift_io/workload-monitoring": "true"
      }
    },
    "message": "<Error: error creating container storage: the container name \\\"free_addresses_scanner\\\" is already in use by \\\"ebf654859ed2a3b5de3803440a82e17334183d2708585e413fd76222f498eb07\\\". You have to remove that container to be able to reuse that name.: that name is already in use\\n>\" go-id=679234947 host_id=a13c2742-7137-f709-dda3-6f3748c7e60b pkg=Inventory request_id=b82147fe-aece-4d20-bc8d-796e6d3b1633",
    "level": "unknown",
    "hostname": "ip-10-119-137-29.ec2.internal",
    "pipeline_metadata": {
      "collector": {
        "ipaddr4": "10.119.137.29",
        "inputname": "fluent-plugin-systemd",
        "name": "fluentd",
        "received_at": "2020-10-27T02:38:31.676175+00:00",
        "version": "1.7.4 1.6.0"
      }
    },
    "@timestamp": "2020-10-27T02:38:31.159074+00:00",
    "viaq_msg_id": "YmVhZjAxMWEtOWQ4OC00NzlmLTk5M2QtZjNhMzVhYzg0ZjQ0"
  },
  "fields": {
    "@timestamp": [
      1603766311159
    ],
    "pipeline_metadata.collector.received_at": [
      1603766311676
    ]
  },
  "highlight": {
    "kubernetes.pod_name": [
      "@kibana-highlighted-field@assisted-service-5788d597db-dlbt8@/kibana-highlighted-field@"
    ]
  },
  "sort": [
    1603766311159
  ]
}

The length of the pod log line is 889 characters, and the length of the Kibana message is 400 characters.
If it is related to a cap, do we have a way to modify the maximum length of a single message in Kibana?

And if the message is split, I would expect that we should at least be able to see the full logs across multiple Kibana entries.
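
A quick check (hypothetical, not part of any tooling) that this looks like a keyword-based split rather than a length cap: the Kibana "message" above is, modulo JSON escaping, the pod-log line's suffix after the literal "stderr " token.

# Hypothetical check: the Kibana "message" equals the pod-log line's suffix
# after the literal "stderr " token (modulo JSON escaping).
pod_line = 'time="2020-10-27T02:38:31Z" level=error ... exit-code <125> stdout <> stderr <Error: error creating container storage: ...>'
suffix = pod_line.split('stderr ', 1)[1]
print(suffix)   # -> '<Error: error creating container storage: ...>'
# With the full 889-character line, this suffix lines up with the 400-character Kibana message.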

Comment 3 Erez Alster 2020-11-01 07:44:54 UTC
Hi

Any updates on this?
Can we have a meeting to discuss this issue? We are not able to investigate issues on our production environment.

Comment 4 Meng Bo 2020-12-09 06:05:11 UTC
Do we have any update on this? The customer is asking for a fix.

Comment 5 Panagiotis Georgiadis 2021-02-15 13:05:22 UTC
I saw that the PR got merged: https://github.com/openshift/cluster-logging-operator/pull/883
Is this bug now considered fixed, so we can get back to the customer?
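
For context, the shape of such a fix (a minimal sketch assuming a "<time> <stream> <tag> <message>" CRI line layout; the actual change is in PR 883 and may differ) is to anchor the stdout/stderr match to the stream field instead of scanning the whole line:

import re

# Anchored sketch: parse the CRI fields positionally so that "stdout"/"stderr"
# occurring inside the application's own text is left untouched.
CRI_LINE = re.compile(r'^(?P<time>\S+) (?P<stream>stdout|stderr) (?P<tag>[FP]) (?P<message>.*)$')

line = ('2020-10-19T08:34:53.000000000Z stdout F '
        'time="2020-10-19T08:34:53Z" level=error msg="Exit code is 255 " '
        'error="... exit-code 255 stdout  stderr dhclient exited ..."')

m = CRI_LINE.match(line)
print(m.group('message'))   # -> the full application line, starting at time="2020-10-19T08:34:53Z"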

Comment 6 Giriyamma 2021-02-24 05:38:43 UTC
Verified this fix with the PR; the issue is fixed. Kibana shows the full logs, matching what we can see in the application.

Comment 7 Anping Li 2021-02-26 00:11:32 UTC
Moving to VERIFIED per comment 6.

