Bug 1461501 - [RFE] add support for dumping logs from elasticsearch via a tool like elasticdump for sending to Red Hat Support
Status: NEW
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Version: 3.7.0
Assigned To: Jeff Cantrill
QA Contact: Xiaoli Tian
Depends On:
Reported: 2017-06-14 11:33 EDT by Peter Portante
Modified: 2017-08-22 16:21 EDT
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Peter Portante 2017-06-14 11:33:44 EDT
We need to add support for dumping logs from elasticsearch via a tool like elasticdump for sending to Red Hat Support.

The tool will likely need to be built into the logging-elasticsearch image, so that one can:

1. oc rsh logging-es-#####
2. run the elasticdump command inside the pod
3. oc rsync the <elastic dump file> out of the pod
4. upload the file to the support case
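The four steps above might look like this in practice (the pod name, index name, and file paths are illustrative, not taken from a real cluster, and the sketch assumes elasticdump is available in the image, which is what this RFE asks for):

```shell
# 1. Open a shell in the Elasticsearch pod (pod name is an example)
oc rsh logging-es-data-master-abc123

# 2. Inside the pod, dump one index to a compressed JSON file
elasticdump --input=http://localhost:9200/logstash-2017.06.14 \
            --output=$ | gzip > /tmp/logstash-2017.06.14.json.gz
exit

# 3. Copy the dump file out of the pod to the local workstation
oc rsync logging-es-data-master-abc123:/tmp/ ./dump/

# 4. Attach ./dump/logstash-2017.06.14.json.gz to the support case
```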
Comment 2 Steven Walter 2017-06-14 16:07:43 EDT
We can create an elasticsearch dump pod with:

# oc run es-dump --image=taskrabbit/elasticsearch-dump -ti --command -- /bin/sh

From inside we can:

# mkdir /data
# elasticdump --input=http://<LOGGING-ES-SERVICE-IP>:9200/<SOMEINDEX> --output=$ | gzip > /data/<SOMEINDEX>.gz

This does *not* work with Elasticsearch 2.x because it requires TLS client certificates, such as the --cert and --key in this command: curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/_cat/indices?v. We seem to be able to move past this in Elasticsearch 5, so getting up to ES 5 is a blocker for this issue, at least when using elasticsearch-dump.

Potential extra step: mount a PV at /data/ so the dump persists; otherwise the data must be copied out of the pod.
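One way to attach such a volume could be `oc set volume` against the dump pod's deployment config (the claim name below is hypothetical and the PVC would need to exist already):

```shell
# Attach an existing PVC (name is hypothetical) to the es-dump
# deployment config, mounted at /data so dumps survive pod restarts
oc set volume dc/es-dump --add \
  --name=dump-data \
  --type=persistentVolumeClaim \
  --claim-name=es-dump-data \
  --mount-path=/data
```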
Comment 3 Steven Walter 2017-06-14 16:49:25 EDT
Dump is working, but performance is an issue when building it into a separate container (a point in favor of building it into the logging-es pod itself, to avoid the extra SDN hop).
Comment 4 Steven Walter 2017-06-15 17:18:42 EDT
# oc run es-dump --image=taskrabbit/elasticsearch-dump -ti --command -- /bin/sh

# oc rsh es-dump-2-lwvb4

Dump the logs for a specific service/index to a .gz archive with elasticdump:

# elasticdump --input= --output=$ | gzip > /tmp/logstash-2017.06.14.json.gz

Pull the .gz archive from the elastic dump pod to the local workstation:
# oc rsync es-dump-1-2p5x8:/tmp/ .

Upload the archive file to the Red Hat Support case.
Support then downloads the .gz and unpacks the JSON file within.

View the logs with Kibana on the support engineer's laptop:
# cd /home/example/
# gunzip logstash-2017.06.14.json.gz
# elasticdump --input=/home/example/logstash-2017.06.14.json --output=http://localhost:9200/miqhackfest-logindex
This works. We can consider building elasticdump into the existing image; if not, we need to see whether we can package our own version of it or mimic its functionality.
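The gzip/gunzip round trip in the hand-off above can be sanity-checked locally (the file name and JSON content are illustrative):

```shell
# Simulate a dump file as it would leave the cluster (content is made up)
printf '{"message":"example log line"}\n' > logstash-2017.06.14.json

# Compress it the way the dump step does, keeping the original for comparison
gzip -k logstash-2017.06.14.json

# Unpack it the way support would, and confirm nothing was lost
gunzip -c logstash-2017.06.14.json.gz > restored.json
cmp logstash-2017.06.14.json restored.json && echo "round trip OK"
```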

Ideally this could even be a template with a PVC, to avoid the customer needing to rsync.

Everything after this point can be handled by support.
