We need to add support for dumping logs from Elasticsearch via a tool like elasticdump so they can be sent to Red Hat Support. The tool will likely need to be plumbed into the logging-elasticsearch image, so that one can do:

1. oc rsh logging-es-#####
2. Run the elasticdump command
3. oc rsync the <elastic dump file> to the local workstation
4. Upload the file to the support case
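A minimal sketch of plumbing the tool into the image could be an extra layer like the one below. The base image name/tag is an assumption (not the actual product image), and it assumes Node.js/npm are available in the image, since elasticdump is distributed as an npm package:

```dockerfile
# Hypothetical layer; base image name/tag is an assumption, not the
# actual shipped logging-elasticsearch image.
FROM openshift/origin-logging-elasticsearch:latest
USER 0
# elasticdump is published on npm; this assumes npm exists in the image
RUN npm install -g elasticdump
USER 1000
```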
We can create an elasticsearch dump pod with:

# oc run es-dump --image=taskrabbit/elasticsearch-dump -ti --command -- /bin/sh

From inside we can:

# mkdir /data
# elasticdump --input=http://<LOGGING-ES-SERVICE-IP>:9200/<SOMEINDEX> --output=$ | gzip > /data/<SOMEINDEX>.gz

This does *not* work with Elasticsearch 2.x, because the cluster requires TLS client certificates, such as the --cert and --key required in this command:

# curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/_cat/indices?v

We seem to be able to move past this in Elasticsearch 5, so getting up to ES 5 is a blocker for this issue, at least when using elasticsearch-dump.

Potential extra step: mount a PV at /data/ in order to save the dump persistently; otherwise the data has to be copied out of the pod.
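The dump-and-compress step above could be wrapped in a small helper; this is only a sketch assuming a non-TLS endpoint (i.e. after the ES 2.x blocker is resolved), and the function and variable names are made up for illustration:

```shell
#!/bin/sh
# Sketch: dump one index and gzip it in a single pass. Assumes the
# elasticdump binary is on PATH and the endpoint does not require TLS
# client certs. Names are illustrative, not part of any shipped tooling.
es_dump_index() {
    es_url="$1"              # e.g. http://<LOGGING-ES-SERVICE-IP>:9200
    index="$2"               # e.g. logstash-2017.06.14
    outdir="${3:-/data}"     # where to write the archive
    mkdir -p "$outdir"
    # --output=$ streams the dump to stdout so it can be gzipped on the fly
    elasticdump --input="$es_url/$index" --output=$ \
        | gzip > "$outdir/$index.json.gz"
}
```

For example, `es_dump_index http://<LOGGING-ES-SERVICE-IP>:9200 logstash-2017.06.14 /data` would produce /data/logstash-2017.06.14.json.gz.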
Dump is working, but performance is an issue when building it into a separate container (this is a point in favor of building it into the logging-es pod itself, to avoid the extra SDN hop).
# oc run es-dump --image=taskrabbit/elasticsearch-dump -ti --command -- /bin/sh
# oc rsh es-dump-2-lwvb4

Elastic-dump the logs from a specific service/index to a .gz archive:

# elasticdump --input=http://172.22.194.247:9200/logstash-2017.06.14 --output=$ | gzip > /tmp/logstash-2017.06.14.json.gz

Pull the .gz archive from the elastic dump pod to the local workstation:

# oc rsync es-dump-1-2p5x8:/tmp/ .

Upload the archive file to Red Hat Support.

---------------------------------------------------------------------------

Support then downloads the .gz, unpacks the JSON file within, and views the logs with Kibana on a support laptop:

# cd /home/example/
# gunzip logstash-2017.06.14.json.gz
# elasticdump --input=/home/example/logstash-2017.06.14.json --output=http://localhost:9200/miqhackfest-logindex

This works. We can consider building elasticdump into the existing image; if not, we need to see whether we can package our own version of it or mimic its functionality. Ideally this could even be a template with a PVC, to avoid the customer needing to rsync. Everything after the dump is created can be handled by Support.
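The support-side restore in the transcript above can likewise be sketched as a helper. This assumes elasticdump is installed on the support laptop and a local Elasticsearch is listening at the target URL; the function name and parameter handling are made up for illustration:

```shell
#!/bin/sh
# Sketch: unpack a dumped archive and load it into a local Elasticsearch.
# Assumes elasticdump is installed locally; names are illustrative only.
es_restore_index() {
    archive="$1"    # e.g. logstash-2017.06.14.json.gz
    target="$2"     # e.g. http://localhost:9200/miqhackfest-logindex
    json="${archive%.gz}"
    # keep the original archive; write the decompressed JSON next to it
    gunzip -c "$archive" > "$json"
    elasticdump --input="$json" --output="$target"
}
```

For example, `es_restore_index /home/example/logstash-2017.06.14.json.gz http://localhost:9200/miqhackfest-logindex` mirrors the manual steps above.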
With the introduction of OpenShift 4, Red Hat has delivered or roadmapped a substantial number of features based on feedback from our customers. Many of these enhancements cover specific RFEs that have been requested, or deliver a comparable solution to the underlying customer problem, rendering an RFE redundant.

This bz (RFE) has been identified as a feature request not yet planned or scheduled for an OpenShift release and is being closed. If this feature is still an active request that needs to be tracked, Red Hat Support can assist in filing a request in the new JIRA RFE system, as well as provide you with updates as the RFE progresses within our planning processes.

Please open a new support case: https://access.redhat.com/support/cases/#/case/new

As the new JIRA RFE system is not yet public, Red Hat Support can help answer your questions about your RFEs via the same support case system.