The install gather uses only the container ID in the log filenames for podman containers, while the info gathered from crictl already includes the container name in the filename. This change names all bootkube podman containers and then ensures those names are used in the logfile names, making it easier to find the log you want. Current output looks like this: ``` 048ef60a5c98.inspect 19469a52f7ad.log 37295b9146d0.inspect 47a6928aec07.log 9f0336ed4472.inspect adc089be665d.log de3b81bae57f.inspect e87f59adff94.log
048ef60a5c98.log 28df02fb561a.inspect 37295b9146d0.log 7cf3eb0e9c83.inspect 9f0336ed4472.log afc6dbef0228.inspect de3b81bae57f.log efae86fe1adf.inspect
0eafc3264794.inspect 28df02fb561a.log 39c7564d80ee.inspect 7cf3eb0e9c83.log a3dbad8cdd94.inspect afc6dbef0228.log e7154648c31d.inspect efae86fe1adf.log
0eafc3264794.log 3108024d2973.inspect 39c7564d80ee.log 88c5ac23359d.inspect a3dbad8cdd94.log dbf2f389ffa8.inspect e7154648c31d.log fd51af06a0ea.inspect
19469a52f7ad.inspect 3108024d2973.log 47a6928aec07.inspect 88c5ac23359d.log adc089be665d.inspect dbf2f389ffa8.log e87f59adff94.inspect fd51af06a0ea.log ``` Whereas I'd expect the filenames to be, for example, ironic-api-fd51af06a0ea.log.
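The desired naming scheme can be illustrated with a small shell sketch. This is a hypothetical helper, not the installer's actual gather script: it combines the container name (when one is set) with the short ID, falling back to the ID-only filename otherwise.

```shell
# Hypothetical helper showing the intended filename scheme:
# "<name>-<short-id>.<ext>" when the container has a name,
# "<short-id>.<ext>" when it does not.
log_filename() {
    name="$1"; id="$2"; ext="$3"
    if [ -n "$name" ]; then
        printf '%s-%s.%s\n' "$name" "$id" "$ext"
    else
        printf '%s.%s\n' "$id" "$ext"
    fi
}

# Example usage (name and ID as reported by e.g.
# `podman ps -a --format '{{.Names}} {{.ID}}'`):
log_filename ironic-api fd51af06a0ea log    # ironic-api-fd51af06a0ea.log
log_filename "" fd51af06a0ea inspect        # fd51af06a0ea.inspect
```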
Installed nightly build 4.8.0-0.nightly-2021-04-30-201824 and collected the installer gather log; the filenames of the pod logs/inspects on the bootstrap host contain only the ID, as below: ``` 1147bda73d30.inspect 4b3ba7011823.log 6c8dc86ebb39.inspect 77c6fdbb09f4.log a99234c78713.inspect abb8c94e7759.log f2bdca453214.inspect
1147bda73d30.log 66a481837605.inspect 6c8dc86ebb39.log 7c2ff1bd4508.inspect a99234c78713.log b691707a06f8.inspect f2bdca453214.log
4b3ba7011823.inspect 66a481837605.log 77c6fdbb09f4.inspect 7c2ff1bd4508.log abb8c94e7759.inspect b691707a06f8.log ``` Then verified on nightly build 4.8.0-0.nightly-2021-05-06-003426 with the fix: the filenames of the pod logs/inspects contain both the name and the ID, which is expected. ``` cco-render-d2c430bd3c12.inspect config-render-0884df69e5ad.inspect etcd-render-c1a15209e340.inspect kube-apiserver-render-6822136fbe3e.inspect kube-scheduler-render-4569dd99f98c.inspect
cco-render-d2c430bd3c12.log config-render-0884df69e5ad.log etcd-render-c1a15209e340.log kube-apiserver-render-6822136fbe3e.log kube-scheduler-render-4569dd99f98c.log
cluster-bootstrap-f758cad23327.inspect cvo-render-8c4e3df6a4ba.inspect ingress-render-6b1fe9f4ddaa.inspect kube-controller-render-3791112c59c6.inspect mco-render-8863784e036f.inspect
cluster-bootstrap-f758cad23327.log cvo-render-8c4e3df6a4ba.log ingress-render-6b1fe9f4ddaa.log kube-controller-render-3791112c59c6.log mco-render-8863784e036f.log ```
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438