More info:
1. In #1 of "Additional info:", I meant logging in to the same fluentd container with the "docker exec" command on the Docker backend.
2. The other logging pods are all accessible with "oc rsh".
3. I had already added this line to scc/privileged:
   - system:serviceaccount:logging:aggregated-logging-fluentd
4. This test was done with a cluster-admin role user.
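For reference, a minimal sketch of what the relevant stanza in scc/privileged looks like after that addition (the other entries and fields are placeholders, not taken from my cluster):

```yaml
# Fragment of `oc edit scc/privileged`; other fields omitted,
# and the first entry is a placeholder for whatever is already there
users:
- system:admin
- system:serviceaccount:logging:aggregated-logging-fluentd
```

The same change can also be made without editing the scc directly, via `oc adm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd` (older releases use `oadm policy` instead of `oc adm policy`).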
Eric created https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/issues/16. @xia, can you test fluentd with K8S_HOST_URL=https://kubernetes.default.svc.cluster.local/ ?
@lmeyer Adding K8S_HOST_URL=https://kubernetes.default.svc.cluster.local/ did not enable me to shell into the fluentd pod. After I added the project admin user name to scc/privileged, I was able to oc rsh into the fluentd pod. I'm not quite sure how the ability to shell into a pod relates to these fields in the privileged scc (oc edit scc/privileged):

allowEmptyDirVolumePlugin: true
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegedContainer: true
allowedCapabilities: null

It seems we could add this information to the logging deployment doc to tell end users what they need to do in order to shell into the fluentd pod.
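For the doc note, a minimal sketch of the change that worked for me; the user name here is a placeholder for whichever user needs rsh access:

```yaml
# Fragment of `oc edit scc/privileged`; "myprojectadmin" is a
# placeholder user name, not an account from my cluster
users:
- system:serviceaccount:logging:aggregated-logging-fluentd
- myprojectadmin
```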
@ewolinet @lmeyer Sorry, I misunderstood you in my previous comment. I tested with K8S_HOST_URL=https://kubernetes.default.svc.cluster.local in the fluentd daemonset, and the "Connection refused" error message and error stacks in the fluentd pod log disappeared.
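For anyone hitting the same error, a sketch of what that daemonset change looks like; the container name is an assumption based on a typical logging-fluentd deployment, not copied from my cluster:

```yaml
# Fragment of the fluentd daemonset spec; the container name
# "fluentd-elasticsearch" is an assumption, adjust to match your deployment
spec:
  template:
    spec:
      containers:
      - name: fluentd-elasticsearch
        env:
        - name: K8S_HOST_URL
          value: https://kubernetes.default.svc.cluster.local
```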
Verified with latest images built from logging upstream. Closing as fixed.