Created attachment 1426308 [details]
kibana screenshot

Description of problem:
In OpenShift 3.10 the master-api, master-controllers and etcd components run as static pods in the kube-system project. After installing logging 3.10.0-0.27.0 and applying the workaround to get the fluentd daemonset running (see https://bugzilla.redhat.com/show_bug.cgi?id=1569106), logs for the pods in kube-system are indexed, but they are not available in kibana.

# oc exec -n openshift-logging -c elasticsearch $POD -- curl --connect-timeout 2 -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/_cat/indices?v
health status index                                                               pri rep docs.count docs.deleted store.size pri.store.size
green  open   .operations.2018.04.24                                                1   0     177631            0      155mb          155mb
green  open   project.svt0.c500fdf2-47ee-11e8-ae99-022e0d5a511e.2018.04.24          1   0       1956            0      582kb          582kb
green  open   project.svt4.c5a84f2c-47ee-11e8-ae99-022e0d5a511e.2018.04.24          1   0       1957            0    589.1kb        589.1kb
green  open   .kibana                                                               1   0          1            0      3.1kb          3.1kb
green  open   project.svt1.c5283390-47ee-11e8-ae99-022e0d5a511e.2018.04.24          1   0       1957            0    566.2kb        566.2kb
green  open   .kibana.3c767c41afb12ada140190ed82db3fd930e2efa3                      1   0          8            0     77.6kb         77.6kb
green  open   project.kube-system.7dce2381-47d8-11e8-9763-022e0d5a511e.2018.04.24   1   0      38351            0     14.9mb         14.9mb
green  open   project.svt3.c5804343-47ee-11e8-ae99-022e0d5a511e.2018.04.24          1   0       1957            0    676.3kb        676.3kb
green  open   project.svt2.c54d857d-47ee-11e8-ae99-022e0d5a511e.2018.04.24          1   0       1956            0    602.2kb        602.2kb
green  open   .searchguard.logging-es-data-master-sxvdpq1q                          1   0          5            0     33.8kb         33.8kb

Running REST API searches against the kube-system index for known strings from the master logs returns good results (an example query is included under Additional info below). However, after logging in to kibana as a cluster-admin, the kibana index dropdown (see attached screenshot) does not show the kube-system project, and searching .all returns none of the known good strings. Selecting the kubernetes.namespace_name stat widget in the menu shows other namespaces, but not kube-system. I believe cluster-admins should be able to see the master and etcd pod logs.

Version-Release number of selected component (if applicable):
logging 3.10.0-0.27.0

How reproducible:
Always (2 for 2 on my installs, anyway)

Steps to Reproduce:
1. Install an OpenShift 3.10 cluster
2. Deploy logging 3.10.0-0.27.0 (inventory below; adjust as appropriate)
3. Implement the fluentd workaround in bug 1569106
4. Use the ES REST API to verify the kube-system index exists and is queryable
5. Log in to kibana as cluster-admin

Actual results:
kube-system project logs are not available in kibana

Expected results:
cluster-admin can view pod logs for control plane pods

Additional info:
[OSEv3:children]
masters
etcd

[masters]
ip-172-31-2-228

[etcd]
ip-172-31-2-228

[OSEv3:vars]
deployment_type=openshift-enterprise
openshift_deployment_type=openshift-enterprise
openshift_release=v3.10
openshift_docker_additional_registries=registry.reg-aws.openshift.com
openshift_logging_install_logging=true
openshift_logging_master_url=https://ec2-54-191-21-206.us-west-2.compute.amazonaws.com:8443
openshift_logging_master_public_url=https://ec2-54-191-21-206.us-west-2.compute.amazonaws.com:8443
openshift_logging_kibana_hostname=kibana.apps.0424-rtb.qe.rhcloud.com
openshift_logging_image_prefix=registry.reg-aws.openshift.com:443/openshift3/
openshift_logging_image_version=v3.10
openshift_logging_es_cluster_size=1
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=20Gi
openshift_logging_es_pvc_storage_class_name=gp2
openshift_logging_fluentd_read_from_head=false
openshift_logging_curator_nodeselector={"region": "infra"}
openshift_logging_kibana_nodeselector={"region": "infra"}
openshift_logging_es_nodeselector={"region": "infra"}
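
Example of the kind of REST API query used in step 4 to confirm the kube-system index is queryable (the index pattern and the search term "etcd" are illustrative placeholders, not the exact query I ran):

# oc exec -n openshift-logging -c elasticsearch $POD -- \
    curl -s -k --cert /etc/elasticsearch/secret/admin-cert \
    --key /etc/elasticsearch/secret/admin-key \
    'https://logging-es:9200/project.kube-system.*/_search?q=message:etcd&size=1&pretty'

A non-zero hit count for strings known to appear in the master logs confirms the documents are in Elasticsearch, so the gap is on the kibana/access side.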
https://bugzilla.redhat.com/show_bug.cgi?id=1571190 proposes sending these logs to the .operations index, which would also take care of this issue.
Mike, can we close this in favor of https://bugzilla.redhat.com/show_bug.cgi?id=1571190?
Closing per comment 2.

*** This bug has been marked as a duplicate of bug 1571190 ***