Description of problem:
The Elasticsearch rollover and delete jobs for app, infra, and audit fail with the following error:
~~~
{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"},"status":403}
Error while attemping to determine the active write alias: {"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"},"status":403}
~~~

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Deploy Cluster Logging with EO image elasticsearch-operator.4.5.0-202012120433.p0 and cluster-logging-operator-v4.5.0-202012120433.p0.
2. The jobs fail after installation of Cluster Logging with the error described above.

Actual results:
The jobs go into an error state.

Expected results:
The jobs should complete successfully.

Additional info:
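For anyone checking a cluster for this, the failing jobs and their logs can be inspected roughly as follows (a minimal sketch; the exact job names, e.g. elasticsearch-rollover-app or elasticsearch-delete-app, vary by release and are assumptions here):
~~~
# List the index-management cron jobs and the jobs they spawned
# (names like elasticsearch-rollover-app / elasticsearch-delete-app
# are assumptions; check the actual names on your cluster).
oc -n openshift-logging get cronjobs
oc -n openshift-logging get jobs

# Pull the logs of a failed job to confirm the 403 security_exception
oc -n openshift-logging logs job/<failing-job-name>
~~~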
@sau, could you provide the must-gather for the cluster (usage: https://github.com/openshift/cluster-logging-operator/tree/master/must-gather#usage)? Thanks.
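For reference, the linked README boils down to something like this (a sketch; the jsonpath image lookup is an assumption and may differ between releases):
~~~
# Run must-gather with the cluster-logging-operator image deployed on the
# cluster (the jsonpath lookup is an assumption; substitute your release's
# image reference if it differs).
oc adm must-gather --image=$(oc -n openshift-logging get deployment.apps/cluster-logging-operator \
  -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}')
~~~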
Hi, any update on this BZ? Thanks, Anand
Hmm, I don't know if there are any limits on Google Drive. Can you directly access the must-gather from the case, then?
Hui Kang, can you also check case 02828704, which has the same symptoms? Thanks, Anand
@jcantril My customer seems to have upgraded to 4.6.17 and they are seeing this issue there too. Do you know if the issue impacts 4.6.17, and are you planning to port the fix to 4.6.z? Thanks, Anand
Verified with elasticsearch-operator.4.5.0-202103060503.p0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.5.35 extras update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:0784
We are seeing the error on an OCP 4.6.20 cluster with:
- CLO version 4.6.0-202103202154.p0
- Elasticsearch operator version 4.6.0-202103130248.p0
~~~
{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"},"status":403}
Error while attemping to determine the active write alias: {"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [indices:admin/aliases/get] and User [name=system:serviceaccount:openshift-logging:elasticsearch, roles=[admin_reader], requestedTenant=null]"},"status":403}
~~~
Same problem with a clean install of:
~~~
cluster-logging.5.0.2-18          Red Hat OpenShift Logging          5.0.2-18   Succeeded
elasticsearch-operator.5.0.2-18   OpenShift Elasticsearch Operator   5.0.2-18   Succeeded
~~~
~~~
[2021-04-21T19:52:00,904][ERROR][c.a.o.s.a.BackendRegistry] [elasticsearch-cdm-fzzbb5dp-1] Cannot authenticate user because admin user is not permitted to login via HTTP
[2021-04-21T19:52:01,005][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-fzzbb5dp-1] No cluster-level perm match for User [name=system:serviceaccount:openshift-operators-redhat:elasticsearch-operator, roles=[admin_reader], requestedTenant=null] Resolved [aliases=[*], indices=[*], allIndices=[*], types=[*], originalRequested=[], remoteIndices=[]] [Action [indices:admin/template/get]] [RolesChecked [admin_user]]
[2021-04-21T19:52:01,005][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-fzzbb5dp-1] No permissions for [indices:admin/template/get]
~~~
Cluster version - 4.6.21
CLO version - 4.6.0-202106181629
ESO version - 4.6.0-202106181629

The index rollover jobs use the service account 'system:serviceaccount:openshift-logging:elasticsearch', so I added it to 'sgconfig/roles_mapping.yml' as follows:
~~~
sg_role_admin:
  users:
    - 'CN=system.admin,OU=OpenShift,O=Logging'
    - 'system:serviceaccount:openshift-logging:elasticsearch'
  backendroles:
    - 'elasticsearch-operator'
~~~
Then I ran 'es_seed_acl' to update the Search Guard permissions. This has to be done on all ES pods, and the changes are not persistent: if a pod gets recreated, the steps above have to be executed again (see the sketch below for one way to run this across pods).
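In case it helps others, here is a minimal sketch of applying that workaround on every ES pod (the label selector component=elasticsearch and the container name elasticsearch are assumptions; verify them on your cluster):
~~~
# After editing sgconfig/roles_mapping.yml inside each pod, re-seed the
# Search Guard ACLs on every Elasticsearch pod. The label selector and
# container name are assumptions; verify them against your cluster.
for pod in $(oc -n openshift-logging get pods -l component=elasticsearch -o name); do
  oc -n openshift-logging exec "$pod" -c elasticsearch -- es_seed_acl
done
~~~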
Hi everyone, you are discussing a 4.6.z issue on a closed BZ for 4.5.z; I only happened to check this out by accident. Next time, please follow up on the 4.6.z advisories for the corresponding BZ: that is the appropriate way to track down if and when something got fixed. For example, in your case:
- The appropriate BZ is https://bugzilla.redhat.com/show_bug.cgi?id=1929688
- The fix is shipped with 4.6.23
- The advisory for 4.6.23 confirming that is https://errata.devel.redhat.com/advisory/70859