Bug 1822681
Summary: | 4.4: anonymous browsers should get a 403 from / | ||
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Ben Parees <bparees> |
Component: | apiserver-auth | Assignee: | Venkata Siva Teja Areti <vareti> |
Status: | CLOSED DUPLICATE | QA Contact: | scheng |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 4.4 | CC: | aos-bugs, bparees, ccoleman, mfojtik, nagrawal, sanchezl, scheng, sttts, vareti, xxia |
Target Milestone: | --- | Keywords: | Reopened |
Target Release: | 4.4.z | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Release Note | |
Doc Text: |
In OpenShift release 4.1, anonymous users could access discovery endpoints. Later releases revoked this access by removing unauthenticated subjects from the relevant cluster role bindings. However, because of the way default policy resources are reconciled, unauthenticated access is preserved in upgraded clusters.
The ability to revoke this access after upgrading a cluster can be added to OpenShift, but doing so automatically would break existing use cases, so cluster administrators are instead given the ability to choose the best path forward for their use case.
Up to five cluster role bindings in an OpenShift 4.6 cluster can give unauthenticated users access to discovery endpoints (a check for which bindings still include the group is sketched after the list):
1. cluster-status-binding
2. discovery
3. system:basic-user
4. system:discovery
5. system:openshift:discovery
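For reference (an illustrative addition, not part of the original release note), the following loop reports which of these bindings still include the unauthenticated group; it assumes oc is logged in with cluster-admin privileges and jq is installed, the same tools the removal snippet below relies on.
## Snippet to report which cluster role bindings still include system:unauthenticated.
$ for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ;
do
### Print the binding name only when system:unauthenticated appears among its subjects
oc get clusterrolebinding ${clusterrolebinding} -o json | jq -e '[.subjects[]?.name] | index("system:unauthenticated") != null' >/dev/null && echo "${clusterrolebinding}";
done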
Cluster administrators can revoke unauthenticated access with the shell script below. Note that after running this snippet, any application that relied on unauthenticated access might start receiving HTTP 403 responses from the API server.
## Snippet to remove the unauthenticated group from all the cluster role bindings.
$ for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ;
do
### Find the index of the unauthenticated group in the list of subjects
index=$(oc get clusterrolebinding ${clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name=="system:unauthenticated") | index(true)');
### Remove the element at that index from the subjects array, skipping bindings that do not include the group
if [ -n "$index" ] && [ "$index" != "null" ]; then
oc patch clusterrolebinding ${clusterrolebinding} --type=json --patch "[{'op': 'remove','path': '/subjects/$index'}]";
fi;
done
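As a quick sanity check (again an illustrative addition, not part of the original note), an anonymous request to the API server root should be rejected once the group has been removed. This assumes curl is available and that the endpoint reported by oc whoami --show-server is reachable; -k skips TLS verification for brevity.
## Check the HTTP status an anonymous client receives from / (expect 403 after removal; an affected cluster returns 200)
$ curl -k -s -o /dev/null -w '%{http_code}\n' "$(oc whoami --show-server)/"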
|
Story Points: | --- |
Clone Of: | 1821771 | Environment: | |
Last Closed: | 2020-09-17 17:53:47 UTC | Type: | --- |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1878095, 1879650 | ||
Bug Blocks: |
Comment 1
Ben Parees
2020-04-09 15:35:47 UTC
(If we are confident we can ship this way, then the test needs to be fixed in 4.4 so we can get passing results.) It also blocks a 4.1->4.2 and 4.2->4.3 upgrade.

I don't see this as blocking.

Shouldn't block, but it is high priority to put the fix into 4.4 to skip / tolerate this condition on older releases (we have a standard way to skip on skew that is already being applied).

This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it; otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.

An enhancement has been created to formalize how to fix this issue with minimal customer disruption: https://github.com/openshift/enhancements/pull/312 As a first step, the impact of the proposed change will be estimated. Based on the data collected, the issue will be fixed in an appropriate way.

Yes, this still needs to be fixed; no one has asked me for any additional information or indicated they have resolved it. This is breaking our "4.1->4.2->4.3->4.4" upgrade test, which needs to pass to ensure that customers who started their clusters as a 4.1 cluster can successfully upgrade to the latest release. https://search.apps.build01.ci.devcluster.openshift.com/?search=anonymous+browsers+should+get+a+403+from&maxAge=48h&context=2&type=bug%2Bjunit shows that it is still failing on the 4.1->4.2->4.3->4.4 job. https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.1-to-4.2-to-4.3-to-4.4-nightly/85

This bug is actively being worked on. I will revisit this after finishing up my current tasks. I am planning to resume the work this sprint.

I am working on other high priority items. I will get to this bug next sprint.

Closing this issue. The issue as such is not reproduced in CI in the latest runs. Whatever failures are present in CI search are due to network or I/O issues.

Pretty sure this isn't showing up because we disabled the test. Do you have evidence the test has actually passed in recent CI runs?

I was not aware that we disabled the test that is failing. I only looked at the CI search to see if there are still failures. Also, is 4.1 supported now? Re-opening the bug.

I actually couldn't find evidence the test is disabled, so I'm a little unclear what happened myself. Perhaps the "related test failure" comment was misleading? You'd have to check with Luis on that. Again, I'd probably go back to basics and see if the behavior (anonymous curl against / returns 200) is actually still happening for a cluster that's been upgraded from 4.1 to 4.5. 4.1 itself isn't supported, but a customer who installed a cluster at 4.1 and has since upgraded to 4.5 is supported. So if there are bugs that manifest when a cluster is upgraded from 4.1 to 4.5 (via 4.2/4.3/4.4), that bug still needs to be fixed (if we deem it sufficiently severe/impacting).

Thanks for the reply. In that case, I will keep this open for now and see what needs to happen with this bug. I am occupied with other priority items.
Work on this bug will be re-evaluated next sprint.

*** This bug has been marked as a duplicate of bug 1880123 *** |