Description of problem:
It has been decided that `oc version` will only return the k8s version, unlike in the 3.x series. `oc get clusterversion` was suggested as a replacement for that functionality. That only works for cluster admins though, which is not very useful for regular users.

> ➜ ~ oc whoami
> testuser-0
> ➜ ~ oc get clusterversion
> Error from server (Forbidden): clusterversions.config.openshift.io is forbidden: User "testuser-0" cannot list resource "clusterversions" in API group "config.openshift.io" at the cluster scope

It would be especially needed for REST API users and tool writers (e.g. maven/ansible/etc.) where the tools would be expected to work with different OpenShift versions but no way for the tool to figure out what that version is.

I suggest making the cluster version visible anonymously, like the k8s version is. Or at least letting authenticated users obtain it.

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-23-222829

How reproducible:
always

Steps to Reproduce:
1. oc get clusterversion

Actual results:
Forbidden

Expected results:
4.0.0-0.nightly-2019-03-23-222829
> It would be especially needed for REST API users and tool writers (e.g. maven/ansible/etc.) where the tools would be expected to work with different OpenShift versions but no way for the tool to figure out what that version is.

For this case, why is the API server version not enough for these users?
It might be. Please tell me how that is obtained so we can check. Thank you.
Michal,

> > It would be especially needed for REST API users and tool writers (e.g. maven/ansible/etc.) where the tools would be expected to work with different OpenShift versions but no way for the tool to figure out what that version is.
>
> For this case, why is the API server version not enough for these users?

If you mean the approach in https://bugzilla.redhat.com/show_bug.cgi?id=1658957#c16 — those all need admin rights. It would be more friendly if a regular user could get the OpenShift version rather than just the kube version, since we are OpenShift.
It seems like useful info to have a version number.
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale". If you have further information on the current state of the bug, please update it, otherwise this bug will be automatically closed in 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.
Nothing changed on my end. The situation is exactly the same as before and we still need cluster admin access to get the version.
*** This bug has been marked as a duplicate of bug 1826750 ***
Maciej, the original component I set for the issue was `openshift-apiserver`. Then Stefan Schimanski changed it to `oc` and then you closed the issue as not a bug of `oc`. This is a little bit unfair to the bug :)

In either case, if an issue has the wrong component assigned for whatever reason, it needs to be edited and the proper component set, not just closed. Often the user or QE is not sure about the proper component. It is expected that the component owner double checks that the proper component has been selected and reroutes it (where the other component owner will double check whether this was the correct selection), and so on until the proper component is found.

I'm reopening now and setting the component to the cluster version operator. I'm thinking that this operator could expose an anonymous or regular-user-accessible API endpoint. If the component owner believes this should be a task for another component, please re-route.
> It would be especially needed for REST API users and tool writers (e.g. maven/ansible/etc.) where the tools would be expected to work with different OpenShift versions but no way for the tool to figure out what that version is.

Are you interested in... the RHCOS version? The OpenShift version strings are basically pulled out of thin air, and using them to judge behavior of underlying components (like RHCOS) seems... at least imprecise. Can you link us to the code where you plan to consume the version information?

> I'm thinking that this operator could expose an anonymous or regular-user-accessible API endpoint.

If regular auth is acceptable, and you're just looking for something attached to the cluster that tends to get bumped with product minor versions (again, this is going to be pretty imprecise), 'oc version' hits https://$API/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver with regular-user auth to get the API-server version.
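For reference, a minimal sketch of querying that endpoint from the command line (treat it as an assumption that a given cluster's RBAC lets a regular user read ClusterOperator objects):

$ # same path 'oc version' uses, issued as the logged-in user
$ oc get --raw /apis/config.openshift.io/v1/clusteroperators/openshift-apiserver
$ # or just the reported operator/operand versions
$ oc get clusteroperator openshift-apiserver -o jsonpath='{.status.versions}'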
Also, moving to 4.6. This should not block 4.5 unless there's a clearer story about why it is that important.
Thank you for looking.

Presently, when running tests, we need to know the cluster version to know what to expect. Many tests directly or indirectly use methods that behave according to the cluster version [1]. Presently we get the version as kubeadmin when available, or a person needs to set the version manually [2]. Granularity is not very important for this use case.

As a user, if I'm given access to a cluster, I need to know the cluster version just to use the proper `oc` version. `oc` is only supported for version +/- 1 minor server version. So if I am given access to a cluster at version 4.4, then I need to know that I have to use `oc` 4.3 to 4.5. This use case also does not require high precision.

Regarding the tool-development use case, I see ODO [3], which we support. How do we make sure that it is compatible with the target cluster? I can't quickly see how it manages to detect the cluster version. Maybe that tool can be an example of how this is being handled presently?

In case I as a user want to check what cluster version is running, to make sure it runs a version without known vulnerabilities before putting my data on it, then I need a precise version, which presently I don't see how to get without administrative access.

[1] https://github.com/openshift/verification-tests/blob/master/lib/environment.rb#L327-L345
[2] https://github.com/openshift/verification-tests/blob/master/lib/environment.rb#L280-L287
[3] https://github.com/openshift/odo
> `oc` is only supported for version +/- 1 minor server version.

This is certainly not how I've been using oc. Can you link docs to back this up? And this case seems like it would already be handled by whatever 'oc version' uses to extract the Kubernetes API-server version (I said https://$API/apis/config.openshift.io/v1/clusteroperators/openshift-apiserver in comment 10, but upon closer inspection of 'oc -v=8 version' it appears that 403ed for me, and the API-server version actually came from https://$API/version). Clients connecting to servers should be in charge of at least warning, if not erroring, if they diverge too far from the server's supported API. Doesn't seem like something that needs an anonymous or general-user OpenShift version endpoint to me.

> Many tests directly or indirectly use methods that behave according to the cluster version [1]

Trying to find tests that a general user could run which would require this, I see tests comparing the cluster version with ClusterOperator version information [1]. But if you have read access to ClusterOperator, you probably also have read access to ClusterVersion. And if there is a common class of users that can access ClusterOperator but not ClusterVersion who run these tests, let's talk about that use case in more detail. I don't see any other hits for ocp_version in this repo. Can you point me at another consumer?

> In case I as a user want to check what cluster version is running, to make sure it runs a version without known vulnerabilities before putting my data on it, then I need a precise version, which presently I don't see how to get without administrative access.

This seems like an administrator problem, or a user <-> administrator problem. If the admin falls behind, they can figure that out themselves, and should be patching their cluster (e.g. by updating the OpenShift core and/or non-core additions). Cluster users should be able to trust the admin to do this (and to continue to stay up to date). If you don't trust the admin to stay patched, you should not host your material on that cluster, regardless of whether they happen to currently be up to date or not.

[1]: https://github.com/openshift/verification-tests/blob/773c7a58a1607d2f0b12076f8da308b57934ee2c/features/networking/operator.feature#L15
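For completeness, a minimal sketch of hitting that endpoint, reusing the $API placeholder from comment 10 (whether /version is readable anonymously or only by authenticated users depends on the cluster's RBAC, so treat that as an assumption):

$ oc get --raw /version
$ # or without oc, supplying a token explicitly
$ curl -k -H "Authorization: Bearer $(oc whoami -t)" https://$API/version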
We do not have time to fix the bug in this sprint as we are working on higher priority bugs and features. Hence we are adding UpcomingSprint now, and we'll revisit this in the next sprint.
>> `oc` is only supported for version +/- 1 minor server version.
>
> This is certainly not how I've been using oc. Can you link docs to back this up?

https://docs.openshift.com/container-platform/4.5/release_notes/versioning-policy.html

And I have experienced situations in the past where `oc` was practically unusable with a server version diverging by 3-4 minor versions. Given that `oc` can't detect the cluster version and the user has no way to see the cluster version, it is quite likely that a user could use an incorrect one, which in turn can lead to unexpected errors.

> Trying to find tests that a general user could run which would require this

Within test scenarios we usually use steps like `Given the master version >= "4.4"`. You can grep for "the master version " in the verification-tests repo as well as in the cucushift repo where we keep higher-tier tests. Since 4.x I haven't heard of us testing clusters without having cluster admin access. I'm not sure, as I don't have an overview of everything the team does, but maybe the relevance of this case is at least lower than in 3.x, where we had to test dedicated and other clusters without cluster admin credentials.

> Cluster users should be able to trust the admin to do this (and to continue to stay up to date).

This is an ideal-world situation that one can never rely on. How many times have we seen apparently trustworthy providers lacking good security practices in recent years?

IIRC there were some 3.x versions where getting the exact version as a non-admin was not possible. There were some versions where a non-admin could see it, but only in the web console. Eventually it became available as an API, most likely after a customer request. I'm not sure what the reason is to remove this ability in 4.x. I don't see anything that changed to make such an ability less useful or more harmful than during 3.x.
>>> `oc` is only supported for version +/- 1 minor server version.
>>
>> This is certainly not how I've been using oc. Can you link docs to back this up?
>
> https://docs.openshift.com/container-platform/4.5/release_notes/versioning-policy.html

The only sentence mentioning "support" there is:

> A 4.3 server may have additional capabilities that a 4.2 oc cannot use and a 4.3 oc may have additional capabilities that are not supported by a 4.2 server.

But my 4.2 oc and 4.2 workflows around it should continue to work perfectly fine (and be supported, until 4.2 goes EOL, which it has) with 4.3 and 4.4 and 4.5 releases. And my 4.5 oc should continue to work perfectly fine with older clusters, as long as I stick to 4.2 workflows (e.g. no new-in-4.5 commands or flags). Particular features may be deprecated and eventually, after sufficient minor versions, removed, and that might break my old oc binary and workflows. But as long as I am not involving deprecated features, I don't see a problem with larger oc/cluster divergence.

> You can grep for "the master version " in the verification-tests repo...

Ahh, that turns up things like [1], where we grow additional steps for CSI as the 4.y version advances. Are there docs/release notes that explain things like:

> And the expression should be true> pvc("csi-pvc").capacity(cached: false) == "2Gi"

being something that CSI supports in 4.3 but not in 4.2?

>> Cluster users should be able to trust the admin to do this (and to continue to stay up to date).
>
> This is an ideal-world situation that one can never rely on.

If you don't trust your admin to stay up to date, how frequently do you poll installed versions to see if they're slipping? Why do you trust them not to stub in some API to lie to your client about which version is installed?

> Eventually it became available as an API, most likely after a customer request.

Do we have records of that discussion?

[1]: https://github.com/openshift/verification-tests/pull/1202/files#diff-84ae663de1ffca59e6132981bcf64164R7
Moving it to 4.7.0 as it is not critical for 4.6
Sorry for the delay. Digging up old discussions is time consuming and I was procrastinating, but I've got a fresh example [1]. The developer reads the support policy [2] as guaranteeing only ±1 version compatibility. Also, in the support doc [1] the wording is:

> oc client may not be able to access server features.

which doesn't specify whether those server features are new or not.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1860789#c11
[2] https://docs.openshift.com/container-platform/4.5/release_notes/versioning-policy.html

> Are there docs/release notes that explain things like: ... being something that CSI supports in 4.3 but not in 4.2?

I'm not sure what kind of docs you are asking about. With each new release, the team tries to run all tests from the previous release. In case something breaks, it is investigated whether the change is expected or not by communicating with the dev team. If the change is expected, then the test case becomes dependent on the cluster version in one way or another. How we organize the test suite is not the point, though. It is an example that things do change between releases; some commands stop working and new ones are introduced. So any tool that has to work with different OpenShift versions has to consider the cluster version (ideally without asking the user to specify it manually).

> If you don't trust your admin to stay up to date, how frequently do you poll installed versions to see if they're slipping?

Maybe after I read about a major CVE in the news.

> Why do you trust them not to stub in some API to lie to your client about which version is installed?

Because I will not assume evil intent, but I'm likely to assume potential negligence.

> Do we have records of that discussion?

I found that the client version API originally became implemented in bug 1353355. Other places where I found the version endpoint discussed: https://issues.redhat.com/browse/CONSOLE-2155 and bug 1471717.

To summarize, the request is to allow non-admin users to obtain the cluster version string as presented by `oc get clusterversion`, without the extra details. Ideally at the same API entrypoint as 3.x.
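For concreteness, the string in question is what a cluster-admin can pull today with something like the sketch below (it assumes the standard ClusterVersion object named "version" and admin-level read access):

$ oc get clusterversion version -o jsonpath='{.status.desired.version}'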
Another recent example of a feature supported only on version `n-1` and not any earlier is [1]. You can see the `oc` support PR [2] is marked for backport to 4.5 only. I think these are a clear indication that no more than one minor release of difference between `oc` and the cluster is actually supported. I hope this convinces you that the issue is relevant for CLI as well as API users.

[1] https://github.com/openshift/enhancements/pull/323/files#diff-9384ee79f743e4ca12a12a9691ca450aR87
[2] https://github.com/openshift/oc/pull/521
> You can see the `oc` support PR [2] is marked for backport to 4.5 only.

Something of a hack, but for cases where you're using oc, you can look at the version you pull out of the cluster:

$ curl -s https://downloads-openshift-console.apps.build01.ci.devcluster.openshift.com/amd64/linux/oc.tar | tar -xv oc
$ ./oc version --client
Client Version: openshift-clients-4.5.0-202006231303.p0-9-ge40bd2dd9
*** Bug 1889919 has been marked as a duplicate of this bug. ***
Moving to the next sprint
Trevor, what is the downside of giving anonymous access to "oc get clusterversion"?
There's a lot of information in ClusterVersion, and I'm not sure all cluster admins would be on board with making all of it public. But my main concern here is that the presence of a particular string in some ClusterVersion status property is probably not what consumers actually care about. The versioning policy linked from comment 15 and later says:

> The OpenShift Container Platform version must match between master and node hosts, excluding temporary mismatches during cluster upgrades.

But my view of updates is more in the "continuously reconciling" space than in the "occasional, temporary skew" space. Say that, after inspecting ClusterVersion, you feel like "the cluster" is actually at 4.5.11. But maybe you got that information from the current target, and the Kube API (or whatever API you're actually going to hit) was still at... whatever Kube API version is part of OCP 4.4.27? Or maybe you'd chained through a few partial, retargeted updates, and the Kube API pods were actually running images from OCP 4.4.15?

As far as generic clients go, I think the best approach is to just talk to the service you're attempting to talk to, and work out a version you can both speak, instead of pulling an OCP version string, making assumptions about the service you're attempting to talk to, and then requiring those assumptions to hold up once you actually start talking to the service.

For compliance validation, life is a bit easier, because you can use out-of-band checks to ensure that the cluster is all happy and leveled at a particular OCP version, including all the managed components and their APIs being at their expected versions, and then launch your compliance tests ('ocp-is-compliant --version 4.5.11 --kubeconfig path/to/config', or whatever).
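To illustrate the partial-update point, here is a sketch of how a cluster-admin could inspect the update history, including Partial entries (again assuming the standard ClusterVersion object named "version"):

$ oc get clusterversion version -o jsonpath='{range .status.history[*]}{.state}{"\t"}{.version}{"\n"}{end}'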
Today I heard about bug 1850656, where folks are also discussing exposing the OpenShift version to all authenticated users. I'm closing this bug as a dup of that one (even though that one came later), because it includes more discussion around how folks plan to expose the value. *** This bug has been marked as a duplicate of bug 1850656 ***