Description of problem:
There is currently no way to expose the OCP cluster topology through the API. The goal would be to fetch the master, etcd, and node configuration through the API. This would ease integration with third-party products that want to rely on OCP; one example is CloudForms. Unless we have access to the Ansible inventory, we cannot get information about how the OCP deployment is set up, so we cannot interact with and scale the platform.

One option would be to store all the configuration in etcd, in a dedicated key/value tree, and then expose it through the API. It would also be great to prevent inconsistency in configuration, because etcd would be the single source of truth.

Version-Release number of selected component (if applicable):
3.6.x

How reproducible:
Always

Steps to Reproduce:
n/a

Actual results:
OCP cluster topology and configuration is not available through the API.

Expected results:
OCP cluster topology and configuration is available through the API.

Additional info:
Fabien, did you look at the labels at the Node level? There is one called 'role'; I see 'app' and 'infra' as values.
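For reference, that 'role' label is visible in the Node objects the API already returns. A minimal sketch of reading it, using a hand-written sample shaped like a `GET /api/v1/nodes` response (node names and label values are illustrative, not from a real cluster):

```python
import json

# Hand-written sample shaped like the NodeList returned by GET /api/v1/nodes
# (names and label values are illustrative).
sample = json.loads("""
{
  "kind": "NodeList",
  "items": [
    {"metadata": {"name": "node-1", "labels": {"role": "infra", "region": "primary"}}},
    {"metadata": {"name": "node-2", "labels": {"role": "app", "region": "primary"}}}
  ]
}
""")

def roles_by_node(node_list):
    """Map node name -> value of the 'role' label (None if unset)."""
    return {
        item["metadata"]["name"]: item["metadata"].get("labels", {}).get("role")
        for item in node_list["items"]
    }

print(roles_by_node(sample))  # {'node-1': 'infra', 'node-2': 'app'}
```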
Loïc, I have seen that too. Indeed, we have tags mapping with a default setup that maps the 'region' label to the 'ocp_region' category. So, it gives us a little idea of what the topology is, but we don't really introspect the platform. And it provides no information if you don't use the 'region' label. I have customers who prefer using 'role' or 'purpose' labels to differentiate nodes between infra, appli, storage... and use the 'region' label to map to their cloud infrastructure, like 'region=eu-west' when deploying to AWS. Moreover, it doesn't give you any information about the node/master/etcd/lb/whatever configuration. For example, a node carries information about the maximum number of pods per node, which can be overridden via the 'kubeletArgs' option. We can already get some information by running 'oc describe node', so we are almost covered for nodes. But we have no such thing for the masters, and they hold the most important part of the configuration.
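To illustrate both points above (the label key varies per customer, and the effective max-pods value is reported in `status.capacity.pods`), here is a small sketch that groups nodes by an arbitrary label key and reads the pod capacity. The payload is a hand-written sample, not real cluster output:

```python
import json
from collections import defaultdict

# Illustrative NodeList payload; 'status.capacity.pods' is where the kubelet
# reports its effective max-pods value (after any kubeletArguments override).
sample = json.loads("""
{
  "items": [
    {"metadata": {"name": "infra-1", "labels": {"purpose": "infra", "region": "eu-west"}},
     "status": {"capacity": {"pods": "250"}}},
    {"metadata": {"name": "app-1", "labels": {"purpose": "appli", "region": "eu-west"}},
     "status": {"capacity": {"pods": "40"}}}
  ]
}
""")

def topology(node_list, label_key):
    """Group node names by the value of an arbitrary label key."""
    groups = defaultdict(list)
    for item in node_list["items"]:
        value = item["metadata"].get("labels", {}).get(label_key, "<unlabeled>")
        groups[value].append(item["metadata"]["name"])
    return dict(groups)

def max_pods(node_list):
    """Map node name -> pod capacity reported by the kubelet."""
    return {i["metadata"]["name"]: int(i["status"]["capacity"]["pods"])
            for i in node_list["items"]}

print(topology(sample, "purpose"))  # {'infra': ['infra-1'], 'appli': ['app-1']}
print(max_pods(sample))             # {'infra-1': 250, 'app-1': 40}
```

Note this only works for node-level settings; as said above, nothing equivalent exists for the master configuration.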
(In reply to Fabien Dupont from comment #2)
> Loïc, I have seen that too. Indeed, we have tags mapping with a default
> setup that maps the 'region' label to the 'ocp_region' category. So, it
> gives us a little idea of what the topology is, but we don't really
> introspect the platform.
>
> And it provides no information if you don't use the 'region' label. I have
> customers who prefer using 'role' or 'purpose' labels to differentiate nodes
> between infra, appli, storage... and use the 'region' label to map to their
> cloud infrastructure, like 'region=eu-west' when deploying to AWS.
>
> Moreover, it doesn't give you any information about the
> node/master/etcd/lb/whatever configuration. For example, a node carries
> information about the maximum number of pods per node, which can be
> overridden via the 'kubeletArgs' option. We can already get some information
> by running 'oc describe node', so we are almost covered for nodes. But we
> have no such thing for the masters, and they hold the most important part of
> the configuration.

Ok, so this is about collecting more inventory information about the OCP installation, and the Node labels are not sufficient. Can you give a list of the exact items you are looking for?
I've been looking into the data returned for the Machine/MachineSet/MachineDeployment objects from oc, but I don't see the data I was hoping to acquire; specifically, the k8s/OCP version running on the Machine, and whether the Machine is hosting the control plane components (i.e. whether it is a master or not).
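A sketch of the closest workaround I'm aware of: the machine-api operator labels Machines with their role, so master Machines can at least be identified that way. The label name below is the one used in recent OCP releases, but treat it as an assumption for your version, and note this still does not surface the k8s/OCP version (the kubelet version lives on the corresponding Node, under `.status.nodeInfo.kubeletVersion`, not on the Machine). The MachineList payload here is hand-written for illustration:

```python
import json

# Role label set by the machine-api operator on Machine objects in recent
# OCP releases; verify it against your cluster before relying on it.
ROLE_LABEL = "machine.openshift.io/cluster-api-machine-role"

# Illustrative MachineList payload (openshift-machine-api namespace).
sample = json.loads("""
{
  "items": [
    {"metadata": {"name": "cluster-master-0",
                  "labels": {"machine.openshift.io/cluster-api-machine-role": "master"}}},
    {"metadata": {"name": "cluster-worker-abc",
                  "labels": {"machine.openshift.io/cluster-api-machine-role": "worker"}}}
  ]
}
""")

def control_plane_machines(machine_list):
    """Names of Machines labeled as control-plane (master) hosts."""
    return [m["metadata"]["name"] for m in machine_list["items"]
            if m["metadata"].get("labels", {}).get(ROLE_LABEL) == "master"]

print(control_plane_machines(sample))  # ['cluster-master-0']
```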