Bug 1928581
| Summary: | proxy/cluster settings not updating due to missing local CNO image | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Vincent Lours <vlours> |
| Component: | Node | Assignee: | Qi Wang <qiwan> |
| Node sub component: | Kubelet | QA Contact: | MinLi <minmli> |
| Status: | CLOSED DEFERRED | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | amcdermo, aos-bugs, nagrawal, palonsor, rphillips, wking |
| Version: | 4.6 | Keywords: | Reopened |
| Target Milestone: | --- | Flags: | vlours: needinfo- |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-10-31 17:52:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Vincent Lours
2021-02-15 00:45:29 UTC
Dropping the severity as there is a documented workaround (see the Bugzilla description). Assigning to MCO since they manage the controller config.

Sorry for the delay in response. What is the current status of this bug? What is the MCO's role here: to make sure a wrongly configured proxy doesn't get written to the nodes?

In the PR https://github.com/openshift/machine-config-operator/pull/2539, Qi attempted to test the validity of the proxy by testing an image pull using podman. However, because the MCO image doesn't contain podman, the test fails CI. Besides, there is a debate about what would be a more appropriate place to do such validation; there could be other similar configurations that, if wrong, can cause cluster disruption. Closing this BZ as we don't have a clear path forward for a fix. If needed, please reopen and suggest ways/places we can do this testing.

You can do it with `skopeo inspect`. If skopeo is not available in the MCO image, I don't understand why it cannot be added. Other similar configurations that can cause something like this may require separate BZs; let's focus on this one.

Skopeo can be used inside a container by pulling the skopeo image [1], so a tool to pull images is needed. In the PR we directly tried to use podman. I followed the instructions from https://www.redhat.com/sysadmin/podman-inside-container to install podman inside the registry.ci.openshift.org/ocp/4.9:base image. Many errors occurred; it seems the podman package was not available in the OCP image at that time. I can get back to this and retry to see if it's available in the current version of the OCP image.

[1] https://www.redhat.com/sysadmin/how-run-skopeo-container

Skopeo is a binary that can be installed, either on your workstation or in a container, e.g. with "dnf install skopeo".

Thanks. I just tried: both podman and skopeo install successfully in the OCP image. I converted the PR to use skopeo to see if it can pass CI.
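As a rough sketch of the `skopeo inspect` approach suggested in the thread: the idea is to fail fast if the configured proxy cannot reach a registry, before the proxy settings are rolled out to nodes. The proxy URL and image pullspec below are illustrative placeholders, not values from this bug, and the check assumes skopeo is present in the image that runs it.

```shell
# Hedged sketch: verify the proxy can reach the registry before applying it.
# PROXY and IMAGE are made-up examples, not taken from this bug.
PROXY="http://proxy.example.com:3128"
IMAGE="quay.io/openshift-release-dev/ocp-release:4.6.1-x86_64"

if HTTPS_PROXY="$PROXY" skopeo inspect "docker://$IMAGE" > /dev/null; then
    echo "proxy validation passed"
else
    echo "proxy validation failed; refusing to roll out proxy config" >&2
    exit 1
fi
```

Unlike `podman pull`, `skopeo inspect` only fetches the image manifest, so the check is cheap and needs no container runtime or privileged storage, which is what made it viable for CI where the podman-based attempt failed.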
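The "dnf install skopeo" step discussed above could look roughly like this. The base image tag comes from the comment; the Containerfile name and output tag are arbitrary choices for this sketch, and the build assumes podman and network access to both registries.

```shell
# Sketch: layer skopeo onto the OCP base image mentioned in the thread,
# so the validation tooling is available inside the operator image.
cat > Containerfile.skopeo <<'EOF'
FROM registry.ci.openshift.org/ocp/4.9:base
RUN dnf install -y skopeo && dnf clean all
EOF
podman build -f Containerfile.skopeo -t ocp-base-with-skopeo .
```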
Closing this bug per the discussion at https://github.com/openshift/machine-config-operator/pull/2539#issuecomment-1292410131: the MCO is not responsible for proxy validation, and the original customer case is closed. If further discussion is needed, this can be tracked in a new feature request. Changing to CLOSED DEFERRED; this is not a problem we must never solve, but a problem we are not solving now.

Just to make this clear: I agree that MCO is not the right team to address this, but I strongly disagree with leaving it unsolved. This should have been tested and solved at the CNO level. However, I see it as practical to open a new bug against the CNO component once we have other occurrences.