Bug 1952448
Summary: | Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Oleg Sher <osher>
Component: | Bare Metal Hardware Provisioning | Assignee: | sdasu
Bare Metal Hardware Provisioning sub component: | cluster-baremetal-operator | QA Contact: | Aleksandra Malykhin <amalykhi>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | medium | |
Priority: | medium | CC: | amalykhi, aos-bugs, rbartal
Version: | 4.8 | Keywords: | Triaged
Target Milestone: | --- | |
Target Release: | 4.8.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2021-07-27 23:02:52 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description (Oleg Sher, 2021-04-22 09:45:41 UTC)
When the Provisioning CR contains just the Provisioning Network and Provisioning OS Download URL, the metal3 pod ends up with just 8 containers, and that is expected behavior.

Spec:
  Provisioning Network: Disabled
  Provisioning OS Download URL: http://192.168.111.1/images/rhcos-48.83.202103221318-0-openstack.x86_64.qcow2.gz?sha256=323e7ba4ba3448e340946543c963823136e1367ed0b229d2a05e1cf537642bb8

[stack@localhost dev-scripts]$ oc get pods -n openshift-machine-api
NAME                                           READY   STATUS    RESTARTS   AGE
cluster-autoscaler-operator-68ff977bd5-q5k6l   2/2     Running   0          60m
cluster-baremetal-operator-846d767c44-lph69    2/2     Running   0          60m
machine-api-controllers-c6fb94c57-8lnlp        7/7     Running   1          54m
machine-api-operator-868d49f997-llzhc          2/2     Running   0          60m
metal3-5f476b595b-tj872                        8/8     Running   0          3m34s
metal3-image-cache-5xzjw                       1/1     Running   0          52m
metal3-image-cache-hlx8s                       1/1     Running   0          52m
metal3-image-cache-kslhq                       1/1     Running   0          52m

But when the Provisioning CR is edited to only change the Provisioning Network from Managed to Disabled (all other fields are left intact), 9 containers are still active after the metal3 pod terminates and restarts. So the conditions under which this error is seen are not listed accurately in the description.

Verified on OCP 4.8.0-rc.1:

1. Verify that the metal3 pod is running 10/10 containers
[kni@provisionhost-0-0 ~]$ oc get pods -n openshift-machine-api
...
metal3-64fdf54f4d-26tkn   10/10   Running   0   50m

2. Save the config file
[kni@provisionhost-0-0 ~]$ oc get provisioning -o yaml > new_disabled_mode.yaml

3. Remove the provisioningDHCPRange, provisioningIP, provisioningInterface, and provisioningNetworkCIDR lines from the config file and change the provisioningNetwork type. The spec should look like:
  spec:
    provisioningNetwork: Disabled
    provisioningOSDownloadURL: http://registry.ocp-edge-cluster-0.qe.lab.redhat.com:8080/images/rhcos-48.84.202106091622-0-openstack.x86_64.qcow2.gz?sha256=2efc7539f200ffea150272523a9526ba393a9a0b8312b40031b13bfdeda36fde

4. Apply the new config file (a single-command alternative is sketched at the end of this report)
[kni@provisionhost-0-0 ~]$ oc apply -f set_disabled_mode.yaml
provisioning.metal3.io/provisioning-configuration configured

5. Check the pod status (the metal3 pod now runs only 8/8 containers)
[kni@provisionhost-0-0 ~]$ oc get pods -n openshift-machine-api
NAME                      READY   STATUS    RESTARTS   AGE
...
metal3-76c6758645-5l5zc   8/8     Running   0          81s

6. Verify the config file
[kni@provisionhost-0-0 ~]$ oc get provisioning -o yaml
...
  spec:
    provisioningNetwork: Disabled
    provisioningOSDownloadURL: http://registry.ocp-edge-cluster-0.qe.lab.redhat.com:8080/images/rhcos-48.84.202106091622-0-openstack.x86_64.qcow2.gz?sha256=2efc7539f200ffea150272523a9526ba393a9a0b8312b40031b13bfdeda36fde
...

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438
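For reference, the Managed to Disabled switch in steps 2-4 can also be expressed as a single oc patch against the provisioning-configuration resource (the name shown in the apply output above). This is a minimal sketch, assuming the Provisioning validation accepts a JSON merge patch and that nulling the network-related fields clears them the same way deleting the lines from the saved YAML does:

# Switch to Disabled mode and drop the provisioning-network fields in one step.
# Nulling a field in a JSON merge patch removes it from the spec.
$ oc patch provisioning provisioning-configuration --type merge -p '{
    "spec": {
      "provisioningNetwork": "Disabled",
      "provisioningDHCPRange": null,
      "provisioningIP": null,
      "provisioningInterface": null,
      "provisioningNetworkCIDR": null
    }
  }'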
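To confirm that the metal3-static-ip-manager container was actually removed after the switch, rather than relying only on the 8/8 READY count in step 5, the container names of the metal3 pod can be listed directly. A sketch, using the pod name from step 5 as an example:

# List the containers of the metal3 pod; metal3-static-ip-manager should no longer appear.
$ oc get pod metal3-76c6758645-5l5zc -n openshift-machine-api \
    -o jsonpath='{range .spec.containers[*]}{.name}{"\n"}{end}'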