Bug 1828836 (OCPRHV-62-4.5) - Connection failure to ovirt API will panic
Summary: Connection failure to ovirt API will panic
Keywords:
Status: CLOSED ERRATA
Alias: OCPRHV-62-4.5
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Roy Golan
QA Contact: Jan Zmeskal
URL: https://issues.redhat.com/browse/OCPR...
Whiteboard:
Depends On:
Blocks: 1829768 OCPRHV-62-4.4.z
 
Reported: 2020-04-28 13:16 UTC by Roy Golan
Modified: 2020-07-13 17:32 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1829768 OCPRHV-62-4.4.z
Environment:
Last Closed: 2020-07-13 17:32:20 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift/cluster-api-provider-ovirt pull 43 (closed): Bug 1828836: Connection failure to ovirt API will panic - last updated 2021-01-18 11:36:58 UTC
Red Hat Product Errata RHBA-2020:2409 - last updated 2020-07-13 17:32:39 UTC

Description Roy Golan 2020-04-28 13:16:22 UTC
Description of problem:

Mishandling of an error in the oVirt API client will panic the container.

This was already fixed in ovirt/go-ovirt; we just need to bump the vendored version.
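
To illustrate the failure mode, here is a minimal, self-contained Go sketch (the token type and acquireToken function are made up for this example; the real change is in go-ovirt's Connection.Test/testToken): the pre-fix code discards the error from the token request and dereferences the nil result, crashing the process, while the fixed code returns the error to the caller.

package main

import (
    "errors"
    "fmt"
)

// token and acquireToken are hypothetical stand-ins for the SSO token type and
// the token request against the oVirt engine.
type token struct{ value string }

func acquireToken() (*token, error) {
    // Simulates the engine being unreachable.
    return nil, errors.New("dial tcp: connect: no route to host")
}

// testBuggy mirrors the pre-fix behaviour: the error is discarded and the nil
// token is dereferenced, which panics the whole machine-controller process.
func testBuggy() {
    tok, _ := acquireToken()
    fmt.Println("token:", tok.value) // nil pointer dereference -> runtime panic
}

// testFixed mirrors the post-fix behaviour: the error is returned to the
// caller, which can log it and retry on the next reconcile.
func testFixed() error {
    tok, err := acquireToken()
    if err != nil {
        return fmt.Errorf("failed to test connection: %w", err)
    }
    fmt.Println("token:", tok.value)
    return nil
}

func main() {
    if err := testFixed(); err != nil {
        fmt.Println("error returned instead of panicking:", err)
    }
    // testBuggy() would instead terminate the process with
    // "invalid memory address or nil pointer dereference".
}

Calling testBuggy() with the engine unreachable terminates the program with the same "invalid memory address or nil pointer dereference" seen in the trace under Additional info; testFixed() just reports the error.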

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Spin up the container when the connection cannot be opened.
2. See that the container panics.



Actual results:
Panic and process exit

Expected results:
Return the error instead of panicking.
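
A caller-side sketch of the expected behaviour, assuming the usual go-ovirt v4 ConnectionBuilder API (illustrative only, not the actual change to getConnection in actuator.go):

package main

import (
    "fmt"

    ovirtsdk "github.com/ovirt/go-ovirt"
)

// getConnection is a hypothetical version of the actuator helper that surfaces
// connection problems as errors instead of letting a later call panic.
func getConnection(url, username, password, caFile string) (*ovirtsdk.Connection, error) {
    conn, err := ovirtsdk.NewConnectionBuilder().
        URL(url).
        Username(username).
        Password(password).
        CAFile(caFile).
        Build()
    if err != nil {
        return nil, fmt.Errorf("failed to build oVirt API connection: %w", err)
    }
    // Test must itself return an error when the engine is unreachable
    // (the go-ovirt side of this fix) rather than dereferencing a nil token.
    if err := conn.Test(); err != nil {
        return nil, fmt.Errorf("failed to reach the oVirt engine: %w", err)
    }
    return conn, nil
}

func main() {
    _, err := getConnection("https://engine.example.com/ovirt-engine/api",
        "admin@internal", "secret", "/etc/pki/ovirt-ca.pem")
    if err != nil {
        // The controller logs the error and retries on the next reconcile.
        fmt.Println("connection error returned, no panic:", err)
    }
}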

Additional info:
E0427 18:15:45.293408       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 306 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1c55860, 0x2fb6a60)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x1c55860, 0x2fb6a60)
	/usr/local/go/src/runtime/panic.go:679 +0x1b2
github.com/ovirt/go-ovirt.(*Connection).testToken(0xc000112a00, 0x0, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/github.com/ovirt/go-ovirt/connection.go:96 +0x238
github.com/ovirt/go-ovirt.(*Connection).Test(0xc000112a00, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/github.com/ovirt/go-ovirt/connection.go:75 +0x2f
github.com/openshift/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine.(*OvirtActuator).getConnection(0xc0000c34a0, 0xc0008823e0, 0x15, 0xc0009452e0, 0x11, 0xc0001bb430, 0x40bda6, 0xc0001bb3d0)
	/go/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine/actuator.go:66 +0xfb
github.com/openshift/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine.(*OvirtActuator).Exists(0xc0000c34a0, 0x2172480, 0xc000040070, 0x0, 0xc0003eb080, 0x1, 0x0, 0x213f500)
	/go/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine/actuator.go:155 +0x1fe
github.com/openshift/cluster-api/pkg/controller/machine.(*ReconcileMachine).Reconcile(0xc0003aee80, 0xc00046a920, 0x15, 0xc00046a900, 0x16, 0xc0001bbcd8, 0xc00010c900, 0xc00010c658, 0x2149420)
	/go/cluster-api-provider-ovirt/vendor/github.com/openshift/cluster-api/pkg/controller/machine/controller.go:277 +0x766
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc0000c35e0, 0x1ccca00, 0xc0005444a0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc0000c35e0, 0x430200)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc0000c35e0)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00015da80)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00015da80, 0x3b9aca00, 0x0, 0x1, 0xc00009a180)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00015da80, 0x3b9aca00, 0xc00009a180)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157 +0x32e
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x10fe068]

Comment 3 Jan Zmeskal 2020-05-14 09:35:38 UTC
Hello Roy, I have a few questions about this:
1. Which container specifically do you have in mind here?
2. How can this be reproduced? I have an idea about it but I'd like you to chime in. My idea is:
2.1 Start installation as usual
2.2 After bootstrap and master nodes have been created in RHV, block ports 80 and 443 on RHV engine machine
2.3 Verify that the container (specify which one please) outputs an error, but does not exit with a Go panic.

Does this make sense?

Comment 4 Roy Golan 2020-05-18 18:58:01 UTC
(In reply to Jan Zmeskal from comment #3)
> Hello Roy, I have a few questions to this:
> 1. Which container specifically do you have in mind here?
> 2. How can this be reproduced? I have an idea about it but I'd like you to
> chime in. My idea is:
> 2.1 Start installation as usual
> 2.2 After bootstrap and master nodes have been created in RHV, block ports
> 80 and 443 on RHV engine machine
> 2.3 Verify that the container (specify which one please) outputs an error, but
> does not exit with a Go panic.

We are talking about cluster-api-provider-ovirt here, specifically the machine-controller container.
You can get the logs with:

 oc logs $(oc get pods -l api=clusterapi -n openshift-machine-api -o name) -c machine-controller

> Does this make sense?

I suggest you first wait a bit and see that this pod is up, running, and monitoring
the instances, and then continue as you suggested.
An additional step can be to remove the firewall block and see if the container renews the connection.

Comment 5 Sandro Bonazzola 2020-05-19 08:50:54 UTC
Also tracked in https://issues.redhat.com/browse/OCPRHV-62

Comment 6 Jan Zmeskal 2020-05-22 09:40:13 UTC
Verified with:
openshift-install-linux-4.5.0-0.nightly-2020-05-22-054554
rhvm-4.3.10.3-0.1.master.el7.noarch

Verification steps:
1. Have a running OCP on RHV cluster
2. oc logs $(oc get pods -l api=clusterapi -n openshift-machine-api -o name) -c machine-controller
3. Make sure that the machine-controller container is continuously updating the RHV machine status
4. On the RHV engine machine, do the following:
firewall-cmd --remove-service=ovirt-http
firewall-cmd --remove-service=ovirt-https
5. Make sure the machine-controller container does not panic and exit, and instead logs error messages:

I0522 09:30:38.992218       1 controller.go:164] Reconciling Machine "six-lf8tg-worker-0-xx4r8"
I0522 09:30:38.992299       1 controller.go:376] Machine "six-lf8tg-worker-0-xx4r8" in namespace "openshift-machine-api" doesn't specify "cluster.k8s.io/cluster-name" label, assuming nil cluster
E0522 09:30:41.063606       1 machineservice.go:307] Failed to fetch VM by name
E0522 09:30:41.063718       1 controller.go:279] Failed to check if machine "six-lf8tg-worker-0-xx4r8" exists: Post https://<censored>/ovirt-engine/sso/oauth/token: dial tcp <censored>:443: connect: no route to host
{"level":"error","ts":1590139841.0640545,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"machine_controller","request":"openshift-machine-api/six-lf8tg-worker-0-xx4r8","error":"Post https://system-ge-6.rhev.lab.eng.brq.redhat.com/ovirt-engine/sso/oauth/token: dial tcp 10.37.142.36:443: connect: no route to host","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/cluster-api-provider-ovirt/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

6. On the RHV engine machine:
firewall-cmd --add-service=ovirt-http
firewall-cmd --add-service=ovirt-https
7. Make sure the machine-controller resumes normal operation

Comment 7 Brian Ward 2020-06-07 19:46:12 UTC
My lab reproduces this error pretty reliably, _I think_ because it goes through my /etc/resolv.conf file looking for a valid answer for my ovirt engine. The first entry misses, so it hits this error every time. That's my guess here.

This is my read because the coredns file has the following:

        forward . /etc/resolv.conf {
            policy sequential
        }

I think this means it hits the service IP 172.30.0.10, goes to an endpoint pod, and the pod then hits the first entry for forwarding and just fails on that entry.

Inside the dns pod:

sh-4.2# cat /etc/resolv.conf 
search yarp.example.com
nameserver 192.168.25.220
nameserver 192.168.25.50
nameserver 8.8.8.8

The .220 fails and the service returns that failure before trying the next server, the .50, which would resolve correctly.

When I replace the forward block in the coredns file with the following, everything works fine:

        forward . 192.168.25.50

I am on 4.4.5 GA. 


While I am inclined to think that this bugfix will resolve the problem, I also wonder if the use of forward with policy sequential in the coredns config is a good thing.
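
For illustration only (not something I have tested, and I believe the Corefile on OCP is managed by the DNS operator, so manual edits may not persist), inlining the upstreams so that the working .50 is tried first would be the sequential-policy variant of the same workaround:

        forward . 192.168.25.50 192.168.25.220 8.8.8.8 {
            policy sequential
        }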


E0606 18:41:01.531691       1 controller.go:279] Failed to check if machine "yarp-vmg2m-master-1" exists: Post https://ovirt-engine-1.example.com/ovirt-engine/sso/oauth/token: dial tcp: lookup ovirt-engine-1.example.com on 172.30.0.10:53: no such host
{"level":"error","ts":1591468861.5317755,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"machine_controller","request":"openshift-machine-api/yarp-vmg2m-master-1","error":"Post https://ovirt-engine-1.example.com/ovirt-engine/sso/oauth/token: dial tcp: lookup ovirt-engine-1.example.com on 172.30.0.10:53: no such host","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/cluster-api-provider-ovirt/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
I0606 18:41:02.532256       1 controller.go:164] Reconciling Machine "yarp-vmg2m-master-2"
I0606 18:41:02.532312       1 controller.go:376] Machine "yarp-vmg2m-master-2" in namespace "openshift-machine-api" doesn't specify "cluster.k8s.i
E0606 18:41:02.541945       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 267 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1c57b60, 0x2fc2b20)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x1c57b60, 0x2fc2b20)
	/opt/rh/go-toolset-1.13/root/usr/lib/go-toolset-1.13-golang/src/runtime/panic.go:679 +0x1b2
github.com/ovirt/go-ovirt.(*Connection).testToken(0xc00012a000, 0x0, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/github.com/ovirt/go-ovirt/connection.go:96 +0x238
github.com/ovirt/go-ovirt.(*Connection).Test(0xc00012a000, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/github.com/ovirt/go-ovirt/connection.go:75 +0x2f
github.com/openshift/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine.(*OvirtActuator).getConnection(0xc00012af00, 0xc000565320, 0x15, 0xc0004224a0, 0x11, 0x400, 0x7febdc8a6200, 0xc0006853e8)
	/go/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine/actuator.go:66 +0xfb
github.com/openshift/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine.(*OvirtActuator).Exists(0xc00012af00, 0x21755e0, 0xc0000b4038, 0x0, 0xc000462580, 0x1, 0x0, 0x2142660)
	/go/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine/actuator.go:155 +0x1fe
github.com/openshift/cluster-api/pkg/controller/machine.(*ReconcileMachine).Reconcile(0xc0000bcf00, 0xc000565320, 0x15, 0xc000565260, 0x13, 0xc000685cd8, 0xc00056a1b0, 0xc0001200a8, 0x214c580)
	/go/cluster-api-provider-ovirt/vendor/github.com/openshift/cluster-api/pkg/controller/machine/controller.go:277 +0x766
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00012afa0, 0x1ccee00, 0xc0000aeaa0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00012afa0, 0xc000444500)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00012afa0)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00025e0d0)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00025e0d0, 0x3b9aca00, 0x0, 0x45b401, 0xc000096180)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00025e0d0, 0x3b9aca00, 0xc000096180)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157 +0x32e
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x40 pc=0x1101198]

goroutine 267 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x1c57b60, 0x2fc2b20)
	/opt/rh/go-toolset-1.13/root/usr/lib/go-toolset-1.13-golang/src/runtime/panic.go:679 +0x1b2
github.com/ovirt/go-ovirt.(*Connection).testToken(0xc00012a000, 0x0, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/github.com/ovirt/go-ovirt/connection.go:96 +0x238
github.com/ovirt/go-ovirt.(*Connection).Test(0xc00012a000, 0x0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/github.com/ovirt/go-ovirt/connection.go:75 +0x2f
github.com/openshift/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine.(*OvirtActuator).getConnection(0xc00012af00, 0xc000565320, 0x15, 0xc0004224a0, 0x11, 0x400, 0x7febdc8a6200, 0xc0006853e8)
	/go/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine/actuator.go:66 +0xfb
github.com/openshift/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine.(*OvirtActuator).Exists(0xc00012af00, 0x21755e0, 0xc0000b4038, 0x0, 0xc000462580, 0x1, 0x0, 0x2142660)
	/go/cluster-api-provider-ovirt/pkg/cloud/ovirt/machine/actuator.go:155 +0x1fe
github.com/openshift/cluster-api/pkg/controller/machine.(*ReconcileMachine).Reconcile(0xc0000bcf00, 0xc000565320, 0x15, 0xc000565260, 0x13, 0xc000685cd8, 0xc00056a1b0, 0xc0001200a8, 0x214c580)
	/go/cluster-api-provider-ovirt/vendor/github.com/openshift/cluster-api/pkg/controller/machine/controller.go:277 +0x766
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc00012afa0, 0x1ccee00, 0xc0000aeaa0, 0x0)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216 +0x162
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc00012afa0, 0xc000444500)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192 +0xcb
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker(0xc00012afa0)
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00025e0d0)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00025e0d0, 0x3b9aca00, 0x0, 0x45b401, 0xc000096180)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00025e0d0, 0x3b9aca00, 0xc000096180)
	/go/cluster-api-provider-ovirt/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
	/go/cluster-api-provider-ovirt/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:157 +0x32e

Comment 8 Jan Zmeskal 2020-06-08 08:24:03 UTC
Hi Brian, the backport of this fix is being tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1830486 and I see it was moved to VERIFIED on the 1st of June. I don't know exactly which OCP z-stream it will land in, but it's most probably not OCP 4.4.5. Could you please verify whether you still hit this bug with the latest OCP 4.4 nightly build?

Anyway, I have the impression from your comment that what you are describing is actually a different issue - the coredns pod failing to contact other DNS servers if the first one is unhealthy. I suppose there might be several possible root causes (wrong configuration of the forward plugin, a bug in coredns, etc.). Therefore I believe it merits opening a new BZ and continuing the discussion there. Could you please open such a BZ?

Comment 9 Brian Ward 2020-06-16 11:32:51 UTC
Jan,

You are correct. The DNS problem is a secondary issue I am still having. I have confirmed that this particular issue (nil pointer dereference) is resolved in 4.4.0-0.nightly-2020-06-01-021027.

I'll open another BZ for the DNS issue, as this last install did succeed in bringing up two worker nodes before going back to DNS failures and failing to bring up the third worker.

Thanks,
Brian

Comment 10 errata-xmlrpc 2020-07-13 17:32:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

