Bug 1752409
| Summary: | [ci][vsphere-upi] failed to create aws route53 record for ARRDATAIllegalIPv4Address | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | sheng.lao <shlao> |
| Component: | Installer | Assignee: | Joseph Callen <jcallen> |
| Installer sub component: | openshift-installer | QA Contact: | Johnny Liu <jialiu> |
| Status: | CLOSED DEFERRED | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | adahiya, jlebon |
| Version: | 4.3.0 | Keywords: | Reopened |
| Target Milestone: | --- | | |
| Target Release: | 4.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-01-30 21:34:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
sheng.lao 2019-09-16 10:09:22 UTC

I noticed this late last week while monitoring the vSphere prow jobs. It seems to occur randomly, and I was unable to reproduce it with the cluster bot. Maybe it's an issue with IPAM. I will investigate further.

This is specific to our test environment setup; moving to 4.3 but leaving as high priority.

Since investigating the current vSphere CI issues I have not seen these errors. We should certainly keep this BZ open; this is just an update.

How often is this occurring?

It occurs rarely during my duty of monitoring build-cop jobs.

I haven't seen this error for quite a while. Going to mark as NOTABUG and will reopen if I see it again.

Just saw this again now: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-vsphere-upi-serial-4.3/438

```
Error: Error applying plan:

12 error(s) occurred:

* module.dns.aws_route53_record.control_plane_nodes[2]: 1 error(s) occurred:

* aws_route53_record.control_plane_nodes.2: [ERR]: Error building changeset: InvalidChangeBatch: [Invalid Resource Record: FATAL problem: ARRDATAIllegalIPv4Address (Value is not a valid IPv4 address) encountered with '']
	status code: 400, request id: 8b376c38-db2b-40b7-9be0-747c3ebcd21c

* module.dns.aws_route53_record.control_plane_nodes[0]: 1 error(s) occurred:

* aws_route53_record.control_plane_nodes.0: [ERR]: Error building changeset: InvalidChangeBatch: [Invalid Resource Record: FATAL problem: ARRDATAIllegalIPv4Address (Value is not a valid IPv4 address) encountered with '']
	status code: 400, request id: e345d3ba-0abb-4407-a799-c87e5dda4db4

* module.dns.aws_route53_record.etcd_a_nodes[1]: 1 error(s) occurred:

* aws_route53_record.etcd_a_nodes.1: [ERR]: Error building changeset: InvalidChangeBatch: [Invalid Resource Record: FATAL problem: ARRDATAIllegalIPv4Address (Value is not a valid IPv4 address) encountered with '']
	status code: 400, request id: 06989f72-5581-4d85-9c15-a6c3eae8018f

* module.dns.aws_route53_record.api-internal: 1 error(s) occurred:

* aws_route53_record.api-internal: [ERR]: Error building changeset: InvalidChangeBatch: [Invalid Resource Record: FATAL problem: ARRDATAIllegalIPv4Address (Value is not a valid IPv4 address) encountered with '']
	status code: 400, request id: d48a38a0-ee51-4ca0-89e6-c42d9a0407d1

* module.dns.aws_route53_record.etcd_a_nodes[0]: 1 error(s) occurred:
...
```

Tentatively re-opening. This always happens if a job runs at 7:xx a.m. EST. I have no explanation for it.

https://ci-search-ci-search-next.svc.ci.openshift.org/?search=ARRDATAIllegalIPv4Address&maxAge=336h&context=2&type=all

There don't seem to be any failures in the last 2 weeks, so closing.