Bug 1695324
| Summary: | Unit test flake post 1.13 rebase | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Clayton Coleman <ccoleman> |
| Component: | Cloud Compute | Assignee: | Jan Chaloupka <jchaloup> |
| Status: | CLOSED ERRATA | QA Contact: | Jianwei Hou <jhou> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.1.0 | CC: | agarcial |
| Target Milestone: | --- | | |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-04 10:47:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Clayton Coleman
2019-04-02 20:12:08 UTC
Is it related to the machine-api components, or is the cause still unknown? Just asking since the bug is reported against the Cloud Compute component.

Unknown. It needs immediate triage and action.

- github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope TestIntermittentConnectionLoss: looks like a network issue:

  ```
  failed to create connection to unix socket: @kms-socket.sock, error: dial unix @kms-socket.sock: connect: connection refused
  ```

- github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/aws TestCreateDisk: just a wrong order of tags:

  ```
  Tags: [{
  +   Key: "kubernetes.io/cluster/clusterid.test",
  +   Value: "owned"
  + },{
      Key: "KubernetesCluster",
      Value: "clusterid.test"
  - },{
  -   Key: "kubernetes.io/cluster/clusterid.test",
  -   Value: "owned"
  }]
  ```

- github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/controller/podautoscaler TestLegacyScaleUpUnreadyLessScale: the test needs to be updated (the k8s 1.13 client-go patch implementation in generated fake clients changed a bit; this might be related).

- github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/kubelet/cm/devicemanager TestNewManagerImplStartProbeMode:

  ```
  E0402 19:35:19.844615   13396 plugin_watcher.go:120] error stat file /tmp/volume/device_plugin355861278/device-plugin.sock failed: stat /tmp/volume/device_plugin355861278/device-plugin.sock: no such file or directory when handling create event: "/tmp/volume/device_plugin355861278/device-plugin.sock": CREATE
  ```

Potential fix for `TestIntermittentConnectionLoss` (github.com/openshift/origin/vendor/k8s.io/apiserver/pkg/storage/value/encrypt/envelope): https://github.com/openshift/origin/pull/22463

Fix for `TestCreateDisk` (github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/cloudprovider/providers/aws): https://github.com/openshift/origin/pull/22477

Fix for `TestLegacyScaleUpUnreadyLessScale` (github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/controller/podautoscaler): https://github.com/openshift/origin/pull/22490
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758