Bug 1999668
Summary: | openshift-install destroy cluster panics when given invalid credentials to cloud provider (Azure Stack Hub) | |
---|---|---|---|
Product: | OpenShift Container Platform | Reporter: | Mike Gahagan <mgahagan> |
Component: | Installer | Assignee: | Kiran Thyagaraja <kiran> |
Installer sub component: | openshift-installer | QA Contact: | Mike Gahagan <mgahagan> |
Status: | CLOSED ERRATA | Docs Contact: | |
Severity: | low | ||
Priority: | medium | CC: | kiran, mstaeble, padillon |
Version: | 4.9 | ||
Target Milestone: | --- | ||
Target Release: | 4.10.0 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2022-03-10 16:06:30 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Description
Mike Gahagan
2021-08-31 14:18:32 UTC
Can you be more specific about the invalid credentials? I tried:

- removing ~/.azure/osServicePrincipal.json
- changing keys in the creds to be invalid
- using public Azure credentials

but all of these were handled gracefully.

I'm able to reproduce the same issue using a bad service principal on an IPI cluster in AzureCloud:

```
INFO Credentials loaded from file "/home/m/.azure/osServicePrincipal.json"
E0831 17:13:27.809892 456240 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 1 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0xe7eb440, 0x19bd1d20)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0xe7eb440, 0x19bd1d20)
    /usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/openshift/installer/pkg/destroy/azure.deleteResourceGroup(0x11cc6970, 0xc0002806c0, 0x11b1d7e0, 0xc000b71ad0, 0x11b3e420, 0xc000a38990, 0x0, 0x0, 0xdf8475800, 0xd18c2e2800, ...)
    /go/src/github.com/openshift/installer/pkg/destroy/azure/azure.go:470 +0x1c7
github.com/openshift/installer/pkg/destroy/azure.(*ClusterUninstaller).Run.func2(0x11cc6970, 0xc000280660)
    /go/src/github.com/openshift/installer/pkg/destroy/azure/azure.go:157 +0x158
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000f518c8)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000f518c8, 0x11b3dda0, 0xc000f56e70, 0xc000280601, 0xc00128c060)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000f518c8, 0x3b9aca00, 0x0, 0xc043cbc5d82c7e01, 0xc00128c060)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x11cc6970, 0xc000280660, 0xc000f519d8, 0x3b9aca00, 0x0, 0xc000f54a01)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99
github.com/openshift/installer/pkg/destroy/azure.(*ClusterUninstaller).Run(0xc000724800, 0x0, 0x0)
    /go/src/github.com/openshift/installer/pkg/destroy/azure/azure.go:153 +0x2da
main.runDestroyCmd(0x7ffd135d414c, 0x18, 0xc000bbd260, 0x6099e0)
    /go/src/github.com/openshift/installer/cmd/openshift-install/destroy.go:64 +0xb7
main.newDestroyClusterCmd.func1(0xc0009b7080, 0xc00049f740, 0x0, 0x2)
    /go/src/github.com/openshift/installer/cmd/openshift-install/destroy.go:50 +0x79
github.com/spf13/cobra.(*Command).execute(0xc0009b7080, 0xc00049f6e0, 0x2, 0x2, 0xc0009b7080, 0xc00049f6e0)
    /go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0xc0009b6580, 0xc000d1fde8, 0x1, 0x1)
    /go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:958 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
    /go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:895
main.installerMain()
    /go/src/github.com/openshift/installer/cmd/openshift-install/main.go:72 +0x2fe
main.main()
    /go/src/github.com/openshift/installer/cmd/openshift-install/main.go:50 +0x259
```
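The crashing frame, pkg/destroy/azure/azure.go:470 inside deleteResourceGroup, points at a nil pointer dereference right after a cloud API call that the rejected credentials cause to fail. Below is a minimal, runnable sketch of that general failure pattern; every type and name in it is a hypothetical stand-in for illustration, not the installer's actual code:

```go
// Hypothetical sketch of the failure pattern implied by the trace above:
// a cloud API call rejects the credentials and returns (nil, err), and the
// result is dereferenced before the error is checked.
package main

import (
	"errors"
	"fmt"
)

// deleteFuture is a hypothetical stand-in for an Azure SDK response type.
type deleteFuture struct {
	Status string
}

// requestDelete simulates the SDK call failing authentication: on error
// the returned pointer is nil.
func requestDelete(group string) (*deleteFuture, error) {
	return nil, errors.New("adal: invalid client credentials")
}

func deleteResourceGroup(group string) error {
	future, err := requestDelete(group)
	// BUG: future is used before err is checked. With rejected credentials
	// future is nil, so future.Status panics with
	// "invalid memory address or nil pointer dereference".
	fmt.Printf("deleting %s: %s\n", group, future.Status)
	return err
}

func main() {
	_ = deleteResourceGroup("example-rg")
}
```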
The logged panic is followed by the process crash itself, with the same stack:

```
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0xd075f67]

goroutine 1 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
panic(0xe7eb440, 0x19bd1d20)
    /usr/lib/golang/src/runtime/panic.go:965 +0x1b9
github.com/openshift/installer/pkg/destroy/azure.deleteResourceGroup(0x11cc6970, 0xc0002806c0, 0x11b1d7e0, 0xc000b71ad0, 0x11b3e420, 0xc000a38990, 0x0, 0x0, 0xdf8475800, 0xd18c2e2800, ...)
    /go/src/github.com/openshift/installer/pkg/destroy/azure/azure.go:470 +0x1c7
github.com/openshift/installer/pkg/destroy/azure.(*ClusterUninstaller).Run.func2(0x11cc6970, 0xc000280660)
    /go/src/github.com/openshift/installer/pkg/destroy/azure/azure.go:157 +0x158
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1()
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0x37
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000f518c8)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000f518c8, 0x11b3dda0, 0xc000f56e70, 0xc000280601, 0xc00128c060)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000f518c8, 0x3b9aca00, 0x0, 0xc043cbc5d82c7e01, 0xc00128c060)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext(0x11cc6970, 0xc000280660, 0xc000f519d8, 0x3b9aca00, 0x0, 0xc000f54a01)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:185 +0xa6
k8s.io/apimachinery/pkg/util/wait.UntilWithContext(...)
    /go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:99
github.com/openshift/installer/pkg/destroy/azure.(*ClusterUninstaller).Run(0xc000724800, 0x0, 0x0)
    /go/src/github.com/openshift/installer/pkg/destroy/azure/azure.go:153 +0x2da
main.runDestroyCmd(0x7ffd135d414c, 0x18, 0xc000bbd260, 0x6099e0)
    /go/src/github.com/openshift/installer/cmd/openshift-install/destroy.go:64 +0xb7
main.newDestroyClusterCmd.func1(0xc0009b7080, 0xc00049f740, 0x0, 0x2)
    /go/src/github.com/openshift/installer/cmd/openshift-install/destroy.go:50 +0x79
github.com/spf13/cobra.(*Command).execute(0xc0009b7080, 0xc00049f6e0, 0x2, 0x2, 0xc0009b7080, 0xc00049f6e0)
    /go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:854 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0xc0009b6580, 0xc000d1fde8, 0x1, 0x1)
    /go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:958 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
    /go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:895
main.installerMain()
    /go/src/github.com/openshift/installer/cmd/openshift-install/main.go:72 +0x2fe
main.main()
    /go/src/github.com/openshift/installer/cmd/openshift-install/main.go:50 +0x259
```
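Incidentally, the trace appears twice because the destroy loop runs under k8s.io/apimachinery's runtime.HandleCrash, which recovers the panic, logs it (the "Observed a panic" block), and then re-raises it so the process still crashes (the second trace). A stripped-down sketch of that log-then-repanic pattern:

```go
// Simplified stand-in for k8s.io/apimachinery's runtime.HandleCrash, shown
// here only to explain why the same stack appears twice in the log above.
package main

import "log"

func handleCrash() {
	if r := recover(); r != nil {
		// The real library also dumps the goroutine stack here,
		// producing the first copy of the trace.
		log.Printf("Observed a panic: %v", r)
		// Re-raise: the Go runtime prints the stack a second time
		// and the process exits with SIGSEGV.
		panic(r)
	}
}

func main() {
	defer handleCrash()
	var p *int
	_ = *p // nil pointer dereference, as in deleteResourceGroup
}
```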
```
[m@fedora 17:13:27 ~/tests/49-azure-install]$ ./openshift-install version
./openshift-install 4.9.0-0.nightly-2021-08-29-010334
built from commit a8aa8384cb9af7edb5f19f8da324276c9df2f56d
release image registry.ci.openshift.org/ocp/release@sha256:57c87dff1c29de881608160da96ff4243ce05444c8271cddca41006191d70aac
release architecture amd64
```

If I enter the same information after deleting the osServicePrincipal.json file, I get a panic as well:

```
[m@fedora 17:13:43 ~/tests/49-azure-install]$ ./openshift-install destroy cluster --dir clusters/mgahagan-143108
? azure subscription id XXXXXXXX
? azure tenant id XXXXXXXX
? azure service principal client id XXXXXXX
? azure service principal client secret [? for help] *****
INFO Saving user credentials to "/home/m/.azure/osServicePrincipal.json"
INFO Credentials loaded from file "/home/m/.azure/osServicePrincipal.json"
E0831 17:18:09.066616 456410 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
```

In case it helps: the subscription, tenant, and service principal ID and secret are all valid values as far as Azure is concerned, but the service principal cannot install clusters. As is the case with ASH, I can delete the cluster once I use the correct information.

So, to make sure I am understanding this correctly: these are valid public Azure credentials, but the credentials do not have sufficient permissions?

Yes, those are valid Azure credentials, but without sufficient permissions to create a cluster.

I am lowering the severity of this to low, as it only affects attempts to destroy a cluster that was never created.

Just to clarify: the cluster I was trying to delete did in fact exist, but the service principal was not valid for the Azure Stack Cloud I was using. The SP was valid on the Azure Public Cloud, however.

(In reply to Mike Gahagan from comment #9)
> just to clarify, the cluster I was trying to delete did in fact exist but
> the service principal was not valid for the Azure Stack Cloud I was using.
> The SP was valid on the Azure Public Cloud however.

Thanks. I think the severity is the same, though. Even if the installer did not crash, the destroy still would not have succeeded with the wrong credentials.

Confirmed that attempts to destroy an ASH cluster now fail as expected with a "FATAL Failed to destroy cluster:" error rather than panicking, and that the cluster can be destroyed once valid credentials are provided. Tested with 4.10.0-0.nightly-2021-12-14-083101.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056
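The verified 4.10 behavior, a "FATAL Failed to destroy cluster:" error instead of a panic, is consistent with checking the API error before touching the response. A minimal sketch of that guarded pattern, again with hypothetical stand-in names rather than the actual patch:

```go
// Hypothetical sketch of the guarded pattern consistent with the verified
// behavior: check the error before any dereference, and surface a fatal
// error instead of crashing. Not the installer's actual 4.10 code.
package main

import (
	"errors"
	"fmt"
	"log"
)

type deleteFuture struct {
	Status string
}

func requestDelete(group string) (*deleteFuture, error) {
	return nil, errors.New("adal: invalid client credentials")
}

func deleteResourceGroup(group string) error {
	future, err := requestDelete(group)
	if err != nil {
		// Fail here, before future (nil on error) is ever dereferenced.
		return fmt.Errorf("failed to delete resource group %s: %w", group, err)
	}
	fmt.Printf("deleting %s: %s\n", group, future.Status)
	return nil
}

func main() {
	if err := deleteResourceGroup("example-rg"); err != nil {
		// Mirrors the observed "FATAL Failed to destroy cluster:" output.
		log.Fatalf("Failed to destroy cluster: %v", err)
	}
}
```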