Bug 1867148 - descheduler pod has panic error: runtime.boundsError
Summary: descheduler pod has panic error: runtime.boundsError
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Mike Dame
QA Contact: zhou ying
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-07 13:51 UTC by zhou ying
Modified: 2020-10-27 16:26 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:26:34 UTC
Target Upstream Version:




Links:
GitHub openshift/descheduler pull 36 (closed): Bug 1867148: Pull upstream changes (RemoveDuplicates panic bugfix), last updated 2020-08-25 11:54:22 UTC
Red Hat Product Errata RHBA-2020:4196, last updated 2020-10-27 16:26:53 UTC

Description zhou ying 2020-08-07 13:51:26 UTC
Description of problem:
The descheduler pod goes into CrashLoopBackOff with the following error:
E0807 13:35:30.950616       1 runtime.go:78] Observed a panic: runtime.boundsError{x:0, y:0, signed:true, code:0x0} (runtime error: index out of range [0] with length 0)
goroutine 1 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x159d940, 0xc000958400)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x159d940, 0xc000958400)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/runtime/panic.go:969 +0x166
sigs.k8s.io/descheduler/pkg/descheduler/strategies.RemoveDuplicatePods(0x18cf720, 0xc000042068, 0x18fc020, 0xc0003a8c60, 0xc000136001, 0x0, 0x0, 0xc000158000, 0x6, 0x6, ...)
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/strategies/duplicates.go:88 +0xe54
sigs.k8s.io/descheduler/pkg/descheduler.RunDeschedulerStrategies.func1()
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/descheduler.go:107 +0x2d0
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001f3b38)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006cbb38, 0x1893000, 0xc0000af500, 0x1, 0xc0000d6000)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001f3b38, 0xdf8475800, 0x0, 0x1, 0xc0000d6000)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
sigs.k8s.io/descheduler/pkg/descheduler.RunDeschedulerStrategies(0x18cf720, 0xc000042068, 0xc0003b0080, 0xc0003acc30, 0xc0003546e0, 0xe, 0xc0000d6000, 0xc0001f3cd8, 0x1373c5a)
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/descheduler.go:82 +0x468
sigs.k8s.io/descheduler/pkg/descheduler.Run(0xc0003b0080, 0xc0001f3d58, 0xc0001f3d58)
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/descheduler.go:60 +0x17d
sigs.k8s.io/descheduler/cmd/descheduler/app.Run(...)
	/go/src/sigs.k8s.io/descheduler/cmd/descheduler/app/server.go:60
sigs.k8s.io/descheduler/cmd/descheduler/app.NewDeschedulerCommand.func1(0xc0002f5b80, 0xc0003ac9f0, 0x0, 0x3)
	/go/src/sigs.k8s.io/descheduler/cmd/descheduler/app/server.go:44 +0x60
github.com/spf13/cobra.(*Command).execute(0xc0002f5b80, 0xc00003c0d0, 0x3, 0x3, 0xc0002f5b80, 0xc00003c0d0)
	/go/src/sigs.k8s.io/descheduler/vendor/github.com/spf13/cobra/command.go:830 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc0002f5b80, 0x23c1a30, 0x0, 0x0)
	/go/src/sigs.k8s.io/descheduler/vendor/github.com/spf13/cobra/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/sigs.k8s.io/descheduler/vendor/github.com/spf13/cobra/command.go:864
main.main()
	/go/src/sigs.k8s.io/descheduler/cmd/descheduler/descheduler.go:32 +0xba
panic: runtime error: index out of range [0] with length 0 [recovered]
	panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x159d940, 0xc000958400)
	/opt/rh/go-toolset-1.14/root/usr/lib/go-toolset-1.14-golang/src/runtime/panic.go:969 +0x166
sigs.k8s.io/descheduler/pkg/descheduler/strategies.RemoveDuplicatePods(0x18cf720, 0xc000042068, 0x18fc020, 0xc0003a8c60, 0xc000136001, 0x0, 0x0, 0xc000158000, 0x6, 0x6, ...)
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/strategies/duplicates.go:88 +0xe54
sigs.k8s.io/descheduler/pkg/descheduler.RunDeschedulerStrategies.func1()
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/descheduler.go:107 +0x2d0
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001f3b38)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006cbb38, 0x1893000, 0xc0000af500, 0x1, 0xc0000d6000)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001f3b38, 0xdf8475800, 0x0, 0x1, 0xc0000d6000)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/sigs.k8s.io/descheduler/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
sigs.k8s.io/descheduler/pkg/descheduler.RunDeschedulerStrategies(0x18cf720, 0xc000042068, 0xc0003b0080, 0xc0003acc30, 0xc0003546e0, 0xe, 0xc0000d6000, 0xc0001f3cd8, 0x1373c5a)
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/descheduler.go:82 +0x468
sigs.k8s.io/descheduler/pkg/descheduler.Run(0xc0003b0080, 0xc0001f3d58, 0xc0001f3d58)
	/go/src/sigs.k8s.io/descheduler/pkg/descheduler/descheduler.go:60 +0x17d
sigs.k8s.io/descheduler/cmd/descheduler/app.Run(...)
	/go/src/sigs.k8s.io/descheduler/cmd/descheduler/app/server.go:60
sigs.k8s.io/descheduler/cmd/descheduler/app.NewDeschedulerCommand.func1(0xc0002f5b80, 0xc0003ac9f0, 0x0, 0x3)
	/go/src/sigs.k8s.io/descheduler/cmd/descheduler/app/server.go:44 +0x60
github.com/spf13/cobra.(*Command).execute(0xc0002f5b80, 0xc00003c0d0, 0x3, 0x3, 0xc0002f5b80, 0xc00003c0d0)
	/go/src/sigs.k8s.io/descheduler/vendor/github.com/spf13/cobra/command.go:830 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0xc0002f5b80, 0x23c1a30, 0x0, 0x0)
	/go/src/sigs.k8s.io/descheduler/vendor/github.com/spf13/cobra/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/sigs.k8s.io/descheduler/vendor/github.com/spf13/cobra/command.go:864
main.main()
	/go/src/sigs.k8s.io/descheduler/cmd/descheduler/descheduler.go:32 +0xba
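
For context, here is a minimal, purely illustrative Go sketch (not the actual duplicates.go code; the helper name and inputs are hypothetical) of the unguarded slice index pattern that produces exactly this "index out of range [0] with length 0" panic, together with the length check that avoids it:

package main

import "fmt"

// podOwnerKey is a hypothetical helper that builds a grouping key from a
// pod's owner references. ownerRefs can be empty (for example, a bare pod
// with no controller), which is the class of input that triggers the panic.
func podOwnerKey(ownerRefs []string) string {
	// Unguarded access would panic when len(ownerRefs) == 0:
	//   return ownerRefs[0]
	//
	// Guarded version: handle pods without owners explicitly.
	if len(ownerRefs) == 0 {
		return ""
	}
	return ownerRefs[0]
}

func main() {
	fmt.Println(podOwnerKey([]string{"ReplicaSet/hello"})) // "ReplicaSet/hello"
	fmt.Println(podOwnerKey(nil))                          // "" instead of a panic
}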


Version-Release number of selected component (if applicable):
clusterkubedescheduleroperator.4.6.0-202008061759.p0

How reproducible:
always

Steps to Reproduce:
1. Install the descheduler operator from webconsole
2. Edit the kubedescheduler operator as follows:
  strategies:
  - name: RemoveDuplicates
  - name: RemovePodsViolatingInterPodAntiAffinity
  - name: RemovePodsViolatingNodeTaints

3. As a normal user, create a pod:
apiVersion: v1
kind: Pod
metadata:
  name: dedicated-nodes
  annotations:
    descheduler.alpha.kubernetes.io/evict: "true"
spec:
  containers:
    - image: "docker.io/ocpqe/hello-pod"
      name: hello-pod

4. Check which node the pod is running on:
   `oc get po -o wide`

5. Taint the node with:
   `oc adm taint node <node1> dedicated=special-user:NoSchedule`

6. Check the descheduler pod's log
   `oc logs -f po/cluster-f9d487d4c-t9zs2` 

Actual results:
6. The descheduler pod panics with the error shown in the description above.

Expected results:
6. The descheduler pod keeps running normally, without panicking.

Additional info:

Comment 1 Mike Dame 2020-08-10 13:42:41 UTC
I think I have an upstream fix for this in https://github.com/kubernetes-sigs/descheduler/pull/369

Will pull this in with the final 1.19 GA rebase

Comment 2 Mike Dame 2020-08-10 20:24:18 UTC
Downstream PR created at https://github.com/openshift/descheduler/pull/36
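
Not taken from the linked PRs; just an illustrative, table-driven Go test sketch of how a regression guard for this panic could look, asserting that the hypothetical grouping helper from the sketch above tolerates pods with no owner references:

package strategies

import "testing"

// podOwnerKey mirrors the hypothetical helper sketched in the bug
// description above; it must tolerate pods with no owner references.
func podOwnerKey(ownerRefs []string) string {
	if len(ownerRefs) == 0 {
		return ""
	}
	return ownerRefs[0]
}

func TestPodOwnerKeyHandlesEmptyOwners(t *testing.T) {
	cases := []struct {
		name      string
		ownerRefs []string
		want      string
	}{
		{name: "controller-owned pod", ownerRefs: []string{"ReplicaSet/hello"}, want: "ReplicaSet/hello"},
		{name: "bare pod with no owners", ownerRefs: nil, want: ""}, // this input previously panicked
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := podOwnerKey(tc.ownerRefs); got != tc.want {
				t.Errorf("podOwnerKey(%v) = %q, want %q", tc.ownerRefs, got, tc.want)
			}
		})
	}
}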

Comment 6 zhou ying 2020-08-18 01:32:46 UTC
Can't reproduce the issue now. Will verify it.

[root@dhcp-140-138 ~]# oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202008111711.p0   Kube Descheduler Operator   4.6.0-202008111711.p0              Succeeded


[root@dhcp-140-138 ~]# oc logs -f po/cluster-59668b45bf-5mg6r |grep zhouy
I0818 01:28:57.046840       1 evictions.go:148] Evicted pod: "dedicated-nodes" in namespace "zhouy" (NodeTaint)
I0818 01:28:57.047119       1 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"zhouy", Name:"dedicated-nodes", UID:"a984ff74-bde2-4110-b33a-317010926fe2", APIVersion:"v1", ResourceVersion:"1043743", FieldPath:""}): type: 'Normal' reason: 'Descheduled' pod evicted by sigs.k8s.io/descheduler (NodeTaint)

Comment 8 errata-xmlrpc 2020-10-27 16:26:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

