Bug 1924020 - panic: runtime error: index out of range [0] with length 0
Summary: panic: runtime error: index out of range [0] with length 0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RHCOS
Version: 4.7
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Timothée Ravier
QA Contact: Michael Nguyen
URL:
Whiteboard:
Depends On: 1867987
Blocks:
 
Reported: 2021-02-02 12:34 UTC by Andreas Karis
Modified: 2021-11-19 00:31 UTC
CC List: 12 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1867987
Environment:
Last Closed: 2021-07-27 22:37:35 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github coreos toolbox pull 68 0 None closed rhcos-toolbox: Ignore podman '<no value>' output for empty labels 2021-02-15 09:44:27 UTC
Red Hat Product Errata RHSA-2021:2438 0 None None None 2021-07-27 22:38:06 UTC

Comment 1 Andreas Karis 2021-02-02 12:35:29 UTC
See https://bugzilla.redhat.com/show_bug.cgi?id=1867987

 Andreas Karis 2021-01-30 17:34:17 UTC

I hit the same in RHCOS 4.7:

[root@openshift-master-0 ~]# rpm -qa | grep toolbox
toolbox-0.0.8-2.rhaos4.7.el8.noarch

[root@openshift-master-0 ~]# cat ~/.toolboxrc 
REGISTRY=docker.io
IMAGE=centos:latest

[root@openshift-master-0 ~]# toolbox
.toolboxrc file detected, overriding defaults...
Spawning a container 'toolbox-root' with image 'docker.io/centos:latest'
Detected RUN label in the container image. Using that as the default...
panic: runtime error: index out of range [0] with length 0

goroutine 1 [running]:
panic(0x56048dd710a0, 0xc00028ebe0)
	/usr/lib/golang/src/runtime/panic.go:1064 +0x471 fp=0xc0002ad700 sp=0xc0002ad648 pc=0x56048c271cf1
runtime.goPanicIndex(0x0, 0x0)
	/usr/lib/golang/src/runtime/panic.go:88 +0xa5 fp=0xc0002ad748 sp=0xc0002ad700 pc=0x56048c26fb05
github.com/containers/libpod/pkg/domain/infra/abi.generateCommand(0x0, 0x0, 0xc00028f3e0, 0x1f, 0x7ffdecd47786, 0xc, 0x0, 0x0, 0x40056048dc2d4a0, 0xffffffffffffffff, ...)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/pkg/domain/infra/abi/containers_runlabel.go:197 +0x8c2 fp=0xc0002ad820 sp=0xc0002ad748 pc=0x56048d3b1c82
github.com/containers/libpod/pkg/domain/infra/abi.generateRunlabelCommand(0x0, 0x0, 0xc0001ce000, 0xc0005d7420, 0x0, 0x2, 0x0, 0x0, 0x0, 0x0, ...)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/pkg/domain/infra/abi/containers_runlabel.go:147 +0xcf fp=0xc0002ad978 sp=0xc0002ad820 pc=0x56048d3b0e5f
github.com/containers/libpod/pkg/domain/infra/abi.(*ContainerEngine).ContainerRunlabel(0xc0005b6780, 0x56048ded3cc0, 0xc000044098, 0x7ffdecd47793, 0x3, 0x7ffdecd47797, 0x17, 0xc0005d7420, 0x0, 0x2, ...)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/pkg/domain/infra/abi/containers_runlabel.go:34 +0x1b2 fp=0xc0002adb60 sp=0xc0002ad978 pc=0x56048d3b04d2
github.com/containers/libpod/cmd/podman/containers.runlabel(0x56048ec84940, 0xc0005d7400, 0x2, 0x4, 0x0, 0x0)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/cmd/podman/containers/runlabel.go:80 +0x169 fp=0xc0002adc88 sp=0xc0002adb60 pc=0x56048d441179
github.com/containers/libpod/vendor/github.com/spf13/cobra.(*Command).execute(0x56048ec84940, 0xc00003c0a0, 0x4, 0x4, 0x56048ec84940, 0xc00003c0a0)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/spf13/cobra/command.go:838 +0x455 fp=0xc0002add60 sp=0xc0002adc88 pc=0x56048cc793e5
github.com/containers/libpod/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x56048ec97720, 0xc000044098, 0x56048dc3bce0, 0x56048ed40e90)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/spf13/cobra/command.go:943 +0x319 fp=0xc0002ade38 sp=0xc0002add60 pc=0x56048cc79ee9
github.com/containers/libpod/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/spf13/cobra/command.go:883
github.com/containers/libpod/vendor/github.com/spf13/cobra.(*Command).ExecuteContext(...)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/spf13/cobra/command.go:876
main.Execute()
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/cmd/podman/root.go:90 +0xee fp=0xc0002adeb8 sp=0xc0002ade38 pc=0x56048d53e4ae
main.main()
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/cmd/podman/main.go:77 +0x18e fp=0xc0002adf88 sp=0xc0002adeb8 pc=0x56048d53ddee
runtime.main()
	/usr/lib/golang/src/runtime/proc.go:203 +0x202 fp=0xc0002adfe0 sp=0xc0002adf88 pc=0x56048c274772
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0002adfe8 sp=0xc0002adfe0 pc=0x56048c2a3881

goroutine 2 [force gc (idle)]:
runtime.gopark(0x56048de66980, 0x56048ed0d840, 0x1411, 0x1)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000066fb0 sp=0xc000066f90 pc=0x56048c274b46
runtime.goparkunlock(...)
	/usr/lib/golang/src/runtime/proc.go:310
runtime.forcegchelper()
	/usr/lib/golang/src/runtime/proc.go:253 +0xbb fp=0xc000066fe0 sp=0xc000066fb0 pc=0x56048c2749eb
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc000066fe8 sp=0xc000066fe0 pc=0x56048c2a3881
created by runtime.init.7
	/usr/lib/golang/src/runtime/proc.go:242 +0x37

goroutine 3 [GC sweep wait]:
runtime.gopark(0x56048de66980, 0x56048ed0ff00, 0x140c, 0x1)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc0000677a8 sp=0xc000067788 pc=0x56048c274b46
runtime.goparkunlock(...)
	/usr/lib/golang/src/runtime/proc.go:310
runtime.bgsweep(0xc00007e000)
	/usr/lib/golang/src/runtime/mgcsweep.go:89 +0x135 fp=0xc0000677d8 sp=0xc0000677a8 pc=0x56048c261075
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0000677e0 sp=0xc0000677d8 pc=0x56048c2a3881
created by runtime.gcenable
	/usr/lib/golang/src/runtime/mgc.go:214 +0x5e

goroutine 4 [GC scavenge wait]:
runtime.gopark(0x56048de66980, 0x56048ed0fec0, 0x140d, 0x1)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000067f78 sp=0xc000067f58 pc=0x56048c274b46
runtime.goparkunlock(...)
	/usr/lib/golang/src/runtime/proc.go:310
runtime.bgscavenge(0xc00007e000)
	/usr/lib/golang/src/runtime/mgcscavenge.go:285 +0x294 fp=0xc000067fd8 sp=0xc000067f78 pc=0x56048c25f6f4
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc000067fe0 sp=0xc000067fd8 pc=0x56048c2a3881
created by runtime.gcenable
	/usr/lib/golang/src/runtime/mgc.go:215 +0x80

goroutine 5 [finalizer wait]:
runtime.gopark(0x56048de66980, 0x56048ed40dd8, 0xc0001b1410, 0x1)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000066758 sp=0xc000066738 pc=0x56048c274b46
runtime.goparkunlock(...)
	/usr/lib/golang/src/runtime/proc.go:310
runtime.runfinq()
	/usr/lib/golang/src/runtime/mfinal.go:175 +0xa7 fp=0xc0000667e0 sp=0xc000066758 pc=0x56048c256b07
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0000667e8 sp=0xc0000667e0 pc=0x56048c2a3881
created by runtime.createfing
	/usr/lib/golang/src/runtime/mfinal.go:156 +0x63

goroutine 18 [GC worker (idle)]:
runtime.gopark(0x56048de66808, 0xc000487ef0, 0x1418, 0x0)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000068760 sp=0xc000068740 pc=0x56048c274b46
runtime.gcBgMarkWorker(0xc00004e000)
	/usr/lib/golang/src/runtime/mgc.go:1873 +0x105 fp=0xc0000687d8 sp=0xc000068760 pc=0x56048c25a505
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0000687e0 sp=0xc0000687d8 pc=0x56048c2a3881
created by runtime.gcBgMarkStartWorkers
	/usr/lib/golang/src/runtime/mgc.go:1821 +0x79

goroutine 8 [GC worker (idle)]:
runtime.gopark(0x56048de66808, 0xc00011a000, 0x1418, 0x0)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000068f60 sp=0xc000068f40 pc=0x56048c274b46
runtime.gcBgMarkWorker(0xc000050800)
	/usr/lib/golang/src/runtime/mgc.go:1873 +0x105 fp=0xc000068fd8 sp=0xc000068f60 pc=0x56048c25a505
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc000068fe0 sp=0xc000068fd8 pc=0x56048c2a3881
created by runtime.gcBgMarkStartWorkers
	/usr/lib/golang/src/runtime/mgc.go:1821 +0x79

goroutine 34 [GC worker (idle)]:
runtime.gopark(0x56048de66808, 0xc000520010, 0x1418, 0x0)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000062760 sp=0xc000062740 pc=0x56048c274b46
runtime.gcBgMarkWorker(0xc000053000)
	/usr/lib/golang/src/runtime/mgc.go:1873 +0x105 fp=0xc0000627d8 sp=0xc000062760 pc=0x56048c25a505
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0000627e0 sp=0xc0000627d8 pc=0x56048c2a3881
created by runtime.gcBgMarkStartWorkers
	/usr/lib/golang/src/runtime/mgc.go:1821 +0x79

goroutine 9 [GC worker (idle)]:
runtime.gopark(0x56048de66808, 0xc000520020, 0x1418, 0x0)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000069760 sp=0xc000069740 pc=0x56048c274b46
runtime.gcBgMarkWorker(0xc000055800)
	/usr/lib/golang/src/runtime/mgc.go:1873 +0x105 fp=0xc0000697d8 sp=0xc000069760 pc=0x56048c25a505
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0000697e0 sp=0xc0000697d8 pc=0x56048c2a3881
created by runtime.gcBgMarkStartWorkers
	/usr/lib/golang/src/runtime/mgc.go:1821 +0x79

goroutine 19 [GC worker (idle)]:
runtime.gopark(0x56048de66808, 0xc00011a010, 0x1418, 0x0)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000528760 sp=0xc000528740 pc=0x56048c274b46
runtime.gcBgMarkWorker(0xc000058000)
	/usr/lib/golang/src/runtime/mgc.go:1873 +0x105 fp=0xc0005287d8 sp=0xc000528760 pc=0x56048c25a505
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0005287e0 sp=0xc0005287d8 pc=0x56048c2a3881
created by runtime.gcBgMarkStartWorkers
	/usr/lib/golang/src/runtime/mgc.go:1821 +0x79

goroutine 10 [GC worker (idle)]:
runtime.gopark(0x56048de66808, 0xc00011a020, 0x1418, 0x0)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000069f60 sp=0xc000069f40 pc=0x56048c274b46
runtime.gcBgMarkWorker(0xc00005a800)
	/usr/lib/golang/src/runtime/mgc.go:1873 +0x105 fp=0xc000069fd8 sp=0xc000069f60 pc=0x56048c25a505
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc000069fe0 sp=0xc000069fd8 pc=0x56048c2a3881
created by runtime.gcBgMarkStartWorkers
	/usr/lib/golang/src/runtime/mgc.go:1821 +0x79

goroutine 55 [chan receive]:
runtime.gopark(0x56048de66780, 0xc000297df8, 0x7f2a162b170e, 0x2)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc0005cfed0 sp=0xc0005cfeb0 pc=0x56048c274b46
runtime.chanrecv(0xc000297da0, 0xc0005cffb0, 0xc00051c001, 0xc000297da0)
	/usr/lib/golang/src/runtime/chan.go:525 +0x2eb fp=0xc0005cff60 sp=0xc0005cfed0 pc=0x56048c243dbb
runtime.chanrecv2(0xc000297da0, 0xc0005cffb0, 0x0)
	/usr/lib/golang/src/runtime/chan.go:412 +0x2b fp=0xc0005cff90 sp=0xc0005cff60 pc=0x56048c243abb
github.com/containers/libpod/vendor/k8s.io/klog.(*loggingT).flushDaemon(0x56048ed12a80)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/k8s.io/klog/klog.go:1010 +0x8d fp=0xc0005cffd8 sp=0xc0005cff90 pc=0x56048cdd387d
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc0005cffe0 sp=0xc0005cffd8 pc=0x56048c2a3881
created by github.com/containers/libpod/vendor/k8s.io/klog.init.0
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 84 [syscall]:
syscall.Syscall6(0xe8, 0xa, 0xc00063fb74, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/golang/src/syscall/asm_linux_amd64.s:41 +0x5 fp=0xc00063faa8 sp=0xc00063faa0 pc=0x56048c2f7de5
github.com/containers/libpod/vendor/golang.org/x/sys/unix.EpollWait(0xa, 0xc00063fb74, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x74 fp=0xc00063fb18 sp=0xc00063faa8 pc=0x56048c5d5b34
github.com/containers/libpod/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc00028ec80, 0x0, 0x0, 0x0)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x93 fp=0xc00063fbe0 sp=0xc00063fb18 pc=0x56048c9b9d23
github.com/containers/libpod/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0001a01e0)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x1fa fp=0xc00064ffd8 sp=0xc00063fbe0 pc=0x56048c9b8f1a
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc00064ffe0 sp=0xc00064ffd8 pc=0x56048c2a3881
created by github.com/containers/libpod/vendor/github.com/fsnotify/fsnotify.NewWatcher
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a7

goroutine 85 [select]:
runtime.gopark(0x56048de669d0, 0x0, 0x1809, 0x1)
	/usr/lib/golang/src/runtime/proc.go:304 +0xe6 fp=0xc000525d40 sp=0xc000525d20 pc=0x56048c274b46
runtime.selectgo(0xc000525f38, 0xc000525eb4, 0x3, 0x0, 0x0)
	/usr/lib/golang/src/runtime/select.go:316 +0xc79 fp=0xc000525e68 sp=0xc000525d40 pc=0x56048c285069
github.com/containers/libpod/vendor/github.com/cri-o/ocicni/pkg/ocicni.(*cniNetworkPlugin).monitorConfDir(0xc0005e60c0, 0xc0004f6910)
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/cri-o/ocicni/pkg/ocicni/ocicni.go:151 +0x1a3 fp=0xc000525fd0 sp=0xc000525e68 pc=0x56048c9bb073
runtime.goexit()
	/usr/lib/golang/src/runtime/asm_amd64.s:1373 +0x1 fp=0xc000525fd8 sp=0xc000525fd0 pc=0x56048c2a3881
created by github.com/containers/libpod/vendor/github.com/cri-o/ocicni/pkg/ocicni.initCNI
	/builddir/build/BUILD/podman-f5c92373ec5e9d118cf6c098d1ae1b770e5055ae/_build/src/github.com/containers/libpod/vendor/github.com/cri-o/ocicni/pkg/ocicni/ocicni.go:252 +0x3cb
/bin/toolbox: line 120: 45278 Aborted                 sudo podman container runlabel --name "$TOOLBOX_NAME" RUN "$TOOLBOX_IMAGE" 2>&1
/bin/toolbox: failed to runlabel on image 'docker.io/centos:latest'
[root@openshift-master-0 ~]#

Comment 7 Andreas Karis 2021-01-30 17:38:05 UTC

And that looks like: https://github.com/containers/podman/issues/8038

Comment 8 Andreas Karis 2021-01-30 17:50:09 UTC

The problem is here:
https://github.com/coreos/toolbox/blame/f61c747c84ea4ea18b853aaab5e6318b2cd74162/rhcos-toolbox#L29

Podman returns "<no value>" and not "":
~~~
[root@openshift-master-0 ~]# sudo podman image inspect "docker.io/library/fedora" --format "{{.Labels.run}}"
<no value>
~~~
~~~
     26     local runlabel=$(image_runlabel)
     27     if ! container_exists; then
     28         echo "Spawning a container '$TOOLBOX_NAME' with image '$TOOLBOX_IMAGE'"
     29         if [[ -z "$runlabel" ]]; then
     30             container_run
     31             return
     32         else
     33             echo "Detected RUN label in the container image. Using that as the default..."
     34             container_runlabel
     35             return
     36         fi
     37     else
     38         echo "Container '$TOOLBOX_NAME' already exists. Trying to start..."
     39         echo "(To remove the container and start with a fresh toolbox, run: sudo podman rm '$TOOLBOX_NAME')"
     40     fi
~~~
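The failure mode of that `-z` test on line 29 can be reproduced standalone (a minimal sketch; the sentinel string comes from the podman output above):

~~~
#!/bin/bash
# podman prints the Go-template sentinel "<no value>" when the image
# has no "run" label; the -z test sees a non-empty string and takes
# the runlabel branch, which then hits the panic in podman.
runlabel='<no value>'

if [[ -z "$runlabel" ]]; then
    branch="no-label"          # expected path for unlabeled images
else
    branch="label-detected"    # buggy path actually taken
fi
echo "$branch"
~~~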

With that change, it works:
~~~
cp /bin/toolbox /root/toolbox
vi /root/toolbox
~~~

~~~
[root@openshift-master-0 ~]# diff -i /bin/toolbox /root/toolbox 
29c29
<         if [[ -z "$runlabel" ]]; then
---
>         if [[ -z "$runlabel" ]] || [ "$runlabel" == "<no value>" ]; then
~~~
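The corrected guard can be exercised without podman (a sketch using a stub `image_runlabel`; the real script runs `sudo podman image inspect "$TOOLBOX_IMAGE" --format '{{.Labels.run}}'`):

~~~
#!/bin/bash
# Sketch of the fixed check, with a stub that mimics podman's output
# for an image that has no "run" label.
image_runlabel() {
    echo '<no value>'
}

runlabel=$(image_runlabel)
# Treat both an empty label and podman's "<no value>" sentinel as absent.
if [[ -z "$runlabel" ]] || [[ "$runlabel" == "<no value>" ]]; then
    mode="container_run"        # fall back to a plain podman run
else
    mode="container_runlabel"   # use the image's RUN label
fi
echo "$mode"
~~~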

Without custom container:
~~~
[root@openshift-master-0 ~]# bash -x toolbox 
+ set -eo pipefail
+ trap cleanup EXIT
+ REGISTRY=registry.redhat.io
+ IMAGE=rhel8/support-tools
++ whoami
+ TOOLBOX_NAME=toolbox-root
+ TOOLBOXRC=/root/.toolboxrc
+ main
+ setup
+ '[' -f /root/.toolboxrc ']'
+ TOOLBOX_IMAGE=registry.redhat.io/rhel8/support-tools
+ [[ '' =~ ^(--help|-h)$ ]]
+ run
+ image_exists
+ sudo podman inspect registry.redhat.io/rhel8/support-tools
++ image_runlabel
++ sudo podman image inspect registry.redhat.io/rhel8/support-tools --format '{{.Labels.run}}'
+ local 'runlabel=podman run -it --name NAME --privileged --ipc=host --net=host --pid=host -e HOST=/host -e NAME=NAME -e IMAGE=IMAGE -v /run:/run -v /var/log:/var/log -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -v /:/host IMAGE'
+ container_exists
+ sudo podman inspect toolbox-root
+ echo 'Spawning a container '\''toolbox-root'\'' with image '\''registry.redhat.io/rhel8/support-tools'\'''
Spawning a container 'toolbox-root' with image 'registry.redhat.io/rhel8/support-tools'
+ [[ -z podman run -it --name NAME --privileged --ipc=host --net=host --pid=host -e HOST=/host -e NAME=NAME -e IMAGE=IMAGE -v /run:/run -v /var/log:/var/log -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -v /:/host IMAGE ]]
+ '[' 'podman run -it --name NAME --privileged --ipc=host --net=host --pid=host -e HOST=/host -e NAME=NAME -e IMAGE=IMAGE -v /run:/run -v /var/log:/var/log -v /etc/machine-id:/etc/machine-id -v /etc/localtime:/etc/localtime -v /:/host IMAGE' == '<no value>' ']'
+ echo 'Detected RUN label in the container image. Using that as the default...'
Detected RUN label in the container image. Using that as the default...
+ container_runlabel
+ sudo podman container runlabel --name toolbox-root RUN registry.redhat.io/rhel8/support-tools
~~~

With custom container:
~~~
[root@openshift-master-0 ~]# cat .toolboxrc 
REGISTRY=docker.io
IMAGE=fedora:latest
[root@openshift-master-0 ~]# bash -x toolbox 
+ set -eo pipefail
+ trap cleanup EXIT
+ REGISTRY=registry.redhat.io
+ IMAGE=rhel8/support-tools
++ whoami
+ TOOLBOX_NAME=toolbox-root
+ TOOLBOXRC=/root/.toolboxrc
+ main
+ setup
+ '[' -f /root/.toolboxrc ']'
+ echo '.toolboxrc file detected, overriding defaults...'
.toolboxrc file detected, overriding defaults...
+ source /root/.toolboxrc
++ REGISTRY=docker.io
++ IMAGE=fedora:latest
+ TOOLBOX_IMAGE=docker.io/fedora:latest
+ [[ '' =~ ^(--help|-h)$ ]]
+ run
+ image_exists
+ sudo podman inspect docker.io/fedora:latest
++ image_runlabel
++ sudo podman image inspect docker.io/fedora:latest --format '{{.Labels.run}}'
+ local 'runlabel=<no value>'
+ container_exists
+ sudo podman inspect toolbox-root
+ echo 'Spawning a container '\''toolbox-root'\'' with image '\''docker.io/fedora:latest'\'''
Spawning a container 'toolbox-root' with image 'docker.io/fedora:latest'
+ [[ -z <no value> ]]
+ '[' '<no value>' == '<no value>' ']'
+ container_run
+ sudo podman run --hostname toolbox --name toolbox-root --privileged --net=host --pid=host --ipc=host --tty --interactive -e HOST=/host -e NAME=toolbox-root -e IMAGE=fedora:latest --security-opt label=disable --volume /run:/run --volume /var/log:/var/log --volume /etc/machine-id:/etc/machine-id --volume /etc/localtime:/etc/localtime --volume /:/host docker.io/fedora:latest
[root@toolbox /]# cat /etc/redhat-release 
Fedora release 33 (Thirty Three)
~~~

Comment 9 Andreas Karis 2021-01-30 17:52:53 UTC

Are we going to upgrade to github.com/containers/toolbox in RHEL 8.3 / CoreOS 4.7? If so, will we be able to use custom containers? Otherwise, this can be fixed easily; see my earlier comment.

Comment 3 Andreas Karis 2021-02-03 13:04:15 UTC
Just FYI: I cloned this from the RHEL 8.3 bug because:
* RHEL will fix this in 8.4 by moving to the new go toolbox

* OCP 4.7 will be on RHEL 8.3 and in the preview versions still uses the old bash toolbox

Either RHCOS 4.7 would have to move to the new toolbox command, or it would have to fix the bash script as suggested above.

Thanks,

Andreas

Comment 4 Timothée Ravier 2021-02-03 19:44:54 UTC
This is a consequence of the change I made for https://bugzilla.redhat.com/show_bug.cgi?id=1915318. I will push a fix, and will try to get it into a 4.7 z-stream if it does not make the release.

Comment 5 Andreas Karis 2021-02-04 09:44:33 UTC
Awesome, thanks!

Comment 10 Micah Abbott 2021-03-03 21:53:20 UTC
Since this got retargeted for 4.8, I am going to reset the status to MODIFIED, as we have an updated `toolbox` build for 4.8 and I want the bots to correctly transition the BZ and attach it to an advisory.

Comment 12 Michael Nguyen 2021-03-08 19:09:06 UTC
Verified on 4.8.0-0.nightly-2021-03-08-133419, which runs RHCOS 48.83.202103080317-0.



$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-03-08-133419   True        False         10m     Cluster version is 4.8.0-0.nightly-2021-03-08-133419
$ oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-134-237.us-west-2.compute.internal   Ready    master   31m   v1.20.0+aa519d9
ip-10-0-149-29.us-west-2.compute.internal    Ready    worker   23m   v1.20.0+aa519d9
ip-10-0-162-236.us-west-2.compute.internal   Ready    worker   23m   v1.20.0+aa519d9
ip-10-0-185-140.us-west-2.compute.internal   Ready    master   31m   v1.20.0+aa519d9
ip-10-0-196-203.us-west-2.compute.internal   Ready    worker   24m   v1.20.0+aa519d9
ip-10-0-211-226.us-west-2.compute.internal   Ready    master   31m   v1.20.0+aa519d9
$ oc debug node/ip-10-0-196-203.us-west-2.compute.internal
Starting pod/ip-10-0-196-203us-west-2computeinternal-debug ...
To use host binaries, run `chroot /host`
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# rpm-ostree status
State: idle
Deployments:
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6a29805478181c58ee8922085fc919cd19a15617f6e32ca9b0580c086fcfb41
              CustomOrigin: Managed by machine-config-operator
                   Version: 48.83.202103080317-0 (2021-03-08T03:20:38Z)

  ostree://646a9832dd0dc9fe174a2fc005863a9582186518a5476522a0e9bdccc0e5252a
                   Version: 47.83.202102090044-0 (2021-02-09T00:47:36Z)
sh-4.4# cat << EOF > ~/.toolboxrc
> IMAGE=fedora:latest
> REGISTRY=docker.io
> EOF
sh-4.4# 
sh-4.4# toolbox
.toolboxrc file detected, overriding defaults...
Trying to pull docker.io/library/fedora:latest...
Getting image source signatures
Copying blob 3856270ab03a done  
Copying config 33c4a622f3 done  
Writing manifest to image destination
Storing signatures
33c4a622f37cfad474fd76ecd3e9c04e6434bd9a1668130537bd814ddbdce7a5
Spawning a container 'toolbox-root' with image 'docker.io/fedora:latest'
[root@toolbox /]# exit
sh-4.4# exit
exit
sh-4.2# exit
exit

Removing debug pod ...

$ oc debug node/ip-10-0-196-203.us-west-2.compute.internal -- chroot /host rpm -qa toolbox
Starting pod/ip-10-0-196-203us-west-2computeinternal-debug ...
To use host binaries, run `chroot /host`
toolbox-0.0.8-3.rhaos4.8.el8.noarch

Comment 15 errata-xmlrpc 2021-07-27 22:37:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

