Bug 1888494 - imagepruner pod errors when image registry storage is not configured
Summary: imagepruner pod errors when image registry storage is not configured
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Image Registry
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 4.7.0
Assignee: Oleg Bulatov
QA Contact: Wenjing Zheng
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2020-10-15 02:22 UTC by Wenjing Zheng
Modified: 2021-02-24 15:26 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The pruner tried to detect the registry name using image streams. Consequence: When there were no image streams, the pruner failed to detect the registry name. Fix: The image registry operator now provides the pruner with the registry name when the registry is configured, or disables registry pruning when the registry is not installed. Result: The pruner no longer depends on the existence of image streams.
Clone Of:
Environment:
Last Closed: 2021-02-24 15:26:15 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Image registry operator log (1.41 MB, text/plain)
2020-10-15 02:22 UTC, Wenjing Zheng
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:26:32 UTC

Description Wenjing Zheng 2020-10-15 02:22:34 UTC
Created attachment 1721669 [details]
Image registry operator log

Description of problem:
If image registry storage is not configured, the imagepruner pod fails with the errors below:
$ oc logs pods/image-pruner-1602720000-vknwb
F1015 00:00:15.197481       1 helpers.go:115] error: unable to find the remote registry host: no managed image found
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000012001, 0xc0001584e0, 0x75, 0xc6)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:996 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x4d2aec0, 0xc000000003, 0x0, 0x0, 0xc0007c40e0, 0x48a9af4, 0xa, 0x73, 0x41d400)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:945 +0x191
k8s.io/klog/v2.(*loggingT).printDepth(0x4d2aec0, 0x3, 0x0, 0x0, 0x2, 0xc002917738, 0x1, 0x1)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:718 +0x165
k8s.io/klog/v2.FatalDepth(...)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1449
k8s.io/kubectl/pkg/cmd/util.fatal(0xc000da6820, 0x46, 0x1)
	/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1f0
k8s.io/kubectl/pkg/cmd/util.checkErr(0x347a520, 0xc000d99dc0, 0x31c65e8)
	/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x945
k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/go/src/github.com/openshift/oc/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
github.com/openshift/oc/pkg/cli/admin/prune/images.NewCmdPruneImages.func1(0xc0017058c0, 0xc001531e30, 0x0, 0x7)
	/go/src/github.com/openshift/oc/pkg/cli/admin/prune/images/images.go:161 +0x1c5
github.com/spf13/cobra.(*Command).execute(0xc0017058c0, 0xc001531dc0, 0x7, 0x7, 0xc0017058c0, 0xc001531dc0)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:846 +0x2c2
github.com/spf13/cobra.(*Command).ExecuteC(0xc000bcb600, 0x2, 0xc000bcb600, 0x2)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:950 +0x375
github.com/spf13/cobra.(*Command).Execute(...)
	/go/src/github.com/openshift/oc/vendor/github.com/spf13/cobra/command.go:887
main.main()
	/go/src/github.com/openshift/oc/cmd/oc/oc.go:110 +0x885

goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x4d2aec0)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/klog/v2.init.0
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/v2/klog.go:416 +0xd8

goroutine 36 [chan receive]:
k8s.io/klog.(*loggingT).flushDaemon(0x4d2ade0)
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/klog.go:1010 +0x8b
created by k8s.io/klog.init.0
	/go/src/github.com/openshift/oc/vendor/k8s.io/klog/klog.go:411 +0xd8

goroutine 13 [select]:
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x31c6510, 0x347e280, 0xc001178000, 0x1, 0xc0000aa360)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x31c6510, 0x12a05f200, 0x0, 0xc0006a8101, 0xc0000aa360)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/apimachinery/pkg/util/wait.Forever(0x31c6510, 0x12a05f200)
	/go/src/github.com/openshift/oc/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:81 +0x4f
created by k8s.io/component-base/logs.InitLogs
	/go/src/github.com/openshift/oc/vendor/k8s.io/component-base/logs/logs.go:58 +0x8a

goroutine 14 [select]:
io.(*pipe).Read(0xc0015dd3e0, 0xc000e2a000, 0x1000, 0x1000, 0x2aa4860, 0x1, 0xc000e2a000)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/io/pipe.go:57 +0xe7
io.(*PipeReader).Read(0xc00050a910, 0xc000e2a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/io/pipe.go:134 +0x4c
bufio.(*Scanner).Scan(0xc00165d800, 0x0)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/bufio/scan.go:214 +0xa9
github.com/openshift/oc/pkg/cli/admin/mustgather.newPrefixWriter.func1(0xc00165d800, 0x347fca0, 0xc000012018, 0x2fdfe6d, 0x17)
	/go/src/github.com/openshift/oc/pkg/cli/admin/mustgather/mustgather.go:424 +0x13e
created by github.com/openshift/oc/pkg/cli/admin/mustgather.newPrefixWriter
	/go/src/github.com/openshift/oc/pkg/cli/admin/mustgather/mustgather.go:423 +0x1d0

goroutine 67 [IO wait]:
internal/poll.runtime_pollWait(0x7f37a02ca1d8, 0x72, 0x3482f20)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/runtime/netpoll.go:220 +0x55
internal/poll.(*pollDesc).wait(0xc001799718, 0x72, 0xc001b00000, 0xfa7e, 0xfa7e)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc001799700, 0xc001b00000, 0xfa7e, 0xfa7e, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/internal/poll/fd_unix.go:159 +0x1b1
net.(*netFD).Read(0xc001799700, 0xc001b00000, 0xfa7e, 0xfa7e, 0x203000, 0x78699b, 0xc000e36160)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc0001147d0, 0xc001b00000, 0xfa7e, 0xfa7e, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/net/net.go:182 +0x8e
crypto/tls.(*atLeastReader).Read(0xc0004881a0, 0xc001b00000, 0xfa7e, 0xfa7e, 0xb7, 0xfa36, 0xc0018eb710)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/crypto/tls/conn.go:779 +0x62
bytes.(*Buffer).ReadFrom(0xc000e36280, 0x347a260, 0xc0004881a0, 0x41d785, 0x2bdcba0, 0x2ed2dc0)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000e36000, 0x347f7c0, 0xc0001147d0, 0x5, 0xc0001147d0, 0xa6)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/crypto/tls/conn.go:801 +0xf3
crypto/tls.(*Conn).readRecordOrCCS(0xc000e36000, 0x0, 0x0, 0xc0018ebd18)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/crypto/tls/conn.go:608 +0x115
crypto/tls.(*Conn).readRecord(...)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc000e36000, 0xc001928000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/crypto/tls/conn.go:1252 +0x15f
bufio.(*Reader).Read(0xc00093cc60, 0xc0005d68f8, 0x9, 0x9, 0xc0018ebd18, 0x31c9f00, 0x9bbceb)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x347a000, 0xc00093cc60, 0xc0005d68f8, 0x9, 0x9, 0x9, 0xc00007a060, 0x0, 0x347a520)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/io/io.go:314 +0x87
io.ReadFull(...)
	/opt/rh/go-toolset-1.15/root/usr/lib/go-toolset-1.15-golang/src/io/io.go:333
golang.org/x/net/http2.readFrameHeader(0xc0005d68f8, 0x9, 0x9, 0x347a000, 0xc00093cc60, 0x0, 0x0, 0xc0014aa180, 0x0)
	/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/frame.go:237 +0x89
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0005d68c0, 0xc0014aa180, 0x0, 0x0, 0x0)
	/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0018ebfa8, 0x0, 0x0)
	/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/transport.go:1794 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001926000)
	/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/transport.go:1716 +0x6f
created by golang.org/x/net/http2.(*Transport).newClientConn
	/go/src/github.com/openshift/oc/vendor/golang.org/x/net/http2/transport.go:695 +0x66e

$ oc describe co image-registry
Status:
  Conditions:
    Last Transition Time:  2020-10-15T01:56:48Z
    Message:               Available: Error: storage backend not configured
                           ImagePrunerAvailable: Pruner CronJob has been created
    Reason:                StorageNotConfigured
    Status:                False
    Type:                  Available
    Last Transition Time:  2020-10-15T01:56:00Z
    Message:               Progressing: Unable to apply resources: storage backend not configured
    Reason:                Error
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2020-10-15T01:56:48Z
    Message:               Degraded: Error: storage backend not configured
                           ImagePrunerDegraded: Job has reached the specified backoff limit
    Reason:   ImagePrunerJobFailed::StorageNotConfigured
    Status:   True
    Type:     Degraded


Version-Release number of selected component (if applicable):
4.6.0-rc.4 

How reproducible:
Always

Steps to Reproduce:
1. Install a cluster without configuring image registry storage.
2. Check the imagepruner pod state.
3. Check the image registry operator state.
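For steps 2 and 3, a minimal check sequence looks like the following. This is a sketch assuming the default openshift-image-registry namespace; the pruner pod name suffix varies per run, so the placeholder below must be replaced with the actual name.

```shell
# Step 2: locate the pruner pod spawned by the image-pruner CronJob
# and inspect its state and logs (replace the placeholder with the
# real pod name, e.g. image-pruner-1602720000-vknwb above).
oc get pods -n openshift-image-registry
oc logs -n openshift-image-registry <image-pruner-pod-name>

# Step 3: inspect the image registry operator's conditions.
oc describe co image-registry
```

On an affected cluster, step 2 shows the pruner pod in an Error state and step 3 reports the StorageNotConfigured and ImagePrunerDegraded conditions quoted below.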

Actual results:
The imagepruner pod fails with the panic shown above.

Expected results:
The imagepruner pod should fail with a clear warning, and should return to a normal state once image registry storage is configured.

Additional info:
Workaround:
a. Configure image registry storage.
b. Once the image registry is back, re-run the imagepruner job so that it completes successfully.
The image registry will then be available with no errors.
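The workaround can be sketched with `oc` as follows. This is a minimal illustration, not the only way to do it: the emptyDir backend is ephemeral and suitable only for testing, and the manual job name (image-pruner-manual) is an arbitrary choice.

```shell
# Step a: configure a storage backend for the registry.
# emptyDir is non-persistent and used here only for illustration;
# on some platforms spec.managementState may also need to be Managed.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

# Step b: once the registry is back, re-run the pruner by creating
# a one-off Job from the image-pruner CronJob.
oc create job image-pruner-manual \
  --from=cronjob/image-pruner -n openshift-image-registry

# Confirm the operator reports Available again.
oc describe co image-registry
```

After the manual job completes successfully, the ImagePrunerDegraded condition clears and the operator becomes Available.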

Comment 1 Stephen Cuppett 2020-10-15 11:38:15 UTC
Setting target release to the active development branch (4.7.0). For any fixes, where required and requested, cloned BZs will be created for those release maintenance streams where appropriate once they are identified.

Comment 2 Oleg Bulatov 2020-10-22 11:37:41 UTC
StorageNotConfigured is an abnormal state; it is expected that the pruner may misbehave in it. Lowering the severity.

Comment 5 Wenjing Zheng 2020-11-12 10:11:01 UTC
Verified on 4.7.0-0.nightly-2020-11-12-001200.

Comment 8 errata-xmlrpc 2021-02-24 15:26:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

