Description of problem:
Executing "dnf update" or "dnf install PACKAGE" in a fedora:34 container fails with "The futex facility returned an unexpected error code." followed by "Aborted (core dumped)".

Version-Release number of selected component (if applicable):
container version: fedora:34 (fedora:33 is also affected; fedora:32 still works)

How reproducible:

Steps to Reproduce:
1. Run a Fedora 34 container interactively:

[root@fed178 raspberryfan]# podman run -it registry.fedoraproject.org/fedora:34 /bin/bash
Trying to pull registry.fedoraproject.org/fedora:34...
Getting image source signatures
Copying blob bf82a539f89a done
Copying config 8f324a4e16 done
Writing manifest to image destination
Storing signatures
[root@18204df249d2 /]# dnf update

Actual results:
1. Run dnf from the container command line:

[root@18204df249d2 /]# dnf update
Fedora 34 openh264 (From Cisco) - armhfp        1.4 kB/s | 2.5 kB   00:01
The futex facility returned an unexpected error code.
[ === ] ---  B/s |   0  B     --:-- ETA
Aborted (core dumped)
[root@18204df249d2 /]#

2. Check the exit code:

[root@18204df249d2 /]# echo $?
134

3. Same result when running dnf install:

[root@c3f56c685066 /]# dnf install wget
The futex facility returned an unexpected error code.
[=== ] ---  B/s |   0  B     --:-- ETA
Aborted (core dumped)

Expected results:
Note: I have used a fedora:32 container to show the results that I expected.

1. Run "dnf update" in a fedora:32 container, so it checks whether there are any packages to update:

[root@fed178 raspberryfan]# podman run -it registry.fedoraproject.org/fedora:32 /bin/bash
[root@fae76fa7e939 /]# dnf update
Fedora 32 openh264 (From Cisco) - armhfp        1.2 kB/s | 2.6 kB   00:02
Fedora Modular 32 - armhfp                      1.2 MB/s | 4.4 MB   00:03
Fedora Modular 32 - armhfp - Updates            1.1 MB/s | 4.1 MB   00:03
Fedora 32 - armhfp - Updates                    1.2 MB/s |  26 MB   00:20
Fedora 32 - armhfp                              1.8 MB/s |  65 MB   00:36
Dependencies resolved.
Nothing to do.
Complete!
[root@fae76fa7e939 /]#

2. Run "dnf install wget" in a fedora:32 container, so it installs the package:

[root@fae76fa7e939 /]# dnf install wget
Last metadata expiration check: 0:09:11 ago on Sat May 8 06:43:55 2021.
Dependencies resolved.
===========================================================================
 Package         Architecture      Version             Repository     Size
===========================================================================
Installing:
 wget            armv7hl           1.21.1-2.fc32       updates       812 k

Transaction Summary
===========================================================================
Install  1 Package

Total download size: 812 k
Installed size: 3.1 M
Is this ok [y/N]: y
Downloading Packages:
wget-1.21.1-2.fc32.armv7hl.rpm                  2.5 MB/s | 812 kB    00:00
--------------------------------------------------------------------------
Total                                           790 kB/s | 812 kB    00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                   1/1
  Installing       : wget-1.21.1-2.fc32.armv7hl                        1/1
  Running scriptlet: wget-1.21.1-2.fc32.armv7hl                        1/1
  Verifying        : wget-1.21.1-2.fc32.armv7hl                        1/1

Installed:
  wget-1.21.1-2.fc32.armv7hl

Complete!
[root@fae76fa7e939 /]#

Additional info:
1. It is important to note that this issue did not occur in earlier builds of the fedora:34 (and fedora:33) container image; it started after the image was updated about a week ago.
2. Installed packages in the fedora:34 container:

[root@18204df249d2 /]# dnf list installed
Installed Packages
alternatives.armv7hl  1.15-2.fc34  @anaconda
audit-libs.armv7hl  3.0.1-2.fc34  @anaconda
basesystem.noarch  11-11.fc34  @anaconda
bash.armv7hl  5.1.0-2.fc34  @anaconda
bzip2-libs.armv7hl  1.0.8-6.fc34  @anaconda
ca-certificates.noarch  2020.2.41-7.fc34  @anaconda
coreutils.armv7hl  8.32-21.fc34  @anaconda
coreutils-common.armv7hl  8.32-21.fc34  @anaconda
cracklib.armv7hl  2.9.6-25.fc34  @anaconda
crypto-policies.noarch  20210213-1.git5c710c0.fc34  @anaconda
curl.armv7hl  7.76.0-1.fc34  @anaconda
cyrus-sasl-lib.armv7hl  2.1.27-8.fc34  @anaconda
dejavu-sans-fonts.noarch  2.37-16.fc34  @anaconda
dnf.noarch  4.6.1-1.fc34  @anaconda
dnf-data.noarch  4.6.1-1.fc34  @anaconda
elfutils-default-yama-scope.noarch  0.183-1.fc34  @anaconda
elfutils-libelf.armv7hl  0.183-1.fc34  @anaconda
elfutils-libs.armv7hl  0.183-1.fc34  @anaconda
expat.armv7hl  2.2.10-2.fc34  @anaconda
fedora-gpg-keys.noarch  34-1  @anaconda
fedora-release-common.noarch  34-1  @anaconda
fedora-release-container.noarch  34-1  @anaconda
fedora-release-identity-container.noarch  34-1  @anaconda
fedora-repos.noarch  34-1  @anaconda
fedora-repos-modular.noarch  34-1  @anaconda
file-libs.armv7hl  5.39-5.fc34  @anaconda
filesystem.armv7hl  3.14-5.fc34  @anaconda
fonts-filesystem.noarch  1:2.0.5-5.fc34  @anaconda
gawk.armv7hl  5.1.0-3.fc34  @anaconda
gdbm-libs.armv7hl  1:1.19-2.fc34  @anaconda
glib2.armv7hl  2.68.1-1.fc34  @anaconda
glibc.armv7hl  2.33-5.fc34  @anaconda
glibc-common.armv7hl  2.33-5.fc34  @anaconda
glibc-minimal-langpack.armv7hl  2.33-5.fc34  @anaconda
gmp.armv7hl  1:6.2.0-6.fc34  @anaconda
gnupg2.armv7hl  2.2.27-4.fc34  @anaconda
gnutls.armv7hl  3.7.1-2.fc34  @anaconda
gpgme.armv7hl  1.15.1-2.fc34  @anaconda
grep.armv7hl  3.6-2.fc34  @anaconda
gzip.armv7hl  1.10-4.fc34  @anaconda
ima-evm-utils.armv7hl  1.3.2-2.fc34  @anaconda
json-c.armv7hl  0.14-8.fc34  @anaconda
keyutils-libs.armv7hl  1.6.1-2.fc34  @anaconda
krb5-libs.armv7hl  1.19.1-3.fc34  @anaconda
langpacks-core-en_GB.noarch  3.0-14.fc34  @anaconda
langpacks-core-font-en.noarch  3.0-14.fc34  @anaconda
langpacks-en_GB.noarch  3.0-14.fc34  @anaconda
libacl.armv7hl  2.3.1-1.fc34  @anaconda
libarchive.armv7hl  3.5.1-2.fc34  @anaconda
libassuan.armv7hl  2.5.5-1.fc34  @anaconda
libattr.armv7hl  2.5.1-1.fc34  @anaconda
libblkid.armv7hl  2.36.2-1.fc34  @anaconda
libbrotli.armv7hl  1.0.9-4.fc34  @anaconda
libcap.armv7hl  2.48-2.fc34  @anaconda
libcap-ng.armv7hl  0.8.2-4.fc34  @anaconda
libcom_err.armv7hl  1.45.6-5.fc34  @anaconda
libcomps.armv7hl  0.1.15-6.fc34  @anaconda
libcurl.armv7hl  7.76.0-1.fc34  @anaconda
libdb.armv7hl  5.3.28-46.fc34  @anaconda
libdnf.armv7hl  0.60.0-1.fc34  @anaconda
libeconf.armv7hl  0.3.8-5.fc34  @anaconda
libfdisk.armv7hl  2.36.2-1.fc34  @anaconda
libffi.armv7hl  3.1-28.fc34  @anaconda
libgcc.armv7hl  11.0.1-0.3.fc34  @anaconda
libgcrypt.armv7hl  1.9.2-2.fc34  @anaconda
libgomp.armv7hl  11.0.1-0.3.fc34  @anaconda
libgpg-error.armv7hl  1.42-1.fc34  @anaconda
libidn2.armv7hl  2.3.0-5.fc34  @anaconda
libksba.armv7hl  1.5.0-2.fc34  @anaconda
libmetalink.armv7hl  0.1.3-14.fc34  @anaconda
libmodulemd.armv7hl  2.12.0-2.fc34  @anaconda
libmount.armv7hl  2.36.2-1.fc34  @anaconda
libnghttp2.armv7hl  1.43.0-2.fc34  @anaconda
libnsl2.armv7hl  1.3.0-2.fc34  @anaconda
libpsl.armv7hl  0.21.1-3.fc34  @anaconda
libpwquality.armv7hl  1.4.4-2.fc34  @anaconda
librepo.armv7hl  1.13.0-1.fc34  @anaconda
libreport-filesystem.noarch  2.14.0-17.fc34  @anaconda
libselinux.armv7hl  3.2-1.fc34  @anaconda
libsemanage.armv7hl  3.2-1.fc34  @anaconda
libsepol.armv7hl  3.2-1.fc34  @anaconda
libsigsegv.armv7hl  2.13-2.fc34  @anaconda
libsmartcols.armv7hl  2.36.2-1.fc34  @anaconda
libsolv.armv7hl  0.7.17-3.fc34  @anaconda
libssh.armv7hl  0.9.5-2.fc34  @anaconda
libssh-config.noarch  0.9.5-2.fc34  @anaconda
libsss_idmap.armv7hl  2.4.2-3.fc34  @anaconda
libsss_nss_idmap.armv7hl  2.4.2-3.fc34  @anaconda
libstdc++.armv7hl  11.0.1-0.3.fc34  @anaconda
libtasn1.armv7hl  4.16.0-4.fc34  @anaconda
libtirpc.armv7hl  1.3.1-1.fc34  @anaconda
libunistring.armv7hl  0.9.10-10.fc34  @anaconda
libusbx.armv7hl  1.0.24-2.fc34  @anaconda
libutempter.armv7hl  1.2.1-4.fc34  @anaconda
libuuid.armv7hl  2.36.2-1.fc34  @anaconda
libverto.armv7hl  0.3.2-1.fc34  @anaconda
libxcrypt.armv7hl  4.4.18-1.fc34  @anaconda
libxml2.armv7hl  2.9.10-10.fc34  @anaconda
libyaml.armv7hl  0.2.5-5.fc34  @anaconda
libzstd.armv7hl  1.4.9-1.fc34  @anaconda
lua-libs.armv7hl  5.4.2-2.fc34  @anaconda
lz4-libs.armv7hl  1.9.3-2.fc34  @anaconda
mpfr.armv7hl  4.1.0-5.fc34  @anaconda
ncurses-base.noarch  6.2-4.20200222.fc34  @anaconda
ncurses-libs.armv7hl  6.2-4.20200222.fc34  @anaconda
nettle.armv7hl  3.7.2-1.fc34  @anaconda
npth.armv7hl  1.6-6.fc34  @anaconda
openldap.armv7hl  2.4.57-2.fc34  @anaconda
openssl-libs.armv7hl  1:1.1.1k-1.fc34  @anaconda
p11-kit.armv7hl  0.23.22-3.fc34  @anaconda
p11-kit-trust.armv7hl  0.23.22-3.fc34  @anaconda
pam.armv7hl  1.5.1-5.fc34  @anaconda
pcre.armv7hl  8.44-3.fc34.1  @anaconda
pcre2.armv7hl  10.36-4.fc34  @anaconda
pcre2-syntax.noarch  10.36-4.fc34  @anaconda
popt.armv7hl  1.18-4.fc34  @anaconda
publicsuffix-list-dafsa.noarch  20190417-5.fc34  @anaconda
python-pip-wheel.noarch  21.0.1-2.fc34  @anaconda
python-setuptools-wheel.noarch  53.0.0-1.fc34  @anaconda
python3.armv7hl  3.9.2-1.fc34  @anaconda
python3-dnf.noarch  4.6.1-1.fc34  @anaconda
python3-gpg.armv7hl  1.15.1-2.fc34  @anaconda
python3-hawkey.armv7hl  0.60.0-1.fc34  @anaconda
python3-libcomps.armv7hl  0.1.15-6.fc34  @anaconda
python3-libdnf.armv7hl  0.60.0-1.fc34  @anaconda
python3-libs.armv7hl  3.9.2-1.fc34  @anaconda
python3-rpm.armv7hl  4.16.1.3-1.fc34  @anaconda
readline.armv7hl  8.1-2.fc34  @anaconda
rootfiles.noarch  8.1-29.fc34  @anaconda
rpm.armv7hl  4.16.1.3-1.fc34  @anaconda
rpm-build-libs.armv7hl  4.16.1.3-1.fc34  @anaconda
rpm-libs.armv7hl  4.16.1.3-1.fc34  @anaconda
rpm-sign-libs.armv7hl  4.16.1.3-1.fc34  @anaconda
sed.armv7hl  4.8-7.fc34  @anaconda
setup.noarch  2.13.7-3.fc34  @anaconda
shadow-utils.armv7hl  2:4.8.1-7.fc34  @anaconda
sqlite-libs.armv7hl  3.34.1-2.fc34  @anaconda
sssd-client.armv7hl  2.4.2-3.fc34  @anaconda
sudo.armv7hl  1.9.5p2-1.fc34  @anaconda
systemd-libs.armv7hl  248-2.fc34  @anaconda
tar.armv7hl  2:1.34-1.fc34  @anaconda
tpm2-tss.armv7hl  3.0.3-2.fc34  @anaconda
tzdata.noarch  2021a-1.fc34  @anaconda
util-linux.armv7hl  2.36.2-1.fc34  @anaconda
vim-minimal.armv7hl  2:8.2.2637-1.fc34  @anaconda
xz-libs.armv7hl  5.2.5-5.fc34  @anaconda
yum.noarch  4.6.1-1.fc34  @anaconda
zchunk-libs.armv7hl  1.1.9-2.fc34  @anaconda
zlib.armv7hl  1.2.11-26.fc34  @anaconda
Created attachment 1781496 [details]
journalctl

Added journalctl output. The dnf core dump looks related to /usr/lib/libc-2.33.so.
This seems more related to dnf than to the container image itself. Moving to the dnf component for investigation.
I've tried the reproducer on x86_64 and couldn't reproduce it. Ted, you don't mention it: is this only reproducible on armv7? If so, could we perhaps bisect which package upgrade is causing this in the image? That seems like a good way to narrow it down. I assume rebuilding the image is required for this? Clement?

Ted, is it possible to provide a full backtrace from the coredump?
Yes, this is only on armv7 (Raspberry Pi 2B). On aarch64 and amd64 (x86_64) it works fine.

Full backtrace from the coredump. Is this what you were looking for?

[root@fed157 ~]# coredumpctl list --since=today
TIME                            PID  UID  GID SIG     COREFILE EXE                SIZE
Tue 2021-05-11 06:52:42 CEST  29515    0    0 SIGABRT present  /usr/bin/python3.9 4.5M

[root@fed157 ~]# coredumpctl info 29515
           PID: 29515 (dnf)
           UID: 0 (root)
           GID: 0 (root)
        Signal: 6 (ABRT)
     Timestamp: Tue 2021-05-11 06:52:39 CEST (6h ago)
  Command Line: /usr/bin/python3 /usr/bin/dnf update
    Executable: /usr/bin/python3.9
 Control Group: /machine.slice/libpod-1f0b143b821f203098e9c9809e86c5762e7f0c02d44cc312481dd1aa51ae31a8.scope/container
          Unit: libpod-1f0b143b821f203098e9c9809e86c5762e7f0c02d44cc312481dd1aa51ae31a8.scope
         Slice: machine.slice
       Boot ID: 9471c84820ac4a2ea7fec8ddd428ed84
    Machine ID: 86fe2a8ab05c43fbaf7baa408b3ea19a
      Hostname: 1f0b143b821f
       Storage: /var/lib/systemd/coredump/core.dnf.0.9471c84820ac4a2ea7fec8ddd428ed84.29515.1620708759000000.zst (present)
     Disk Size: 4.5M
       Message: Process 29515 (dnf) of user 0 dumped core.

I will attach core.dnf.0.9471c84820ac4a2ea7fec8ddd428ed84.29515.1620708759000000.zst.
Created attachment 1781994 [details]
Coredump

/var/lib/systemd/coredump/core.dnf.0.9471c84820ac4a2ea7fec8ddd428ed84.29515.1620708759000000.zst
That is not useful. I'm not sure if the .zst would work at all, but I'm sure I wouldn't be able to load the coredump on a different architecture anyway.

To print the backtrace, run:

  coredumpctl debug 29515

You'll need gdb installed for that; it should open the gdb console. Then type:

  backtrace full

And paste the full output of that, please.
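If driving gdb interactively is awkward, something like this should work as a non-interactive alternative (just a sketch; it assumes gdb and the relevant debuginfo packages are installed, and the /tmp paths are only examples):

  # extract the core from the systemd journal storage and run gdb in batch mode
  coredumpctl dump 29515 --output=/tmp/dnf.core
  gdb /usr/bin/python3.9 /tmp/dnf.core -batch -ex "thread apply all bt full" > /tmp/dnf-backtrace.txt

Then attach /tmp/dnf-backtrace.txt here.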
Created attachment 1782073 [details] coredump dnf I installed gdb and run coredumpctl debug 29515 Next I installed over 200 gdb dependency packages, as recommended in the gdb console. Then I run coredumpctl debug 29515 again and saved all the output in coredump-dnf.txt I hope it is useful, but I have my doubts ;-)
Thanks. The backtrace is not useless, though it doesn't point to the culprit. Pasting the relevant part:

#0  0xb6b810d4 in __libc_signal_restore_set (set=0xbec8096c) at ../sysdeps/unix/sysv/linux/internal-signals.h:105
        _a1 = 0
        _nr = 175
        _a3tmp = 0
        _a1tmp = 2
        _a3 = 0
        _a4tmp = 8
        _a2tmp = -1094186636
        _a2 = -1094186636
        _a4 = 8
#1  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:47
        set = {__val = {0, 0, 3014407344, 0, 3067418392, 3014407344, 3011337064, 1, 3067293352, 0, 972469504, 3067415732, 3013791856, 3011295096, 3067423660, 3011295096, 0, 3010864904, 3010864904, 3010865244, 3014440454, 20697120, 3010865224, 3010864904, 20700944, 12, 3067353600, 3069561466, 0, 20697172, 3010865212, 20700944}}
        pid = <optimized out>
        tid = <optimized out>
        ret = 0
#2  0xb6b69fbc in __GI_abort () at abort.c:79
        save_stage = 1
        act = {__sigaction_handler = {sa_handler = 0x1645ea0, sa_sigaction = 0x1645ea0}, sa_mask = {__val = {23125480, 2980055192, 3200781172, 80, 972469504, 0, 23350660, 3200781172, 24649648, 23147584, 3042883392, 23147584, 30, 23125480, 2980055192, 3200781172, 1, 0, 3042885912, 23125480, 80, 23350592, 3200781236, 274877907, 0, 23125480, 1868850534, 1831756146, 1869771369, 1768697458, 1702061428, 1919252082}}, sa_flags = 980184622, sa_restorer = 0x1003038}
        sigs = {__val = {32, 0, 0, 20700944, 3010864904, 3067350800, 20700944, 3010864904, 3063890032, 3200781128, 0, 3067419296, 0, 3200781128, 12, 3014407344, 0, 3067418392, 3014407344, 3067282508, 3069445700, 3, 3200781104, 3067315232, 0, 159096470, 3200781048, 3068762576, 3013792648, 972469504, 3015454144, 0}}
#3  0xb6bbbd88 in fmemopen_seek (cookie=0x0, p=0xbec80974, w=<optimized out>) at fmemopen.c:112
        np = <optimized out>
        c = 0x0
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

I haven't seen this before; it looks like badly corrupted memory (assuming the core dump itself is not damaged, but the top few frames already look very suspicious). The bug can be anywhere: dnf, anything in the container, or even the host. I'm afraid that, given this is on ARM, my options to help are limited.

Can you try bisecting the RPM versions that changed between the two images (a working one and the broken one) by starting with the good one and gradually upgrading to the versions from the broken image? If installing newer versions of some RPMs introduces this crash, it's likely a bug in those; if not, it'll be a bug somewhere in the image creation.
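Roughly, the bisection could look like this (only a sketch; the package glob is just an example, and it assumes an older, still-working image, or a fedora:33 one, is available to start from):

  podman run -it <known-good-image> /bin/bash
  # inside the container, upgrade one group of suspect packages at a time to the f34 versions
  dnf -y --releasever=34 upgrade 'glibc*'
  dnf check-update   # still fine, or does it now abort with the futex error?

Repeat, adding more packages to the upgraded set, until the crash appears; the last group upgraded contains the likely culprit.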
Thanks for your update. I was able to reproduce this issue on a freshly installed Fedora 34 OS (another Raspberry Pi 2B, armv7hl) with a fresh fedora:34 armv7hl container image (created April 24, 2021). I experienced the same dnf update coredump there.

Previous versions of the fedora:34 container image are not available anymore (see https://registry.fedoraproject.org/repo/fedora/tags/).

Not sure how to continue with this. Is it possible to re-direct this issue to the team that is responsible for the fedora:34 container image?
(In reply to Ted Sluis from comment #9)
> Not sure how to continue with this. Is it possible to re-direct this issue
> to the team that is responsible for the fedora:34 container image?

Good question. I tried to reassign back to the fedora-container-image component, but I don't see it on the list. Clement, can you take a look at this? The easiest way to narrow this down seems to be to bisect which RPM upgrade broke the image.
Reassigned to fedora-container-image. All of our image builds are available in the buildsystem --> https://koji.fedoraproject.org/koji/packageinfo?packageID=26387

I can try to push a new update and see if that fixes the issue; otherwise, yes, we can try to test older images to identify the root cause of the problem.
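For reference, an older build could be tested locally roughly like this (a sketch; the file name is only an example of the Fedora-Container-Base naming scheme seen in Koji, and it assumes podman can load the xz-compressed docker archive directly):

  # download the armhfp tarball of a specific build from the Koji link above, then:
  podman load -i Fedora-Container-Base-34-20210424.0.armhfp.tar.xz
  podman run --rm -it <loaded image> dnf check-update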
Just pushed a new fedora:34 image, can you give it a try?

[cverna@localhost] $ podman images
REPOSITORY                          TAG   IMAGE ID      CREATED      SIZE
registry.fedoraproject.org/fedora   34    3567369c6711  4 hours ago  184 MB
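Something like this should be enough to retest (a sketch; --pull=always just forces podman to fetch the refreshed tag instead of reusing a locally cached image):

  podman run --pull=always --rm -it registry.fedoraproject.org/fedora:34 dnf check-update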
I'm getting this as well. The most recent version I've tried is the armhfp variant of Fedora-Container-Base-34-20210616.0 from the link in comment 11.
This is likely 32-bit related. I was trying to debug a 32-bit LLVM OOM in https://bugzilla.redhat.com/show_bug.cgi?id=1974927

Now, if mock/koji just used podman/kubernetes directly, there would already be a 32-bit container to pull. But since there isn't one, I hacked up my own and pushed it to quay.io/cgwalters/fedora-i686:34, built from the current snapshot of https://kojipkgs.fedoraproject.org/repos/f34-build/latest/i386/
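(For the record, roughly how such an image can be put together; this is only a sketch, the installroot path and image name are examples, and the repo URL is the snapshot mentioned above:)

  dnf -y --installroot=/tmp/f34-i686 --forcearch=i686 \
      --repofrompath=f34-build,https://kojipkgs.fedoraproject.org/repos/f34-build/latest/i386/ \
      --repo=f34-build --nogpgcheck install dnf fedora-release-container
  # pack the rootfs into an image
  tar -C /tmp/f34-i686 -c . | podman import - localhost/fedora-i686:34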
And it was then really easy to debug this; a quick strace from outside the container shows:

272619 futex_time64(0xf4cf0b28, FUTEX_WAIT_BITSET|FUTEX_CLOCK_REALTIME, 6, NULL, FUTEX_BITSET_MATCH_ANY <unfinished ...>
272619 <... futex_time64 resumed>) = -1 EPERM (Operation not permitted)

"EPERM from random system calls" immediately triggers my seccomp scars, and yep:

$ podman run --security-opt seccomp=unconfined --rm -ti quay.io/cgwalters/fedora-i686:34 setarch i686 bash
# dnf -y install cargo

then works fine. So... probably the podman seccomp filter needs to be updated to allow futex_time64.
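A quick way to check whether the default profile is the culprit (a sketch; it assumes the profile shipped by containers-common lives at /usr/share/containers/seccomp.json and that jq is installed):

  # is futex_time64 in the allow list?
  jq -r '.syscalls[] | select(.action == "SCMP_ACT_ALLOW") | .names[]' /usr/share/containers/seccomp.json \
      | grep -x futex_time64 || echo "futex_time64 not allowed"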
As for why this started just recently, I suspect glibc was updated to assume the system call exists and works. Perhaps https://github.com/bminor/glibc/commit/a3e7aead03d558e77fc8b9dc4d567b7bb8619545 ?
Temporarily reassigning to glibc as a way to get them CC'd and discuss whether glibc needs to at least temporarily revert the use of the syscall.
(Right, sorry: the seccomp policy is actually in containers-common.)
(In reply to Colin Walters from comment #18)
> Temporarily reassigning to glibc as a way to get them CC'd and discuss
> whether glibc needs to at least temporarily revert the use of the syscall.

So what's podman's position here? Will you apply the runc kludge (generic ENOSYS handling), or will we always be fighting these EPERM errors?

If the former, what needs to happen before you can roll out generic ENOSYS handling?
New koji builds with the updated seccomp.json can be found here:
f33: https://koji.fedoraproject.org/koji/buildinfo?buildID=1775245
f34: https://koji.fedoraproject.org/koji/buildinfo?buildID=1775182

I haven't submitted these to bodhi yet because I plan to push them together with podman v3.2.2, scheduled for release tomorrow.
I think this is Giuseppe's call.
futex_time64 is now permitted with https://github.com/containers/common/pull/597

(In reply to Florian Weimer from comment #21)
> So what's podman's position here? Will you apply the runc kludge (generic
> ENOSYS handling), or will we always be fighting these EPERM errors?
>
> If the former, what needs to happen before you can roll out generic ENOSYS
> handling?

That issue is also fixed in containers/common: we switched the default for unknown syscalls to ENOSYS instead of EPERM.
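On a host with the updated containers-common, the change should be visible in the default profile (a sketch; it assumes the profile is at /usr/share/containers/seccomp.json and that the ENOSYS behaviour is expressed via a defaultErrnoRet field; both are assumptions about this particular build):

  jq '{defaultAction, defaultErrnoRet}' /usr/share/containers/seccomp.json
  # expected: an SCMP_ACT_ERRNO default with errno 38 (ENOSYS) rather than 1 (EPERM),
  # plus futex_time64 present in the allow list as checked earlier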
FEDORA-2021-bc6a62a2c5 has been submitted as an update to Fedora 34. https://bodhi.fedoraproject.org/updates/FEDORA-2021-bc6a62a2c5
FEDORA-2021-0c53d8738d has been submitted as an update to Fedora 33. https://bodhi.fedoraproject.org/updates/FEDORA-2021-0c53d8738d
FEDORA-2021-bc6a62a2c5 has been pushed to the Fedora 34 testing repository.
Soon you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2021-bc6a62a2c5`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2021-bc6a62a2c5
See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
FEDORA-2021-0c53d8738d has been pushed to the Fedora 33 testing repository.
Soon you'll be able to install the update with the following command:
`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2021-0c53d8738d`
You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2021-0c53d8738d
See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
FEDORA-2021-bc6a62a2c5 has been pushed to the Fedora 34 stable repository. If problem still persists, please make note of it in this bug report.
FEDORA-2021-0c53d8738d has been pushed to the Fedora 33 stable repository. If problem still persists, please make note of it in this bug report.