Created attachment 1592964 [details]
coredumpctl gdb output for dnf-makecache.service abort using bt then thread apply all bt full
Description of problem:
dnf-makecache.service ran automatically in the background with the command line /usr/bin/python3 /usr/bin/dnf makecache --timer
coredumpctl showed that dnf makecache --timer aborted. sudo coredumpctl gdb showed that a libdnf::File::NotOpenedException was thrown in frame #5 of the trace, in __cxxabiv1::__cxa_throw. Many errors like "Cannot access memory at address 0x3"
appeared, along with an integer overflow message:
"nevraTuple = Python Exception <class 'OverflowError'> int too big to convert: "
in frame #9, (anonymous namespace)::readModuleMetadataFromRepo at /usr/src/debug/libdnf-0.35.1-2.fc30.x86_64/libdnf/dnf-sack.cpp:2386
and frame #10, dnf_sack_filter_modules_v2[abi:cxx11] at the same location, /usr/src/debug/libdnf-0.35.1-2.fc30.x86_64/libdnf/dnf-sack.cpp:2386.
Similar errors like "Cannot access memory at address 0x6e00000077" appeared in frame #6, libdnf::CompressedFile::getContent[abi:cxx11]() (this=0x55fc78eb81a0) at /usr/src/debug/libdnf-0.35.1-2.fc30.x86_64/libdnf/utils/CompressedFile.cpp:24.
The updates-modular repository was being processed in frame #8, libdnf::ModulePackageContainer::add at /usr/src/debug/libdnf-0.35.1-2.fc30.x86_64/libdnf/module/ModulePackageContainer.cpp:282
modules_fn = "/var/cache/dnf/updates-modular-783da5de2e38c644/repodata/d44839b85f28c11f66075f38f94c55f557b34723bf0a91f9687cf66d4ede9061-modules.yaml.gz"
I'll attach the trace captured with coredumpctl gdb, using bt and then
thread apply all bt full.
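For reference, the attached trace was captured roughly as follows. The interactive form is what the report describes; the non-interactive variant is a sketch (the /tmp/dnf.core output path is hypothetical, and the match argument assumes the crashing executable was /usr/bin/python3 as shown in the command line above):

```shell
# Interactive form used for the attached trace:
#   sudo coredumpctl gdb        # opens gdb on the most recent core, then:
#   (gdb) bt
#   (gdb) thread apply all bt full

# Non-interactive sketch: extract the core, then run gdb in batch mode.
sudo coredumpctl dump /usr/bin/python3 -o /tmp/dnf.core
gdb -batch -ex 'bt' -ex 'thread apply all bt full' /usr/bin/python3 /tmp/dnf.core
```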
Version-Release number of selected component (if applicable):
libdnf-0:0.35.1-2.fc30.x86_64, which is a scratch build described at
How reproducible:
This crash happened once in roughly 50 runs of dnf makecache --timer by dnf-makecache.service with libdnf-0:0.35.1-2.fc30.x86_64.
Steps to Reproduce:
1. Boot F30 KDE Plasma spin fully updated with updates-testing enabled
2. Log in to Plasma on Wayland from sddm
3. Install dnf-0:4.2.7-1.fc30.noarch and libdnf-0:0.35.1-2.fc30.x86_64 from koji
4. Run dnf-makecache.service or dnf makecache --timer at least 50 times
5. sudo coredumpctl gdb
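Since the trace shows the crash while reading the cached modular metadata (modules_fn above), a quick integrity check of that cached archive may help anyone trying to reproduce this. This is a diagnostic sketch, not part of the original report; the glob pattern is an assumption based on the modules_fn path in the trace:

```shell
# Check every cached updates-modular metadata archive for gzip corruption.
# The path pattern is based on the modules_fn value shown in the trace.
for f in /var/cache/dnf/updates-modular-*/repodata/*-modules.yaml.gz; do
    [ -e "$f" ] || continue          # glob matched nothing: skip
    if gzip -t "$f" 2>/dev/null; then
        echo "OK:      $f"
    else
        echo "CORRUPT: $f"
    fi
done
```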
Actual results:
dnf makecache --timer run by dnf-makecache.service aborted with libdnf::File::NotOpenedException

Expected results:
No dnf crash
The report at https://bugzilla.redhat.com/show_bug.cgi?id=1693476 had a dnf update crash in F29 involving libdnf::File::NotOpenedException with a similar trace. The underlying problem might have been the same as in that report.
I tried to submit this crash with abrt, but abrt reported that there was insufficient information in the trace to submit it to bugzilla. The FAF link for this crash is https://retrace.fedoraproject.org/faf/reports/2641963/
Thank you very much for your report. Please are you able to reproduce the issue with the latest dnf, libdnf, and librepo?
(In reply to Jaroslav Mracek from comment #1)
> Thank you very much for your report. Please are you able to reproduce the
> issue with the latest dnf, libdnf, and librepo?
Jaroslav, I saw this crash only once. I've had the F31 modular repos disabled in recent months since I don't have any modules installed. I enabled the F31 fedora-modular, updates-modular, and updates-testing-modular repos to try to reproduce this crash. I didn't see the crash with dnf makecache --timer run by dnf-makecache.service in the background several times, nor with sudo dnf makecache --refresh and sudo dnf upgrade --refresh from the command line about 10 times. The versions I used were dnf-4.2.17-1.fc31.noarch, libdnf-0.39.1-1.fc31.x86_64, and librepo-1.11.1-1.fc31.x86_64. The FAF entry created when I tried to submit this crash using abrt has 6 reports from others, the last from 2019-09-21: https://retrace.fedoraproject.org/faf/reports/2641963/
The crash might be fixed, but it's difficult to be sure due to its low frequency. I'll comment here if I see it again. Thanks.
From https://retrace.fedoraproject.org/faf/reports/2641963/ I can suggest that the issue was resolved, because it was reported 6 times in a short period and then there were no additional reports. Without additional data or a reproducer we cannot do much. Anyone with a reproducer, please feel free to reopen the bug report. Thanks for the bug report, but at the present time we cannot resolve the issue.