Bug 2224466

Summary: anaconda encounters "malloc(): invalid size (unsorted)" before launching the GUI installer
Product: Red Hat Enterprise Linux 9
Component: dnf
Version: 9.3
Hardware: All
OS: Linux
Status: CLOSED MIGRATED
Severity: medium
Priority: unspecified
Reporter: Yihuang Yu <yihyu>
Assignee: Marek Blaha <mblaha>
QA Contact: swm-qe
CC: james.antill, jkonecny, jstodola, rvykydal, vslavik
Target Milestone: rc
Target Release: ---
Keywords: MigratedToJIRA
Flags: pm-rhel: mirror+
Doc Type: If docs needed, set a value
Last Closed: 2023-09-21 17:45:56 UTC
Type: Bug

Description Yihuang Yu 2023-07-21 02:59:37 UTC
Description of problem:
During the past few rounds of KVM guest installation testing, we found the anaconda pane dead. When this problem is triggered, the console log contains malloc-related errors:
x86: malloc(): unsorted double linked list corrupted
aarch64: malloc(): invalid size (unsorted)

The kickstart file uses GUI (graphical) mode:
[stdlog] 2023-07-19 05:09:07,271 avocado.virttest.tests.unattended_install DEBUG| Unattended install contents:
[stdlog] 2023-07-19 05:09:07,271 avocado.virttest.tests.unattended_install DEBUG| cdrom
[stdlog] 2023-07-19 05:09:07,272 avocado.virttest.tests.unattended_install DEBUG| graphical
...
...

The problem can be triggered with both anaconda-34.25.3.6-1.el9 and anaconda-34.25.3.7-1.el9. I suspect the new kernel-switcher feature introduced this issue, but that is just a guess.

Version-Release number of selected component (if applicable):
RHEL-9.3.0-20230719.0-aarch64-dvd1.iso (anaconda 34.25.3.7-1.el9)

How reproducible:
6/100

Steps to Reproduce:
1. Start a qemu process for guest installation with "MALLOC_PERTURB_=1", example qemu command:
MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -blockdev '{"node-name": "file_aavmf_code", "driver": "file", "filename": "/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.qcow2", "auto-read-only": true, "discard": "unmap"}' \
    -blockdev '{"node-name": "drive_aavmf_code", "driver": "qcow2", "read-only": true, "file": "file_aavmf_code"}' \
    -blockdev '{"node-name": "file_aavmf_vars", "driver": "file", "filename": "/root/avocado/data/avocado-vt/avocado-vt-vm1_rhel930-aarch64-64k-virtio-scsi_qcow2_filesystem_VARS.qcow2", "auto-read-only": true, "discard": "unmap"}' \
    -blockdev '{"node-name": "drive_aavmf_vars", "driver": "qcow2", "read-only": false, "file": "file_aavmf_vars"}' \
    -machine virt,gic-version=host,memory-backend=mem-machine_mem,pflash0=drive_aavmf_code,pflash1=drive_aavmf_vars \
    -device '{"id": "pcie-root-port-0", "driver": "pcie-root-port", "multifunction": true, "bus": "pcie.0", "addr": "0x1", "chassis": 1}' \
    -device '{"id": "pcie-pci-bridge-0", "driver": "pcie-pci-bridge", "addr": "0x0", "bus": "pcie-root-port-0"}'  \
    -nodefaults \
    -device '{"id": "pcie-root-port-1", "port": 1, "driver": "pcie-root-port", "addr": "0x1.0x1", "bus": "pcie.0", "chassis": 2}' \
    -device '{"driver": "virtio-gpu-pci", "bus": "pcie-root-port-1", "addr": "0x0"}' \
    -m 8192 \
    -object '{"size": 8589934592, "id": "mem-machine_mem", "qom-type": "memory-backend-ram"}'  \
    -smp 4,maxcpus=4,cores=2,threads=1,clusters=1,sockets=2  \
    -cpu 'host' \
    -serial unix:'/var/tmp/avocado_60klyq8d/serial-serial0-20230719-050907-v6BwFrIg',server=on,wait=off \
    -device '{"id": "pcie-root-port-2", "port": 2, "driver": "pcie-root-port", "addr": "0x1.0x2", "bus": "pcie.0", "chassis": 3}' \
    -device '{"driver": "qemu-xhci", "id": "usb1", "bus": "pcie-root-port-2", "addr": "0x0"}' \
    -device '{"driver": "usb-tablet", "id": "usb-tablet1", "bus": "usb1.0", "port": "1"}' \
    -device '{"id": "pcie-root-port-3", "port": 3, "driver": "pcie-root-port", "addr": "0x1.0x3", "bus": "pcie.0", "chassis": 4}' \
    -device '{"id": "virtio_scsi_pci0", "driver": "virtio-scsi-pci", "bus": "pcie-root-port-3", "addr": "0x0"}' \
    -blockdev '{"node-name": "file_image1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/images/rhel930-aarch64-64k-virtio-scsi.qcow2", "cache": {"direct": true, "no-flush": false}}' \
    -blockdev '{"node-name": "drive_image1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_image1"}' \
    -device '{"driver": "scsi-hd", "id": "image1", "drive": "drive_image1", "write-cache": "on"}' \
    -device '{"id": "pcie-root-port-4", "port": 4, "driver": "pcie-root-port", "addr": "0x1.0x4", "bus": "pcie.0", "chassis": 5}' \
    -device '{"driver": "virtio-net-pci", "mac": "9a:4b:a3:a2:84:ef", "rombar": 0, "id": "idlGmTu3", "netdev": "idm5R4TI", "bus": "pcie-root-port-4", "addr": "0x0"}'  \
    -netdev tap,id=idm5R4TI,vhost=on,vhostfd=16,fd=10 \
    -blockdev '{"node-name": "file_cd1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/iso/linux/RHEL-9.3.0-20230719.0-aarch64-dvd1.iso", "cache": {"direct": true, "no-flush": false}}' \
    -blockdev '{"node-name": "drive_cd1", "driver": "raw", "read-only": true, "cache": {"direct": true, "no-flush": false}, "file": "file_cd1"}' \
    -device '{"driver": "scsi-cd", "id": "cd1", "drive": "drive_cd1", "write-cache": "on"}' \
    -blockdev '{"node-name": "file_unattended", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/images/rhel930-aarch64/ks.iso", "cache": {"direct": true, "no-flush": false}}' \
    -blockdev '{"node-name": "drive_unattended", "driver": "raw", "read-only": true, "cache": {"direct": true, "no-flush": false}, "file": "file_unattended"}' \
    -device '{"driver": "scsi-cd", "id": "unattended", "drive": "drive_unattended", "write-cache": "on"}'  \
    -kernel '/home/kvm_autotest_root/images/rhel930-aarch64/vmlinuz'  \
    -append 'inst.sshd inst.repo=cdrom inst.ks=cdrom:/ks.cfg net.ifnames=0 console=ttyAMA0,38400'  \
    -initrd '/home/kvm_autotest_root/images/rhel930-aarch64/initrd.img'  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -no-shutdown \
    -enable-kvm

2. Check the console log

Actual results:
2023-07-19 05:09:38: [anaconda]1:main* 2:shell  3:log  4:sto><'echo -n "Switch tab: Alt+Tab | Help: F[m(B[?25h[1;1H[?25l[7m[24;1H[anaconda]1:main* 2:shell  3:log  4:storage-log >Switch tab: Alt+Tab | Help: F1 [m(B[?25h[1;1HStarting installer, one moment...
2023-07-19 05:09:39: [?7727h
2023-07-19 05:09:39: anaconda 34.25.3.7-1.el9 for Red Hat Enterprise Linux 9.3 (pre-release) started.[3;1H * installation log files are stored in /tmp during the installation
2023-07-19 05:09:39:  * shell is available on TTY2
2023-07-19 05:09:39:  * when reporting a bug add logs from /tmp as separate text/plain attachments
2023-07-19 05:09:49: Queued start job for default target Main User Target.
2023-07-19 05:09:49: Startup finished in 188ms.
2023-07-19 05:09:53: malloc(): invalid size (unsorted)
2023-07-19 05:09:53: Received SIGHUP.
2023-07-19 05:09:53: Reloading.
2023-07-19 05:09:53: [1;23r[23;1H
2023-07-19 05:09:53: Pane is dead (signal[C6, Wed Jul 19 09:09:53 2023)[K[1;24r[23;50H[?25l[Hanaconda 34.25.3.7-1.el9 for Red Hat Enterprise Linux 9.3 (pre-release) started.[2;1H * installation log files are stored in /tmp during the installation[K
2023-07-19 05:09:53:  * shell is available on TTY2[K
2023-07-19 05:09:53:  * when reporting a bug add logs from /tmp as separate text/plain attachments[K
2023-07-19 05:09:53: malloc(): invalid size (unsorted)[K

Expected results:
The installation completes successfully.

Additional info:
Besides the malloc problem, other errors appear in different installation runs.

1. 
2023-07-19 05:18:17: Anaconda received signal 11!.
2023-07-19 05:18:17: /usr/lib64/python3.9/site-packages/pyanaconda/_isys.so(+0x10b8)[0xffffa87ba0b8]
2023-07-19 05:18:17: linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffbb3397e0]
2023-07-19 05:18:17: /lib64/libsolv.so.1(+0x39ec0)[0xffffa5c82ec0]
2023-07-19 05:18:17: /lib64/libsolv.so.1(repodata_set_void+0x60)[0xffffa5c85730]
2023-07-19 05:18:17: /lib64/libsolv.so.1(repodata_set_sourcepkg+0x30c)[0xffffa5c87e8c]
2023-07-19 05:18:17: /lib64/libsolvext.so.1(+0x2a284)[0xffffa5c21284]
2023-07-19 05:18:17: /lib64/libxml2.so.2(+0x1220bc)[0xffffabd5c0bc]
2023-07-19 05:18:17: [1;23r[23;1H
2023-07-19 05:18:17: 
2023-07-19 05:18:17: 
2023-07-19 05:18:17: 
2023-07-19 05:18:17: [10;1H/lib64/libxml2.so.2(+0x12b80c)[0xffffabd6580c]
2023-07-19 05:18:17: /lib64/libxml2.so.2(xmlParseChunk+0x214)[0xffffabc869d4]
2023-07-19 05:18:17: /lib64/libsolvext.so.1(+0x2e430)[0xffffa5c25430]
2023-07-19 05:18:17: /lib64/libsolvext.so.1(repo_add_rpmmd+0xe4)[0xffffa5c1b3d4]
2023-07-19 05:18:17: /lib64/libdnf.so.2(dnf_sack_load_repo+0x1cc)[0xffffa5d882b0]
2023-07-19 05:18:17: /usr/lib64/python3.9/site-packages/hawkey/_hawkey.so(+0x25e9c)[0xffffa54f3e9c]
2023-07-19 05:18:17: /lib64/libpython3.9.so.1.0(+0xd4820)[0xffffbaed4820]
2023-07-19 05:18:17: /lib64/libpython3.9.so.1.0(_PyObject_Call+0x80)[0xffffbaed3220]
2023-07-19 05:18:17: /lib64/libpython3.9.so.1.0(_PyEval_EvalFrameDefault+0x5454)[0xffffbaebf0b4]
2023-07-19 05:18:17: /lib64/libpython3.9.so.1.0(+0xc8c30)[0xffffbaec8c30]
2023-07-19 05:18:17: /lib64/libpython3.9.so.1.0(_PyEval_EvalFrameDefault+0x77c)[0xffffbaeba3dc][K
2023-07-19 05:18:17: /lib64/libpython3.9.so.1.0(+0xb8c70)[0xffffbaeb8c70][K
2023-07-19 05:18:17: /lib64/libpython3.9.so.1.0(_PyFunction_Vectorcall+0x168)[0xffffbaec89c8][K
2023-07-19 05:18:17: [K[1;24r[23;1H
2023-07-19 05:18:17: [New LWP 2324][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2325][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2521][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[1;23r[23;1H
2023-07-19 05:18:17: [A[New LWP 2601]
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2694][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2695][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2702][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2704][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2707][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H[New LWP 2708][1;23r[23;1H
2023-07-19 05:18:17: [K[1;24r[23;1H
2023-07-19 05:18:19: [1;23r[23;1H
2023-07-19 05:18:19: 
2023-07-19 05:18:19: [2A[Thread debugging using libthread_db enabled]
2023-07-19 05:18:19: Using host libthread_db library "/lib64/libthread_db.so.1".[K
2023-07-19 05:18:19: [K[1;24r[23;1H[1;23r[23;1H
2023-07-19 05:18:19: 
2023-07-19 05:18:19: [2A0x0000ffffbacd112c in __futex_abstimed_wait_cancelable64 () from /lib64/libc.so.6[K
2023-07-19 05:18:19: [K[1;24r[23;1H
2023-07-19 05:18:21: [1;23r[23;1H
2023-07-19 05:18:21: [ASaved corefile /tmp/anaconda.core.2296
2023-07-19 05:18:21: [K[1;24r[23;1H[1;23r[23;1H
2023-07-19 05:18:21: [AError waiting on gcore: Interrupted system call
2023-07-19 05:18:21: [K[1;24r[23;1H[1;23r[23;1H
2023-07-19 05:18:21: [A[Inferior 1 (process 2296) detached]
2023-07-19 05:18:21: [K[1;24r[23;1HReceived SIGHUP.
2023-07-19 05:18:21: Reloading.
2023-07-19 05:18:21: [1;23r[23;1H
2023-07-19 05:18:21: Pane is dead (status[C1, Wed Jul 19 09:18:21 2023)[K[1;24r[23;50H[?25l[H/lib64/libpython3.9.so.1.0(+0xc8c30)[0xffffbaec8c30][K
2023-07-19 05:18:21: /lib64/libpython3.9.so.1.0(_PyEval_EvalFrameDefault+0x77c)[0xffffbaeba3dc][K
2023-07-19 05:18:21: /lib64/libpython3.9.so.1.0(+0xb8c70)[0xffffbaeb8c70][K
2023-07-19 05:18:21: /lib64/libpython3.9.so.1.0(_PyFunction_Vectorcall+0x168)[0xffffbaec89c8][K
2023-07-19 05:18:21: [New LWP 2324][K
2023-07-19 05:18:21: [New LWP 2325][K
2023-07-19 05:18:21: [New LWP 2521][K
2023-07-19 05:18:21: [New LWP 2601][K
2023-07-19 05:18:21: [New LWP 2694][K
2023-07-19 05:18:21: [New LWP 2695][K
2023-07-19 05:18:21: [New LWP 2702][K
2023-07-19 05:18:21: [New LWP 2704][K
2023-07-19 05:18:21: [New LWP 2707][K
2023-07-19 05:18:21: [New LWP 2708][K
2023-07-19 05:18:21: [Thread debugging using libthread_db enabled][K
2023-07-19 05:18:21: Using host libthread_db library "/lib64/libthread_db.so.1".[K
2023-07-19 05:18:21: 0x0000ffffbacd112c in __futex_abstimed_wait_cancelable64 () from /lib64/libc.so.6[K
2023-07-19 05:18:21: Saved corefile /tmp/anaconda.core.2296[K
2023-07-19 05:18:21: Error waiting on gcore: Interrupted system call[K

2. 
2023-07-17 07:25:49: python3: /builddir/build/BUILD/libdnf-0.69.0/libdnf/sack/query.cpp:2316: void libdnf::Query::Impl::apply(): Assertion `m.size == result->getMap()->size' failed.[8;1H
2023-07-17 07:25:49: Received SIGHUP.
2023-07-17 07:25:49: Reloading.

3.
2023-07-18 00:00:18: (anaconda:2054): Gtk-[1mWARNING[m(B **: 04:00:18.933: Could not load a pixbuf from icon theme.
2023-07-18 00:00:18: This may indicate that pixbuf loaders or the mime database could not be found.
2023-07-18 00:00:19: **
2023-07-18 00:00:19: Gtk:ERROR:../gtk/gtkiconhelper.c:494:ensure_surface_for_gicon: assertion failed (error == NULL): Failed to load /usr/share/icons/Adwaita/16x16/status/image-missing.png: Error reading from file: Input/output error (g-io-error-quark, 0)
2023-07-18 00:00:19: Bail out! Gtk:ERROR:../gtk/gtkiconhelper.c:494:ensure_surface_for_gicon: assertion failed (error == NULL): Failed to load /usr/share/icons/Adwaita/16x16/status/image-missing.png: Error reading from file: Input/output error (g-io-error-quark, 0)
2023-07-18 00:00:19: Received SIGHUP.
2023-07-18 00:00:19: Reloading.
2023-07-18 00:00:19: [1;23r[23;1H
2023-07-18 00:00:19: Pane is dead (signal[C6, Tue Jul 18 04:00:19 2023)[K[1;24r[23;50H[?25l[Hanaconda 34.25.3.7-1.el9 for Red Hat Enterprise Linux 9.3 (pre-release) started.[2;1H * installation log files are stored in /tmp during the installation[K
2023-07-18 00:00:19:  * shell is available on TTY2[K
2023-07-18 00:00:19:  * when reporting a bug add logs from /tmp as separate text/plain attachments[K
2023-07-18 00:00:19: [K
2023-07-18 00:00:19: (anaconda:2054): Gtk-[1mWARNING[m(B **: 04:00:18.933: Could not load a pixbuf from icon theme.[K
2023-07-18 00:00:19: This may indicate that pixbuf loaders or the mime database could not be found.[K
2023-07-18 00:00:19: **[K
2023-07-18 00:00:19: Gtk:ERROR:../gtk/gtkiconhelper.c:494:ensure_surface_for_gicon: assertion failed (error == NULL): Failed to load /usr/share/icons/Adwaita/16x16/status/image-missing.png: Error reading from file: Input/output error (g-io-error-quark, 0)[K
2023-07-18 00:00:19: Bail out! Gtk:ERROR:../gtk/gtkiconhelper.c:494:ensure_surface_for_gicon: assertion failed (error == NULL): Failed to load /usr/share/icons/Adwaita/16x16/status/image-missing.png: Error reading from file: Input/output error (g-io-error-quark, 0)[K

Comment 2 Jan Stodola 2023-07-21 07:25:53 UTC
Hello Yihuang,
since you already append "inst.sshd" on the kernel cmdline, could you please log in to the system when the problem appears and get the corefile (/tmp/anaconda.core.*)? If the installer powers off or reboots the system after the problem occurs, try appending "inst.nokill" to the kernel cmdline.
Also, please get /tmp/syslog, which is not available in the logs in comment 1.
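To make the request concrete, a minimal sketch of the collection step, run inside the installer (on the TTY2 shell or over the inst.sshd connection). The paths follow this report; the check-then-report loop is just an illustrative assumption, not a prescribed procedure:

```shell
# Hypothetical collection sketch: list the artifacts Jan asked for.
# On a machine that is not the installer, the files are expected to be missing.
for f in /tmp/anaconda.core.* /tmp/syslog; do
    if [ -e "$f" ]; then
        echo "would attach: $f"
    else
        echo "not present here: $f"
    fi
done
```

The files found this way can then be copied off the guest (e.g. with scp over the inst.sshd connection) and attached to the bug.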

Comment 6 Vladimír Slávik 2023-08-14 09:22:40 UTC
Hello, the isys part is misleading, in multiple ways. We handle core dumps via isys, so that says nothing. It's a synchronous hook for signals, you can see in the first comment traceback that on top you have isys which does the dump, then kernel which called it, then libsolv... except kernel delivers signals semi-randomly, mostly to main thread, so the rest of the stack could be related, or not.

Looking at the other error with GTK not finding an icon... no idea. We had to replace some icons in the past, perhaps these GNOME changes trickled into RHEL only now?

Another thing I note is that you have a KVM guest doing MALLOC_PERTURB_=1 and inside we run with MALLOC_PERTURB_=204 in our systemd units. No idea how this blends.

If I were to debug this, I'd check:
- I see no actual mention if running with MALLOC_PERTURB_ succeeded before?
- Does running qemu-kvm with MALLOC_PERTURB_=204 instead of 1 hide the error? I'd expect that, but if not, then that's a question for either glibc or kvm.
- Does the icon error happen also without using MALLOC_PERTURB_? I feel like it should be unrelated.
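On the "how do the two MALLOC_PERTURB_ settings blend" question: they shouldn't blend at all, since each glibc instance reads the variable from its own process environment. The host-side value reaches only qemu-kvm and its host-side children; anaconda inside the guest sees only what its own systemd unit sets (204 here). A trivial sketch of that per-process scoping (the guest/host separation itself comes from virtualization and is not shown):

```shell
# The variable set for one process is invisible to unrelated processes;
# a guest's processes likewise inherit only the guest's own environment.
MALLOC_PERTURB_=1 sh -c 'echo "the qemu process would see: ${MALLOC_PERTURB_:-unset}"'
sh -c 'echo "an unrelated process sees: ${MALLOC_PERTURB_:-unset}"'
```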

Comment 7 Yihuang Yu 2023-08-14 11:27:32 UTC
A new error is: 'python3: /builddir/build/BUILD/libdnf-0.69.0/libdnf/sack/query.cpp:2316: void libdnf::Query::Impl::apply(): Assertion `m.size == result->getMap()->size' failed'

I will test all scenarios mentioned in comment 6 and summarise the result asap.

Comment 8 Yihuang Yu 2023-08-16 07:42:42 UTC
(In reply to Vladimír Slávik from comment #6)
> Hello, the isys part is misleading, in multiple ways. We handle core dumps
> via isys, so that says nothing. It's a synchronous hook for signals, you can
> see in the first comment traceback that on top you have isys which does the
> dump, then kernel which called it, then libsolv... except kernel delivers
> signals semi-randomly, mostly to main thread, so the rest of the stack could
> be related, or not.
> 
> Looking at the other error with GTK not finding an icon... no idea. We had
> to replace some icons in the past, perhaps these GNOME changes trickled into
> RHEL only now?
> 
> Another thing I note is that you have a KVM guest doing MALLOC_PERTURB_=1
> and inside we run with MALLOC_PERTURB_=204 in our systemd units. No idea how
> this blends.
> 
> If I were to debug this, I'd check:
> - I see no actual mention if running with MALLOC_PERTURB_ succeeded before?
Our automated test cases always add MALLOC_PERTURB_=1, so the answer is yes: runs with MALLOC_PERTURB_ passed before.

> - Does running qemu-kvm with MALLOC_PERTURB_=204 instead of 1 hide the
> error? I'd expect that, but if not, then that's a question for either glibc
> or kvm.
I tried MALLOC_PERTURB_=204, which can also trigger this problem.

> - Does the icon error happen also without using MALLOC_PERTURB_? I feel like
> it should be unrelated.
In the latest 100-run test with RHEL-9.3.0-20230814.52-aarch64-dvd1.iso and without MALLOC_PERTURB_, I could not reproduce the icon error, but other errors were triggered; the failure ratio was 10/100. That said, I agree with you that MALLOC_PERTURB_ is unrelated.

Comment 9 Jan Stodola 2023-08-17 09:12:34 UTC
(In reply to Yihuang Yu from comment #7)
> A new error is: 'python3:
> /builddir/build/BUILD/libdnf-0.69.0/libdnf/sack/query.cpp:2316: void
> libdnf::Query::Impl::apply(): Assertion `m.size == result->getMap()->size'
> failed'

Is this an error that happened in the installer? I'm confused by the path "/builddir/build/BUILD/libdnf-0.69.0/..." which doesn't look like something that exists in the installer.

Comment 10 Yihuang Yu 2023-08-17 09:35:01 UTC
(In reply to Jan Stodola from comment #9)
> (In reply to Yihuang Yu from comment #7)
> > A new error is: 'python3:
> > /builddir/build/BUILD/libdnf-0.69.0/libdnf/sack/query.cpp:2316: void
> > libdnf::Query::Impl::apply(): Assertion `m.size == result->getMap()->size'
> > failed'
> 
> Is this an error that happened in the installer? I'm confused by the path
> "/builddir/build/BUILD/libdnf-0.69.0/..." which doesn't look like something
> that exists in the installer.

Yes, it's from the installer. The full context is:

2023-08-14 23:28:13:  * shell is available on TTY2
2023-08-14 23:28:13:  * when reporting a bug add logs from /tmp as separate text/plain attachments
2023-08-14 23:28:13: python3: /builddir/build/BUILD/libdnf-0.69.0/libdnf/sack/query.cpp:2316: void libdnf::Query::Impl::apply(): Assertion `m.size == result->getMap()->size' failed.
2023-08-14 23:28:09: Startup finished in 190ms.

...
2023-08-14 23:28:13: Pane is dead (signal 6, Tue Aug 15 03:28:13 2023)

It seems that libdnf-related errors show up in several of the failure situations. For example, another one is:
2023-08-14 22:59:32: Anaconda received signal 11!.
2023-08-14 22:59:32: /usr/lib64/python3.9/site-packages/pyanaconda/_isys.so(+0x10b8)[0xffff9ffed0b8]
2023-08-14 22:59:32: linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffffb2b727e0]
2023-08-14 22:59:32: /lib64/libc.so.6(+0x2871c)[0xffffb29af71c]
2023-08-14 22:59:32: /lib64/libsolv.so.1(repodata_add_dirstr+0x84)[0xffff9d4c8138]
2023-08-14 22:59:32: /lib64/libsolvext.so.1(+0x2a284)[0xffff9d463284]
/lib64/libxml2.so.2(xmlParseChunk+0x214)[0xffffa34c89d4]ffa359e0bc]
2023-08-14 22:59:32: /lib64/libsolvext.so.1(+0x2e430)[0xffff9d467430]
2023-08-14 22:59:32: /lib64/libsolvext.so.1(repo_add_rpmmd+0xe4)[0xffff9d45d3d4]
2023-08-14 22:59:32: /lib64/libdnf.so.2(dnf_sack_load_repo+0x1cc)[0xffff9d5ca2b0]
2023-08-14 22:59:32: /usr/lib64/python3.9/site-packages/hawkey/_hawkey.so(+0x25e9c)[0xffff9cd25e9c]
2023-08-14 22:59:32: /lib64/libpython3.9.so.1.0(+0xd4860)[0xffffb26d4860]
2023-08-14 22:59:32: /lib64/libpython3.9.so.1.0(_PyObject_Call+0x80)[0xffffb26d3260]
2023-08-14 22:59:32: /lib64/libpython3.9.so.1.0(_PyEval_EvalFrameDefault+0x5234)[0xffffb26beed4]
2023-08-14 22:59:32: /lib64/libpython3.9.so.1.0(+0xc8c70)[0xffffb26c8c70]
2023-08-14 22:59:32: /lib64/libpython3.9.so.1.0(_PyEval_EvalFrameDefault+0x77c)[0xffffb26ba41c]
2023-08-14 22:59:32: /lib64/libpython3.9.so.1.0(+0xb8cb0)[0xffffb26b8cb0]
/lib64/libpython3.9.so.1.0(_PyEval_EvalFrameDefault+0x77c)[0xffffb26ba41c]
2023-08-14 22:59:36: /lib64/libpython3.9.so.1.0(+0xb8cb0)[0xffffb26b8cb0]
2023-08-14 22:59:36: /lib64/libpython3.9.so.1.0(_PyFunction_Vectorcall+0x168)[0xffffb26c8a08]
2023-08-14 22:59:36: /lib64/libpython3.9.so.1.0(+0xd2e20)[0xffffb26d2e20]
2023-08-14 22:59:36: [New LWP 2888]
2023-08-14 22:59:36: [New LWP 2889]
2023-08-14 22:59:36: [New LWP 3101]
2023-08-14 22:59:36: [New LWP 3287]
2023-08-14 22:59:36: [New LWP 3381]
2023-08-14 22:59:36: [New LWP 3383]
2023-08-14 22:59:36: [New LWP 3384]
2023-08-14 22:59:36: [New LWP 3385]
2023-08-14 22:59:36: [New LWP 3392]
2023-08-14 22:59:36: [New LWP 3393]
2023-08-14 22:59:36: [New LWP 3397]
2023-08-14 22:59:36: [New LWP 3398]
2023-08-14 22:59:36: [Thread debugging using libthread_db enabled]
2023-08-14 22:59:36: Using host libthread_db library "/lib64/libthread_db.so.1".
2023-08-14 22:59:36: 0x0000ffffb2a667b8 in poll () from /lib64/libc.so.6
2023-08-14 22:59:36: Saved corefile /tmp/anaconda.core.2860
2023-08-14 22:59:36: [Inferior 1 (process 2860) detached]
2023-08-14 22:59:36:
2023-08-14 22:59:37: Pane is dead (status 1, Tue Aug 15 02:59:36 2023)

Comment 11 Jiri Konecny 2023-08-31 09:32:08 UTC
Switching to the DNF component because this coredump originates in libdnf.

Comment 12 Marek Blaha 2023-09-14 07:51:57 UTC
Would it be possible to attach the complete coredump here? Based on the available information, it appears to be a problem related to parsing XML repository metadata (though it's just a long-shot guess). Is it also possible to obtain the metadata of available repositories? Additionally, information about the versions of the components used, such as dnf, libdnf, and libsolv, might be helpful.
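A hedged sketch of how those component versions could be gathered from the installer shell (assuming rpm is available in the anaconda environment; the fallback message is just illustrative):

```shell
# Query the versions Marek asked about (dnf, libdnf, libsolv, plus anaconda);
# degrades gracefully when run on a box where rpm or the packages are absent.
for pkg in dnf libdnf libsolv anaconda; do
    rpm -q "$pkg" 2>/dev/null || echo "$pkg: version not determined here"
done
```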

Comment 15 Jan Stodola 2023-09-15 07:40:46 UTC
BTW, it seems there is another report from CentOS Stream 9/x86_64, very likely hitting the same problem. See https://issues.redhat.com/browse/RHEL-2997.

Comment 16 Yihuang Yu 2023-09-15 08:48:20 UTC
(In reply to Jan Stodola from comment #15)
> BTW, it seems there is another report from CentOS Stream 9/x86_64, very
> likely hitting the same problem. See
> https://issues.redhat.com/browse/RHEL-2997.

Oh yes! Thanks, Jan, for the info. I checked my ks file; it also has a comment in the package list.
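For reference, the suspected trigger from RHEL-2997 is a comment line inside the kickstart %packages section. A minimal illustrative fragment of that shape (the group and package names are placeholders, not the actual ks.cfg used in these runs):

```
%packages
@^graphical-server-environment
# a comment line inside the package list, like this one, is the suspected trigger
vim-enhanced
%end
```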

Comment 17 RHEL Program Management 2023-09-21 17:32:41 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 18 RHEL Program Management 2023-09-21 17:45:56 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.