Description of problem:
I think it was OOM'd.

Version-Release number of selected component:
gdb-headless-16.3-1.fc42

Additional info:
reporter: libreport-2.17.15
type: CCpp
reason: gdb killed by SIGABRT
journald_cursor: s=c1f108f1ef4e48808a377c0a73f168fc;i=86533f;b=9a2a7e1fe91e4e5bb9e946b734c27a2b;m=22af9f15a;t=6396e05149fe3;x=231d4b614330ca59
executable: /usr/libexec/gdb
cmdline: /usr/libexec/gdb -batch -ex $'set debuginfod enabled on' "" -ex $'file /usr/bin/krename' -ex $'core-file ./coredump' -ex $'thread apply all -ascending backtrace full 1024' -ex $'info sharedlib' -ex $'print (char*)__abort_msg' -ex $'print (char*)__glib_assert_msg' -ex $'info all-registers' -ex disassemble
cgroup: 0::/user.slice/user-1000.slice/user/app.slice/app-org.freedesktop.GnomeAbrt
rootdir: /
uid: 1000
kernel: 6.15.4-200.fc42.x86_64
package: gdb-headless-16.3-1.fc42
runlevel: N 5
backtrace_rating: 4
crash_function: handle_fatal_signal
comment: I think it was OOM'd.

Truncated backtrace:
Thread no. 1 (33 frames)
#3 handle_fatal_signal at ../../gdb/event-top.c:1039
#5 __syscall_cancel_arch at ../sysdeps/unix/sysv/linux/x86_64/syscall_cancel.S:56
#6 __internal_syscall_cancel at cancellation.c:49
#7 __syscall_cancel at cancellation.c:75
#8 __poll at ../sysdeps/unix/sysv/linux/poll.c:29
#9 poll at /usr/include/bits/poll2.h:44
#10 Curl_poll at ../../lib/select.c:313
#11 multi_wait.part.0.lto_priv.0 at ../../lib/multi.c:1366
#12 multi_wait at ../../lib/multi.c:1498
#13 curl_multi_wait at ../../lib/multi.c:1499
#14 perform_queries at /usr/src/debug/elfutils-0.193-2.fc42.x86_64/debuginfod/debuginfod-client.c:1065
#15 debuginfod_query_server_by_buildid at /usr/src/debug/elfutils-0.193-2.fc42.x86_64/debuginfod/debuginfod-client.c:2217
#16 debuginfod_debuginfo_query at ../../gdb/debuginfod-support.c:379
#17 debuginfod_find_and_open_separate_symbol_file at ../../gdb/symfile-debug.c:557
#18 objfile::find_and_add_separate_symbol_file at ../../gdb/symfile-debug.c:614
#19 elf_symfile_read_dwarf2 at ../../gdb/elfread.c:1218
#20 elf_symfile_read at ../../gdb/elfread.c:1438
#21 read_symbols at ../../gdb/symfile.c:763
#22 syms_from_objfile_1 at ../../gdb/symfile.c:962
#23 syms_from_objfile at ../../gdb/symfile.c:979
#24 symbol_file_add_with_addrs at ../../gdb/symfile.c:1084
#25 symbol_file_add_from_bfd at ../../gdb/symfile.c:1158
#26 solib_read_symbols at ../../gdb/solib.c:651
#27 solib_add at ../../gdb/solib.c:980
#28 post_create_inferior at ../../gdb/infcmd.c:291
#29 core_target_open at ../../gdb/corelow.c:1154
#30 cmd_func at ../../gdb/cli/cli-decode.c:2748
#31 execute_command at ../../gdb/top.c:570
#32 catch_command_errors at ../../gdb/main.c:508
#33 execute_cmdargs at ../../gdb/main.c:607
#34 captured_main_1 at ../../gdb/main.c:1308
#35 captured_main at ../../gdb/main.c:1333
#36 gdb_main at ../../gdb/main.c:1362

Potential duplicate: bug 2352961
Created attachment 2096494 [details] File: proc_pid_status
Created attachment 2096495 [details] File: maps
Created attachment 2096496 [details] File: limits
Created attachment 2096497 [details] File: environ
Created attachment 2096498 [details] File: open_fds
Created attachment 2096499 [details] File: mountinfo
Created attachment 2096500 [details] File: os_info
Created attachment 2096501 [details] File: cpuinfo
Created attachment 2096502 [details] File: core_backtrace
Created attachment 2096503 [details] File: dso_list
Created attachment 2096504 [details] File: backtrace
This actually occurred twice, but I got a 409 CONFLICT when I tried to submit both reports simultaneously.
Probably not a bug, just a consequence of the OOM killer. NOTABUG?
Hard to say whether it was the OOM killer or a genuine bug. The backtrace shows that GDB was waiting for library debug symbols via debuginfod. We've seen other bug reports in this area, so it could be a legitimate bug. If you see it again, I'd be happy to take a look...
Doesn't the OOM killer use SIGKILL? While GDB here is terminating with SIGABRT. Or have I got that wrong?
(In reply to Andrew Burgess from comment #15)
> Doesn't the OOM killer use SIGKILL? While GDB here is terminating with
> SIGABRT. Or have I got that wrong?

In retrospect, it does: see https://unix.stackexchange.com/questions/172559/receive-signal-before-process-is-being-killed-by-oom-killer-cgroups#comment426399_172559:~:text=From%20the%20Linux%20source%20code%2C%20it%20seems%20that%20it%20sends%20SIGKILL.
Reopening, because I just hit this again at the same time as a recurrence of https://bugzilla.redhat.com/show_bug.cgi?id=2354765#c12, alongside https://bugzilla.redhat.com/show_bug.cgi?id=2375067 (itself a recurrence of another).
Do you happen to have the core file from the crash of /usr/bin/krename? To find it, run "abrt-cli" and note the hex identifier (the left-most field) for the krename crash, then run "abrt-cli info ID" with that identifier. The output of the "abrt-cli info" command will show a path (most likely starting with /var/spool/abrt). Look in that directory; you should hopefully see a file named coredump.zst. Please upload that file to this bug.
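Roughly, the steps would be as follows (the ID and problem directory below are placeholders; use whatever your system reports):

~~~
# List recorded crashes; the left-most field is the hex identifier.
abrt-cli list

# Show details for the krename crash, using the identifier from the listing.
abrt-cli info <ID>

# The "Path" field names the problem directory; the compressed core should be in it.
ls /var/spool/abrt/<problem-directory>/coredump.zst
~~~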
Created attachment 2097402 [details]
`/var/spool/abrt/ccpp-2025-07-08-17:56:06.288601-73839/coredump.zst`

(In reply to Kevin Buettner from comment #18)

Yes, it remains:

> ~~~
> RokeJulianLockhart@Beedell:~$ abrt-cli | grep krename
> b09441b 1x krename 2025-07-08 17:56:06
> RokeJulianLockhart@Beedell:~$ abrt-cli info b09441b
> Id b09441b
> Component krename
> Count 1
> Time 2025-07-08 17:56:06
> Command line krename
> Package krename-1:5.0.2-10.fc42
> User id 1000 (RokeJulianLockhart)
> Path /var/spool/abrt/ccpp-2025-07-08-17:56:06.288601-73839
> Reported to
> ABRT Server https://retrace.fedoraproject.org/faf/reports/bthash/7f2edb71ad79f8e2c0d2286d35e78d86c4b16450
> Bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=2378694
> ~~~

I've attached it. Per `coredumpctl debug`, would /var/lib/systemd/coredump/core.krename.1000.9a2a7e1fe91e4e5bb9e946b734c27a2b.73839.1751993765000000.zst be of use, too?
(In reply to Mr. Beedell, Roke Julian Lockhart (RJLB) from comment #19)
> I've attached it. Per `coredumpctl debug`, would
> /var/lib/systemd/coredump/core.krename.1000.9a2a7e1fe91e4e5bb9e946b734c27a2b.73839.1751993765000000.zst
> be of use, too?

Thanks! FWIW, using the provided core file, I've tried and failed (so far) to reproduce the core dump that you're seeing.

Regarding the other core file, it may well be of use, if you wouldn't mind uploading it...
Created attachment 2097570 [details]
`/var/lib/systemd/coredump/core.krename.1000.9a2a7e1fe91e4e5bb9e946b734c27a2b.73839.1751993765000000.zst`

Hopefully, this is of use.
Hi!

So far we've been unable to reproduce the crash at our end. We're still trying things to see if we can reproduce the failure, but I wonder whether the GDB crash is consistently reproducible at your end?

From the original report, the GDB command line that crashed was:

/usr/libexec/gdb -batch -ex 'set debuginfod enabled on' "" -ex 'file /usr/bin/krename' -ex 'core-file ./coredump' -ex 'thread apply all -ascending backtrace full 1024' -ex 'info sharedlib' -ex 'print (char*)__abort_msg' -ex 'print (char*)__glib_assert_msg' -ex 'info all-registers' -ex 'disassemble'

You'll need to update `./coredump` to point to a valid core file from `krename`. For example, if you decompress `/var/lib/systemd/coredump/core.krename.1000.9a2a7e1fe91e4e5bb9e946b734c27a2b.73839.1751993765000000.zst` and use that, it would be perfect.

If you could give that a try, it would be interesting to see if this crashes or not. If it does crash, capturing all the command line output and attaching it might be useful. But just knowing whether the bug is consistent or not would be a help in itself, I think.
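Concretely, something like this should do it (the decompressed core name and the output log file are just examples):

~~~
# Decompress the systemd-coredump file into a plain core file.
zstd -d core.krename.1000.9a2a7e1fe91e4e5bb9e946b734c27a2b.73839.1751993765000000.zst -o ./coredump

# Re-run the reporting command against it, keeping a copy of all output.
/usr/libexec/gdb -batch \
  -ex 'set debuginfod enabled on' "" \
  -ex 'file /usr/bin/krename' \
  -ex 'core-file ./coredump' \
  -ex 'thread apply all -ascending backtrace full 1024' \
  -ex 'info sharedlib' \
  -ex 'print (char*)__abort_msg' \
  -ex 'print (char*)__glib_assert_msg' \
  -ex 'info all-registers' \
  -ex 'disassemble' 2>&1 | tee gdb-output.txt
~~~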
Created attachment 2098009 [details]
The Output Of Comment 22's `/usr/libexec/gdb` Command

(In reply to Andrew Burgess from comment #22)

GDB doesn't appear to crash, unless I should re-run the `/usr/libexec/gdb`-prefixed command inside `/usr/bin/gdb`. 🤷♂️

Perhaps the cores linked at https://bugzilla.redhat.com/show_bug.cgi?id=2378681#c17 might be of more use…?
I've been trying to reproduce this problem by using 'tc' (traffic control) to introduce various kinds of unreliability in the network connection. So far, though, I haven't been able to reproduce a GDB crash. If I make the network problems extreme enough, I can cause GDB's debuginfod client to time out, but, aside from being slow, it recovers quite nicely. In an extreme case with massive packet loss and latency, I don't think it loaded anything, but once it had finished (without loading anything), it was perfectly happy to attempt to provide a backtrace.

I'm wondering about your remark at the very beginning of the bug description, where you thought it was OOM'd... How much memory does the machine have? Was it slow / laggy at the time of the crash? Was it heavily loaded?
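For reference, the kind of impairment I've been using looks roughly like this (the interface name and the delay/loss figures are illustrative):

~~~
# Add artificial latency and packet loss on the outgoing interface.
sudo tc qdisc add dev eth0 root netem delay 2000ms loss 30%

# ... run the gdb invocation from the original report, with debuginfod enabled ...

# Remove the impairment afterwards.
sudo tc qdisc del dev eth0 root netem
~~~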
> How much memory does the machine have?

I don't recall which of my machines this occurred on, and I don't see a hostname in any of the libreport-attached files. However, both of them have 32 GiB.

> Was it slow / laggy at the time of the crash?

All my machines only become slow when they're close to OOM'ing, so I presume so. Before that point, it wouldn't have been.

> Was it heavily loaded?

I wouldn't be surprised if it were generating some traces with Dr. Konqi and/or GNOME Abrt. However, I don't keep definitive records, and I don't generally attempt to reproduce bugs that cause an OOM whilst debugging others. It would very much explain the problem if I had this time, however.
I'm closing this for now. I've tried to reproduce it in a variety of ways, including introducing network unreliability, as well as a number of OOM scenarios using stress and also adjusting the memory allocated to my test VM. GDB does fail in a number of ways, but I haven't been able to make it crash. I *do* think that there's a problem - I just don't know how to reproduce it.
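For completeness, the OOM experiments were along these lines (a sketch only; the worker count, allocation size, and duration are illustrative):

~~~
# Generate memory pressure with stress(1) while the gdb/debuginfod run is in progress.
stress --vm 4 --vm-bytes 2g --timeout 120s
~~~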
(In reply to Kevin Buettner from comment #26)
> I *do* think that there's a problem - I just don't know how to reproduce it.

I sincerely thank you for your effort. As a last-ditch effort (due to your very good timing), might the `SEGV` linked to at https://bugzilla.redhat.com/show_bug.cgi?id=1575334#c4 be of any relevance? (Specifically, https://github.com/flatpak/flatpak/issues/6326#issuecomment-3319574831.) It's:

> ~~~
> RokeJulianLockhart@Beedell:~$ rpm -qi $(rpm -qf $(command -v gdb))
> Name        : gdb
> Version     : 16.3
> Release     : 1.fc42
> Architecture: x86_64
> Install Date: Mon 09 Jun 2025 20:14:44 BST
> Size        : 466215
> Signature   : RSA/SHA256, Wed 14 May 2025 10:32:15 BST, Key ID c8ac4916105ef944
> Source RPM  : gdb-16.3-1.fc42.src.rpm
> Build Date  : Tue 13 May 2025 20:45:09 BST
> Build Host  : buildhw-x86-01.iad2.fedoraproject.org
> Packager    : Fedora Project
> Vendor      : Fedora Project
> URL         : https://gnu.org/software/gdb/
> Bug URL     : https://bugz.fedoraproject.org/gdb
> ~~~
(In reply to Mr. Beedell, Roke Julian Lockhart (RJLB) from comment #27)
> (In reply to Kevin Buettner from comment #26)
> > I *do* think that there's a problem - I just don't know how to reproduce it.
>
> I sincerely thank you for your effort. As a last-ditch effort (due to your
> very good timing), might the `SEGV` linked to at
> https://bugzilla.redhat.com/show_bug.cgi?id=1575334#c4 be of any relevance?
> (Specifically,
> https://github.com/flatpak/flatpak/issues/6326#issuecomment-3319574831.)

When I compare the backtrace with what I remember of the one from this issue... well, they're very different. This issue showed a potential debuginfod / libcurl problem, whereas the one linked above appears to have gotten a SIGSEGV when attempting to quit. So I doubt that these two matters are related.

If you have a reproducer for the SIGSEGV-while-quitting problem, please file a bug against GDB...
(In reply to Kevin Buettner from comment #28)
> If you have a reproducer for the SIGSEGV-while-quitting problem, please file
> a bug against GDB...

Upstream, or at RHBZ? (Apologies for the digression, but I've had no response elsewhere.)
I was just using the Brave web browser and watching a YouTube video when the crash report happened.

reporter: libreport-2.17.15
type: CCpp
reason: gdb killed by SIGABRT
journald_cursor: s=e8814c4943d646fbbed3112fddb89132;i=cef21;b=1ca9c5a61fb04a82a61cc4c7cb097b04;m=113cd74ca9;t=63fcc1394ed10;x=74538ae64fcbed23
executable: /usr/libexec/gdb
cmdline: /usr/bin/gdb --nw --nx --batch $'--init-eval-command=set debuginfod enabled on' --command=/tmp/drkonqi.MpBvxL --command=/tmp/drkonqi.JNXIUC --core=/tmp/drkonqi-core.ZFLZLY/core /usr/bin/plasmashell
cgroup: 0::/user.slice/user-1000.slice/user/app.slice/drkonqi-coredump-launcher
rootdir: /
uid: 1000
kernel: 6.16.7-200.fc42.x86_64
package: gdb-headless-16.3-1.fc42
runlevel: N 5
backtrace_rating: 4
crash_function: handle_fatal_signal
comment: I was just using the Brave web browser and watching a YouTube video when the crash report happened.
I've registered a NEEDSINFO for the question at https://bugzilla.redhat.com/show_bug.cgi?id=2378681#c29, and for whether to change the status due to the sudden corroboration at https://bugzilla.redhat.com/show_bug.cgi?id=2378681#c30.
(In reply to Mr. Beedell, Roke Julian Lockhart (RJLB) from comment #31)
> I've registered a NEEDSINFO for the question at
> https://bugzilla.redhat.com/show_bug.cgi?id=2378681#c29, and for whether to
> change the status due to the sudden corroboration at
> https://bugzilla.redhat.com/show_bug.cgi?id=2378681#c30.

I've reopened it.

I noticed that Mark in comment 30 is using drkonqi, which, I think, is a KDE crash handler. One of the things I haven't tried yet is running the various experiments under KDE. I'll give that a go...

Kevin
Perhaps, per https://bugs.kde.org/show_bug.cgi?id=489315#c16, https://invent.kde.org/plasma/drkonqi/-/tree/3433c67f134c0f41cc000942d24dc2b169061a20/src/systemd/memorypressure.cpp#L147 may be relevant.
I'm closing this bug since I have no new ideas for how to reproduce it. GDB 17.1 has been released (and should be in testing now) for Fedora 43, and a Fedora 42 release will follow in a week or so. I'm therefore closing this in the hope that the problem has been fixed in GDB 17.1. If it comes up again there, I will take another look at trying to find a reproducer for whatever new bug(s) are filed.