Description of problem:
remote-viewer crashes a bit more than once per day. When it happens, it's always when I'm moving my host mouse cursor across the remote-viewer window border.

Version-Release number of selected component (if applicable):
virt-viewer 8.0.0, libxcb 1.13.1, qemu 4.1.0, spice 0.14.2, and spice-gtk 0.37. No qemu guest agent. When this started a few months ago, I was on virt-viewer 8.0.0, libxcb 1.13 (not .1), qemu 4.0.0, spice 0.14.0, and spice-gtk 0.37. virt-viewer 8.0.0 was the first version I installed on the headless machine, but I tried downgrading to 7.0 and was still able to replicate the crash.

How reproducible:
100% (within 1-60 seconds of trying to aggravate it by repeatedly going in and out of the remote-viewer window); otherwise about once a day.

Steps to Reproduce:
1. ssh to the other/headless machine sitting 2 feet away, with X11 forwarding enabled
2. Run remote-viewer to connect via spice to a QEMU VM (remote-viewer spice+unix:///<socket file>)
3. Wait for about a day and be unlucky when going in or out of the remote-viewer window; or go in and out as quickly as possible for 1-60 seconds

Actual results:
remote-viewer crashes showing:

[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
remote-viewer: xcb_io.c:263: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
Aborted (core dumped)

journalctl shows the core was dumped, visible here: http://ix.io/1KpK

Expected results:
remote-viewer does not crash.

Additional info:
I am reporting this against virt-viewer because I haven't been able to replicate it over ssh with X11 forwarding with other GUI programs. This happens whether the guest VM is running in a tty without a mouse, a tty with gpm (text-mode mouse), or with an X server.
After writing a script to repeatedly move the mouse up (xdotool mousemove_relative -- -2 -10) 30 times and down (xdotool mousemove_relative 2 10) 30 times, I verified it would still reproduce the crash. Then I tried letting it run for 10 minutes with another window active but not even covering the remote-viewer window, and it failed to crash. The guest VM isn't using the qemu guest agent.

I'm thinking it's probably related to latency. The actual system I'm physically using and the headless one I'm ssh'ing into are 2 feet apart. They have a direct InfiniBand connection (no switches involved) which provides ultra-low latency and high bandwidth. That's usually what I connect through (using IP over InfiniBand). But I've replicated this connecting via ssh over gigabit ethernet.

I run most of my VMs on the headless system with remote-viewer over ssh with X11 trusted forwarding. But I created a VM on the system I'm physically using, and am unable to replicate the crash there. I temporarily installed an X server on the typically headless system and ran remote-viewer directly on it, but was unable to replicate the crash. So it definitely seems limited to being run over ssh with X11 trusted forwarding. I'll mention the two systems are identical (hardware- and software-version wise), except the headless machine only has its onboard video rather than a Radeon Vega 64. To be clear, running remote-viewer on the system I'm physically using against a VM on that same system does not crash. It's just the specific remote-viewer that crashes, not the VM itself, and not other remote-viewers running within the same ssh session.

I tried using xtrace (the one that sets up a proxy X server to trace client/server communications, not the one included with glibc), but even over X11 forwarding it prevents being able to replicate the crash. Guessing it introduces a timing change that prevents it.
I went back and let the xdotool-based mouse movement run for an extremely long time, and actually reproduced a crash under xtrace. Perhaps it just changes the timings enough to make the crash less likely. The entire xtrace log is 172MB and is available upon request, but the last 1,000 lines (315K) are viewable here: http://ix.io/1XxD If you search for "xcb" in the linked partial log, you'll come to line 926/1000, which is the original error I posted.

----------

Also, in case it helps, the qemu command line I'm using is:

/usr/bin/qemu-system-x86_64 \
  -name crash,process=qemu:crash \
  -no-user-config \
  -nodefaults \
  -nographic \
  -uuid 4a30c830-80d5-4b88-a79f-e6ad4e44a7fe \
  -pidfile /tmp/vm_crash.pid \
  -machine q35,accel=kvm,vmport=off,dump-guest-core=off \
  -cpu SandyBridge-IBRS \
  -smp cpus=4,cores=2,threads=1,sockets=2 \
  -m 4G \
  -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
  -drive if=pflash,format=raw,readonly,file=/var/qemu/efivars/vm_crash.fd \
  -monitor telnet:localhost:8000,server,nowait,nodelay \
  -spice unix,addr=/tmp/spice.crash.sock,disable-ticketing \
  -device ioh3420,id=pcie.1,bus=pcie.0,slot=0 \
  -device qxl-vga,bus=pcie.0,addr=2,ram_size_mb=64,vram_size_mb=8,vgamem_mb=16,max_outputs=1 \
  -usbdevice tablet \
  -netdev bridge,id=network0,br=br0 \
  -device virtio-net-pci,netdev=network0,mac=94:f9:7b:a9:15:a4,bus=pcie.0,addr=3 \
  -device virtio-scsi-pci,id=scsi1 \
  -drive driver=raw,node-name=hd0,file=/dev/newLvm/vm_crash,if=none,discard=unmap,cache=none,aio=threads \
  -device scsi-hd,drive=hd0,bootindex=1
Found this bug by googling 'spice-gtk xcb_xlib_threads_sequence_lost'. Thanks for the detailed report!

FWIW we have had a lot of crashes in virt-manager too that may be related; see all the dupes here: https://bugzilla.redhat.com/show_bug.cgi?id=1756065 And another bug that I am actively working with the reporter to try and identify, from the virt-manager perspective, where the crash always happens on VM window interaction: https://bugzilla.redhat.com/show_bug.cgi?id=1792576

FWIW virt-viewer doesn't have any native usage of threads AFAICT, so if the app is crashing like this then it is something lower level than virt-viewer; spice-gtk is my guess.

Can you still reproduce easily with your current setup? Can you verify what distro and package versions you can still reproduce with?

It would help to get a gdb backtrace. If you are on Fedora, you can do:

sudo dnf debuginfo-install virt-viewer
gdb --eval-command=run --args virt-viewer [insert your args]

Reproduce the crash (virt-viewer will freeze), then go back to the gdb terminal and run:

thread apply all bt

And attach the whole result here.
Note, I see now that you have the backtrace in journalctl, but the lack of debuginfo makes it not too useful, so if you can reproduce in gdb that will help. gdb could make it harder to reproduce, though, so maybe just installing the debuginfo packages and reproducing as normal will give a more useful backtrace in the logs.
Created attachment 1655204 [details]
log that was at http://ix.io/1XxD, just to have everything in one place
A gdb or stack trace may not be that interesting if the other (non-main) thread has already done its job and the main one (as almost all reports show) reports the unknown sequence. From the partial xtrace attached here, it seems that some focus in/out events were happening, and also that some images were being freed, which may suggest that the mouse movements were close to the viewer border.
I finally got an easy way to reproduce. After compiling xtrace from source (from the Debian sources) and installing it, simply run:

$ xtrace -o trace -n strace -f remote-viewer spice://localhost:5900

and move the mouse back and forth between the area above the remote-viewer window and the inside (guest part). After repeating for a while (maybe 20/30 times maximum) the crash happens. I can see, just before the xcb error, another thread doing something. I can see that the thread is doing something with dbus, but I cannot understand how this affects xcb (it interacts with a different file descriptor). I just picked a random VM (currently Windows 7) on a Fedora 30 client/host. I can confirm this happens with both stock and master spice-gtk. Not saying that this should be reliable for everyone; probably I'm just lucky that it happens so reliably on my machine (and I'm trying to understand it before a restart). I tried compiling out the spice integration code in spice-gtk (the only part directly using dbus) but it didn't help.
It's a bug in libX11, specifically poll_for_response. The code (around line 273 of xcb_io.c) is:

static xcb_generic_reply_t *poll_for_response(Display *dpy)
{
    void *response;
    xcb_generic_error_t *error;
    PendingRequest *req;
    while(!(response = poll_for_event(dpy, False)) &&
          (req = dpy->xcb->pending_requests) &&
          !req->reply_waiter)
    {
        uint64_t request;

        if(!xcb_poll_for_reply64(dpy->xcb->connection, req->sequence,
                                 &response, &error)) {
            /* xcb_poll_for_reply64 may have read events even if
             * there is no reply. */
            response = poll_for_event(dpy, True);
            break;
        }

        request = X_DPY_GET_REQUEST(dpy);
        if(XLIB_SEQUENCE_COMPARE(req->sequence, >, request))
        {
            throw_thread_fail_assert("Unknown sequence number "
                                     "while awaiting reply",
                                     xcb_xlib_threads_sequence_lost);
        }
        X_DPY_SET_LAST_REQUEST_READ(dpy, req->sequence);
        if(response)
            break;
        dequeue_pending_request(dpy, req);
        if(error)
            return (xcb_generic_reply_t *) error;
    }
    return response;
}

Is it possible that, when poll_for_event is called, there are no events (still to arrive) but there are pending_requests (these are added when requests are sent without waiting for a reply from the server)? Then, when xcb_poll_for_reply64 is called, replies and events are read, so last_request_read is set to a request sequence which is higher than that of the event skipped in this function. Afterwards, when poll_for_event is called, the event is fetched and its sequence number is passed to the widen function, which will add 0x100000000 (on 64 bit); this causes XLIB_SEQUENCE_COMPARE(event_sequence, >, request) to be triggered. So this has nothing to do with threads. Too late today to post a proper bug to libX11.
*** Bug 1792576 has been marked as a duplicate of this bug. ***
Created attachment 1656189 [details]
Proposed fix

See https://gitlab.freedesktop.org/xorg/lib/libx11/merge_requests/34
*** Bug 1760388 has been marked as a duplicate of this bug. ***
*** Bug 1756065 has been marked as a duplicate of this bug. ***
*** Bug 1762284 has been marked as a duplicate of this bug. ***
*** Bug 1739166 has been marked as a duplicate of this bug. ***
*** Bug 1771504 has been marked as a duplicate of this bug. ***
*** Bug 1726916 has been marked as a duplicate of this bug. ***
*** Bug 1713723 has been marked as a duplicate of this bug. ***
*** Bug 1708288 has been marked as a duplicate of this bug. ***
*** Bug 1758599 has been marked as a duplicate of this bug. ***
*** Bug 1714382 has been marked as a duplicate of this bug. ***
*** Bug 1729947 has been marked as a duplicate of this bug. ***
*** Bug 1779572 has been marked as a duplicate of this bug. ***
*** Bug 1784722 has been marked as a duplicate of this bug. ***
*** Bug 1749947 has been marked as a duplicate of this bug. ***
*** Bug 1697019 has been marked as a duplicate of this bug. ***
*** Bug 1701704 has been marked as a duplicate of this bug. ***
*** Bug 1705076 has been marked as a duplicate of this bug. ***
*** Bug 1730563 has been marked as a duplicate of this bug. ***
*** Bug 1750750 has been marked as a duplicate of this bug. ***
*** Bug 1752198 has been marked as a duplicate of this bug. ***
*** Bug 1754234 has been marked as a duplicate of this bug. ***
*** Bug 1761635 has been marked as a duplicate of this bug. ***
*** Bug 1762586 has been marked as a duplicate of this bug. ***
*** Bug 1771464 has been marked as a duplicate of this bug. ***
*** Bug 1774779 has been marked as a duplicate of this bug. ***
*** Bug 1776884 has been marked as a duplicate of this bug. ***
*** Bug 1785218 has been marked as a duplicate of this bug. ***
*** Bug 1787167 has been marked as a duplicate of this bug. ***
*** Bug 1789770 has been marked as a duplicate of this bug. ***
*** Bug 1793194 has been marked as a duplicate of this bug. ***
*** Bug 1794196 has been marked as a duplicate of this bug. ***
*** Bug 1761641 has been marked as a duplicate of this bug. ***
*** Bug 1780483 has been marked as a duplicate of this bug. ***
*** Bug 1726379 has been marked as a duplicate of this bug. ***
*** Bug 1722168 has been marked as a duplicate of this bug. ***
*** Bug 1746891 has been marked as a duplicate of this bug. ***
*** Bug 1731643 has been marked as a duplicate of this bug. ***
*** Bug 1788740 has been marked as a duplicate of this bug. ***
*** Bug 1773218 has been marked as a duplicate of this bug. ***
*** Bug 1787161 has been marked as a duplicate of this bug. ***
*** Bug 1767207 has been marked as a duplicate of this bug. ***
*** Bug 1701754 has been marked as a duplicate of this bug. ***
*** Bug 1715594 has been marked as a duplicate of this bug. ***
*** Bug 1756565 has been marked as a duplicate of this bug. ***
*** Bug 1778447 has been marked as a duplicate of this bug. ***
*** Bug 1739343 has been marked as a duplicate of this bug. ***
*** Bug 1772640 has been marked as a duplicate of this bug. ***
*** Bug 1775292 has been marked as a duplicate of this bug. ***
*** Bug 1785854 has been marked as a duplicate of this bug. ***
*** Bug 1754173 has been marked as a duplicate of this bug. ***
*** Bug 1708022 has been marked as a duplicate of this bug. ***
*** Bug 1757611 has been marked as a duplicate of this bug. ***
*** Bug 1765321 has been marked as a duplicate of this bug. ***
*** Bug 1783767 has been marked as a duplicate of this bug. ***
*** Bug 1787214 has been marked as a duplicate of this bug. ***
*** Bug 1731244 has been marked as a duplicate of this bug. ***
*** Bug 1746702 has been marked as a duplicate of this bug. ***
*** Bug 1796661 has been marked as a duplicate of this bug. ***
Similar problem has been detected:

Again and again for more than two years - virt-manager crashed when working with virtual machines! THIS IS HAPPENING FOR MORE THAN TWO YEARS YET NOTHING IS HAPPENING. GOING TO SWITCH TO A DIFFERENT DISTRIBUTION BECAUSE I AM TIRED OF THIS. SEEMS LIKE NO ONE IN THE FEDORA TEAM CARES ANYMORE!

reporter: libreport-2.11.3
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/session-2.scope
cmdline: /usr/bin/python3 /usr/share/virt-manager/virt-manager
crash_function: poll_for_event
executable: /usr/bin/python3.7
journald_cursor: s=92a4313359c6417bbf50c2b1be7f4486;i=493be;b=fed046066f5e456dbf100fe4b408c898;m=638447dbd;t=59e341621b3a9;x=48f532ac596f51ca
kernel: 5.4.17-200.fc31.x86_64
package: virt-manager-2.2.1-2.fc31
reason: python3.7 killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
Created attachment 1662076 [details]
File: backtrace
This bug appears to have been reported against 'rawhide' during the Fedora 32 development cycle. Changing version to 32.
*** Bug 1806139 has been marked as a duplicate of this bug. ***
Similar problem has been detected:

I don't need to fill out this report again. It has been taking more than 2 years already and it seems that no one in Fedora cares. Honestly - this bug affected 4 FU***ING releases, so why don't you fix it already? And I don't care if this comment is public, because I would like the community to know how you treat them.

reporter: libreport-2.12.0
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/session-2.scope
cmdline: /usr/bin/python3 /usr/share/virt-manager/virt-manager
crash_function: poll_for_event
executable: /usr/bin/python3.7
journald_cursor: s=68bcf42be49640358b3ed75f2e3f9d27;i=45327;b=e39a81b230984e58bfd28de21a4c81ee;m=c544e335ff;t=59fdb95ff27d0;x=6dfed1d0edebce06
kernel: 5.4.20-200.fc31.x86_64
package: virt-manager-2.2.1-2.fc31
reason: python3.7 killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
Sent a new fix for this bug to libX11; see the updated https://gitlab.freedesktop.org/xorg/lib/libx11/-/merge_requests/34. I also found an easy way to reproduce in no time by adding a specific sleep; hope this helps the patch get accepted.
Similar problem has been detected:

Just open a VM display console (spice) and switch windows with Alt-Tab.

reporter: libreport-2.12.0
backtrace_rating: 4
cmdline: /usr/bin/python3 /usr/share/virt-manager/virt-manager
crash_function: poll_for_event
executable: /usr/bin/python3.7
journald_cursor: s=e3be113e802b42ad9d7ac893e0a10148;i=1215552;b=4a6ded33d054465fa0a362142d5430e6;m=ce0ddec26e;t=5a0a3c770e78d;x=41420e4986514290
kernel: 5.5.6-100.fc30.x86_64
package: virt-manager-2.1.0-2.fc30
reason: python3.7 killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
Similar problem has been detected:

Upon logging into the MATE environment, this error happens every time.

reporter: libreport-2.12.0
backtrace_rating: 3
cgroup: 0::/user.slice/user-1000.slice/session-3.scope
cmdline: /usr/bin/python3 /usr/bin/dnfdragora-updater
crash_function: poll_for_event
executable: /usr/bin/python3.7
journald_cursor: s=4411486244ce4396bdcfa4e6a71b76e8;i=d14;b=c777204659094a508b34bdc1c9d4f223;m=249c50b;t=5a326ba223a13;x=59560d3d5f97d18e
kernel: 5.5.15-200.fc31.x86_64
package: dnfdragora-updater-1.1.1-3.fc31
reason: python3.7 killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
*** Bug 1810812 has been marked as a duplicate of this bug. ***
*** Bug 1811052 has been marked as a duplicate of this bug. ***
*** Bug 1812273 has been marked as a duplicate of this bug. ***
*** Bug 1813463 has been marked as a duplicate of this bug. ***
*** Bug 1814675 has been marked as a duplicate of this bug. ***
*** Bug 1815979 has been marked as a duplicate of this bug. ***
*** Bug 1816781 has been marked as a duplicate of this bug. ***
*** Bug 1819230 has been marked as a duplicate of this bug. ***
*** Bug 1811436 has been marked as a duplicate of this bug. ***
*** Bug 1808954 has been marked as a duplicate of this bug. ***
*** Bug 1811780 has been marked as a duplicate of this bug. ***
*** Bug 1818542 has been marked as a duplicate of this bug. ***
*** Bug 1812260 has been marked as a duplicate of this bug. ***
*** Bug 1808158 has been marked as a duplicate of this bug. ***
*** Bug 1826461 has been marked as a duplicate of this bug. ***
Similar problem has been detected:

This error happened shortly after logging in. No special action was required.

reporter: libreport-2.12.0
backtrace_rating: 3
cgroup: 0::/user.slice/user-1000.slice/session-2.scope
cmdline: /usr/bin/python3 /usr/bin/dnfdragora-updater
crash_function: poll_for_event
executable: /usr/bin/python3.7
journald_cursor: s=398d63cbfa144339b85cd5e37d4eb5ff;i=52ecaa;b=79b3ede045e24f50af7fec2d4f169de8;m=219af2c;t=5a4940ab96dd6;x=ffd90afd02932d4d
kernel: 5.6.7-200.fc31.x86_64
package: dnfdragora-updater-1.1.1-3.fc31
reason: python3.7 killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
Similar problem has been detected:

Potentially this was related to the GNOME session and/or gnome-shell being messed up.

reporter: libreport-2.12.0
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/user/gnome-launched-org.mageia.dnfdragora-updater.desktop-1699227.scope
cmdline: /usr/bin/python3 /usr/bin/dnfdragora-updater
crash_function: poll_for_event
executable: /usr/bin/python3.7
journald_cursor: s=0a17bbef25cc4e9c96aa22f95c3f6b79;i=610a2f;b=9e89a5b6c6054786b6d2d49315f27b4f;m=76481565e5;t=5a64f94935e99;x=11dbbe268bf2665d
kernel: 5.6.11-200.fc31.x86_64
package: dnfdragora-updater-1.1.1-3.fc31
reason: python3.7 killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
*** Bug 1839361 has been marked as a duplicate of this bug. ***
*** Bug 1836515 has been marked as a duplicate of this bug. ***
*** Bug 1833627 has been marked as a duplicate of this bug. ***
*** Bug 1839140 has been marked as a duplicate of this bug. ***
*** Bug 1836527 has been marked as a duplicate of this bug. ***
*** Bug 1834815 has been marked as a duplicate of this bug. ***
*** Bug 1836643 has been marked as a duplicate of this bug. ***
*** Bug 1843246 has been marked as a duplicate of this bug. ***
Similar problem has been detected:

I've rebooted the computer for a newer kernel (also after dnf upgrade), and upon login after reboot, dnfdragora apparently crashed.

reporter: libreport-2.12.0
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/user/gnome-launched-org.mageia.dnfdragora-updater.desktop-15241.scope
cmdline: /usr/bin/python3 /usr/bin/dnfdragora-updater
crash_function: poll_for_event
executable: /usr/bin/python3.7
journald_cursor: s=fcd7f11f2bc245ba9210851f77d3e1ca;i=80c25d;b=1aad101ef57e4b42af29ef0092c2f05f;m=86f1443;t=5a75b15de2b8d;x=339e39c7b55cfe4b
kernel: 5.6.15-200.fc31.x86_64
package: dnfdragora-updater-1.1.1-3.fc31
reason: python3.7 killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
*** Bug 1844618 has been marked as a duplicate of this bug. ***
*** Bug 1840843 has been marked as a duplicate of this bug. ***
*** Bug 1845038 has been marked as a duplicate of this bug. ***
*** Bug 1845754 has been marked as a duplicate of this bug. ***
*** Bug 1840288 has been marked as a duplicate of this bug. ***
*** Bug 1886190 has been marked as a duplicate of this bug. ***
*** Bug 1882946 has been marked as a duplicate of this bug. ***
*** Bug 1883364 has been marked as a duplicate of this bug. ***
*** Bug 1886831 has been marked as a duplicate of this bug. ***
*** Bug 1887564 has been marked as a duplicate of this bug. ***
*** Bug 1887378 has been marked as a duplicate of this bug. ***
Similar problem has been detected:

Selected the button to delete duplicate messages (in the IMAP Sent folder). A 'Notice' box displayed that three duplicate messages were deleted. Claws Mail then froze, with the Notice box remaining on screen, then Claws Mail eventually crashed.

reporter: libreport-2.13.1
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/session-3.scope
cmdline: claws-mail
crash_function: poll_for_event
executable: /usr/bin/claws-mail
journald_cursor: s=c8d47d9e641f498ba0231ca36962bbc7;i=171e1;b=a89d86c51c7748618b4c6383ecd97052;m=47572f500;t=5b18fecaf1048;x=ee9f2e5a625f0504
kernel: 5.8.14-200.fc32.x86_64
package: claws-mail-3.17.6-1.fc32
reason: claws-mail killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
Similar problem has been detected:

Was composing an e-mail. Claws Mail unexpectedly crashed.

reporter: libreport-2.13.1
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/session-3.scope
cmdline: claws-mail
crash_function: poll_for_event
executable: /usr/bin/claws-mail
journald_cursor: s=c8d47d9e641f498ba0231ca36962bbc7;i=1727b;b=a89d86c51c7748618b4c6383ecd97052;m=880f4a955;t=5b193f830c49d;x=154212ed5d735d28
kernel: 5.8.14-200.fc32.x86_64
package: claws-mail-3.17.6-1.fc32
reason: claws-mail killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
*** Bug 1889008 has been marked as a duplicate of this bug. ***
*** Bug 1889092 has been marked as a duplicate of this bug. ***
*** Bug 1889148 has been marked as a duplicate of this bug. ***
Similar problem has been detected:

Was replying to an e-mail, proceeded to copy and paste some information. Upon attempting to paste the information into the e-mail, Claws Mail crashed.

reporter: libreport-2.13.1
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/session-1.scope
cmdline: claws-mail
crash_function: poll_for_event
executable: /usr/bin/claws-mail
journald_cursor: s=e1a8e4f74c0d4155a0411f9c4098d2e5;i=4b2bc;b=7d12f442ca60431b9f264e9e90e7228e;m=6d8d89cf;t=5b231d274193c;x=9aedb74f119cd119
kernel: 5.8.15-201.fc32.x86_64
package: claws-mail-3.17.7-1.fc32
reason: claws-mail killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
Similar problem has been detected:

The notification plugin indicated there was new mail. Maximized Claws Mail, to find it completely unresponsive. It eventually crashed.

reporter: libreport-2.13.1
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/session-1.scope
cmdline: claws-mail
crash_function: poll_for_event
executable: /usr/bin/claws-mail
journald_cursor: s=e1a8e4f74c0d4155a0411f9c4098d2e5;i=51ae7;b=afff16d263c64ad4bf2b5d06cdbc99aa;m=b05dc700;t=5b25597b0c986;x=266d622a56aeae7d
kernel: 5.8.15-201.fc32.x86_64
package: claws-mail-3.17.7-1.fc32
reason: claws-mail killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
Similar problem has been detected:

Was typing a reply to an e-mail when Claws Mail unexpectedly crashed.

reporter: libreport-2.13.1
backtrace_rating: 4
cgroup: 0::/user.slice/user-1000.slice/session-1.scope
cmdline: claws-mail
crash_function: poll_for_event
executable: /usr/bin/claws-mail
journald_cursor: s=e1a8e4f74c0d4155a0411f9c4098d2e5;i=5649f;b=2b5c918fb3fa44049d7a083fa2873a84;m=a14273d0;t=5b25c7b6cc820;x=884ce4170120c1f7
kernel: 5.8.15-201.fc32.x86_64
package: claws-mail-3.17.7-1.fc32
reason: claws-mail killed by SIGABRT
rootdir: /
runlevel: N 5
type: CCpp
uid: 1000
xsession_errors:
As an end-user of Fedora, would it be asking too much for the patch/fix that was submitted upstream to X11 (mentioned in Comment 9) to be tested and incorporated into the Fedora packages, without waiting for upstream? Thank you.
*** Bug 1891233 has been marked as a duplicate of this bug. ***
*** Bug 1892208 has been marked as a duplicate of this bug. ***
FEDORA-2020-3b7a70c0ff has been submitted as an update to Fedora 33. https://bodhi.fedoraproject.org/updates/FEDORA-2020-3b7a70c0ff
FEDORA-2020-3b7a70c0ff has been pushed to the Fedora 33 testing repository. In a short time you'll be able to install the update with the following command:

`sudo dnf upgrade --enablerepo=updates-testing --advisory=FEDORA-2020-3b7a70c0ff`

You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2020-3b7a70c0ff

See also https://fedoraproject.org/wiki/QA:Updates_Testing for more information on how to test updates.
FEDORA-2020-3b7a70c0ff has been pushed to the Fedora 33 stable repository. If the problem still persists, please make note of it in this bug report.