Bug 2235636
| Summary: | python3-file-magic segfaults when concurrently calling magic.detect_from_filename method | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Pavel Moravec <pmoravec> |
| Component: | file | Assignee: | Vincent Mihalkovič <vmihalko> |
| Status: | CLOSED MIGRATED | QA Contact: | qe-baseos-daemons |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 9.2 | CC: | kdudka, lzaoral |
| Target Milestone: | rc | Keywords: | MigratedToJIRA |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-09-20 23:56:01 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
I have to correct one statement: the segfault happens on RHEL8 as well. I can reproduce it inside sos, though not with the standalone reproducer. The backtrace is:
```
Thread 13:
#22 0x00007ffff751d448 in PyEval_EvalFrameEx (throwflag=0,
    f=Frame 0x55555582b958, for file /usr/lib/python3.6/site-packages/magic.py, line 148, in file (self=<Magic(_magic_t=<LP_magic_set at remote 0x7fffe5dc2598>) at remote 0x7fffe5dbffd0>, filename='/var/tmp/sos.2wl7bvuh/cleaner/sosreport-pmoravec-rhel8-8675309-2023-08-29-ektgqjy/usr/lib/systemd/user/gpg-agent-extra.socket'))
    at /usr/src/debug/python3-3.6.8-47.el8.x86_64/Python/ceval.c:752
..
#27 0x00007ffff751d448 in PyEval_EvalFrameEx (throwflag=0,
    f=Frame 0x7fffc8000dc8, for file /usr/lib/python3.6/site-packages/magic.py, line 264, in detect_from_filename (filename='/var/tmp/sos.2wl7bvuh/cleaner/sosreport-pmoravec-rhel8-8675309-2023-08-29-ektgqjy/usr/lib/systemd/user/gpg-agent-extra.socket')) at /usr/src/debug/python3-3.6.8-47.el8.x86_64/Python/ceval.c:752
..
#32 0x00007ffff749d436 in PyEval_EvalFrameEx (throwflag=0,
    f=Frame 0x7fffdc13d8b8, for file /usr/lib/python3.6/site-packages/sos/utilities.py, line 100, in file_is_binary (fname='/var/tmp/sos.2wl7bvuh/cleaner/sosreport-pmoravec-rhel8-8675309-2023-08-29-ektgqjy/usr/lib/systemd/user/gpg-agent-extra.socket')) at /usr/src/debug/python3-3.6.8-47.el8.x86_64/Python/ceval.c:4167
..
```
Thread 15 has the same backtrace, just with a different filename (and no other thread was running there).
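Since both threads crash while concurrently inside magic.py, one pragmatic interim mitigation (a sketch, not the actual fix applied in sos or in file) is to serialize all calls into the library behind a module-level lock. `detect_from_filename` is the API from the report; the wrapper below is hypothetical, and `_detect` is injected only so the sketch runs without python3-file-magic installed:

```python
import threading

_magic_lock = threading.Lock()

def detect_from_filename_locked(filename, _detect=None):
    # _detect stands in for magic.detect_from_filename (hypothetical
    # injection point so this sketch is testable without libmagic).
    # Holding the lock guarantees at most one thread is inside the
    # non-thread-safe library at any time.
    with _magic_lock:
        return _detect(filename)
```

This trades the parallelism of file-type detection for safety, which may be acceptable since the detection is rarely the bottleneck compared to walking and rewriting the archives.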
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there. Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it and begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like: "Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.
Description of problem:
Concurrently calling magic.detect_from_filename(..) from different threads, even on independent files, often ends in a segfault. The segfault is *not* observed on RHEL8 (python3-magic-5.33-24.el8.noarch); it sometimes happens on RHEL9.1 (python3-file-magic-5.39-12.el9.noarch as well as python3-file-magic-5.39-14.el9.noarch), and VERY often on Gemini kernels of RHEL9.3 beta (not sure whether Gemini or 9.3 affects the frequency).

Version-Release number of selected component (if applicable):
python3-file-magic-5.39-12.el9.noarch

How reproducible:
20% to 100%

Steps to Reproduce:
1. Have this script:

```python
from concurrent.futures import ThreadPoolExecutor
import os
import magic

jobs = 4
report_paths = ['/var/tmp/dir1', '/var/tmp/dir2']

def obfuscate_report(archive):
    print(archive)
    for dirname, dirs, files in os.walk(archive):
        for filename in files:
            print(f"  filename={filename}")
            _fname = os.path.join(dirname, filename.lstrip('/'))
            print(f"{_fname}: {magic.detect_from_filename(_fname)}")
    return

print("===")
pool = ThreadPoolExecutor(jobs)
pool.map(obfuscate_report, report_paths, chunksize=1)
pool.shutdown(wait=True)
print("===")
```

2. Generate 100 text files in each of the two directories (each directory's content will be examined by magic.detect_from_filename in a separate thread):

```
rm -rf /var/tmp/dir1 /var/tmp/dir2
mkdir /var/tmp/dir1
for i in $(seq 1 100); do date > /var/tmp/dir1/date.${i}.txt; done
cp -r /var/tmp/dir1 /var/tmp/dir2
```

3. Run the script:

```
# python3 segfault_reproducer.py
```

Actual results:

```
# python3 segfault_reproducer.py
===
/var/tmp/dir1
filename=date.1.txt
/var/tmp/dir2
filename=date.51.txt
===
tcache_thread_shutdown(): unaligned tcache chunk detected
Aborted (core dumped)
#
```

(Depending on RHEL/kernel version, the segfault might not always happen; it must be a race condition.)

Expected results:
No segfault.

Additional info:
Our real use case: sos / sosreport has a feature to obfuscate customer-sensitive data in provided sosreport(s).
That requires detecting file types (binary files are treated differently from text files). When running the cleaner concurrently on multiple sosreports, we use ThreadPoolExecutor for the concurrency, and we hit the segfaults with backtraces pointing to the magic library. Providing sosreports is very important for Red Hat support, so segfaults like these slow down investigation of support cases. Thus we treat this BZ with high priority.
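For reference, the standard fix for this class of bug is to stop sharing one non-thread-safe handle (such as a libmagic cookie) across threads and instead keep one instance per thread via `threading.local()`. The sketch below shows only the generic pattern, not the actual patch to magic.py or sos; `factory` would be whatever constructs the detector in real code:

```python
import threading

# One storage slot per thread; attributes set on it are invisible
# to other threads.
_tls = threading.local()

def per_thread(factory):
    # Lazily create one instance per thread, so a non-thread-safe
    # handle (e.g. a libmagic magic_t cookie) is never shared.
    if not hasattr(_tls, "inst"):
        _tls.inst = factory()
    return _tls.inst
```

With this pattern the ThreadPoolExecutor reproducer above would give each worker thread its own detector, keeping the concurrency while avoiding the shared-state race.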