Bug 707944 - Bad interaction between python-2.6.5-ctypes-noexecmem.patch, and ctypes.CFUNCTYPE when forking without exec (e.g. "multiprocessing" module)
Summary: Bad interaction between python-2.6.5-ctypes-noexecmem.patch, and ctypes.CFUNCTYPE when forking without exec (e.g. "multiprocessing" module)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: python
Version: 6.0
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Dave Malcolm
QA Contact: Petr Šplíchal
URL:
Whiteboard:
Duplicates: 819810
Depends On:
Blocks: 782183
 
Reported: 2011-05-26 11:57 UTC by Francisco Lopez
Modified: 2018-12-01 18:58 UTC
CC List: 11 users

Fixed In Version: python-2.6.6-36.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 10:31:33 UTC
Target Upstream Version:
Embargoed:


Attachments
heart scale msgs (754 bytes, application/octet-stream) - 2011-05-26 11:57 UTC, Francisco Lopez
Backtrace from the reproducer in comment #13 (2.00 KB, text/plain) - 2011-06-11 00:24 UTC, Dave Malcolm
C source for a minimal reproducer that doesn't involve python (1.06 KB, text/plain) - 2011-06-13 04:10 UTC, Dave Malcolm


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:0437 0 normal SHIPPED_LIVE python bug fix update 2013-02-20 20:48:31 UTC

Description Francisco Lopez 2011-05-26 11:57:28 UTC
Created attachment 501049 [details]
heart scale msgs

Description of problem:

The version of Python currently shipped with RHEL 6 leaves "zombie processes" behind even after the job has completed. 

````````````````````````````````````````````````````````````````````````````````
from multiprocessing import Pool
import time, sys
import libsvm
from libsvm import svmutil

def f(x):
    y, x = svmutil.svm_read_problem('/myhomedir/heart_scale')
    m = svmutil.svm_train(y[:200], x[:200], '-c 4 -q')

#    for i in range(len(y[200:])):
#        file = open("/myhomedir/libsvm.txt","a")
#        sys.stdout = file#
#
#        #p_label, p_acc, p_val = svmutil.svm_predict([y[200+i]],[x[200+i]], m)
#
#        file.close()
#        sys.stdout = sys.__stdout__

    return 1

if __name__ == '__main__':
    pool = Pool(processes=12)              # start 12 worker processes
    for i in range(1000):
       result = pool.apply_async(f, [i])     # evaluate f(i) asynchronously
    pool.close()
    pool.join()

````````````````````````````````````````````````````````````````````````````````
This is what the heart scale file looks like. 

1:0.458333 2:1 3:1 4:-0.358491 5:-0.374429 6:-1 7:-1 8:-0.480916 9:1 10:-0.935484 12:-0.333333 13:1 


This script works 100% correctly when using the source package from python.org. 

 

Version-Release number of selected component (if applicable):

python-2.6.5-3.el6_0.2.src.rpm

How reproducible:

Always

Steps to Reproduce:

1. Execute the script above

2. Make sure that your environment points to the correct Python version (python-2.6.5-3.el6_0.2.src.rpm).

  
Actual results:

Zombie Processes

Expected results:

Child processes should be cleaned up after the job finishes.
---- Clean Exit ----

Additional info:

The original source package from www.python.org (version 2.5.6) does the job without any problems.

Comment 3 Dave Malcolm 2011-06-03 00:12:15 UTC
Current status:
===============
I'm able to reproduce this on my RHEL6 box, currently using:
  python-2.6.6-9.el6.x86_64
  libsvm-python-3.0-2.el6.x86_64.rpm from EPEL6
where the workload:
  - works OK when run against a vanilla rebuild of the Python-2.6.6 tarball
  - hangs when using our /usr/bin/python, after all of the work is done (with 12 zombie worker processes).

I'm investigating further; my current hypothesis is that the difference is within the ctypes module (or perhaps libffi).


Notes to self:
==============
libsvm seems to be: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
   From the PDF paper:
     "LIBSVM is a library for Support Vector Machines (SVMs)"
     "Support Vector Machines (SVMs) are a popular machine learning method for classification, regression, and other learning tasks."

It's in Fedora and EPEL:
  https://admin.fedoraproject.org/pkgdb/acls/name/libsvm

My initial attempts to reproduce got this traceback:
Traceback (most recent call last):
  File "/usr/lib64/python2.6/pdb.py", line 1296, in main
    pdb._runscript(mainpyfile)
  File "/usr/lib64/python2.6/pdb.py", line 1215, in _runscript
    self.run(statement)
  File "/usr/lib64/python2.6/bdb.py", line 372, in run
    exec cmd in globals, locals
  File "<string>", line 1, in <module>
  File "test_pool_svm.py", line 25, in <module>
    f(1)
  File "test_pool_svm.py", line 8, in f
    y, x = svmutil.svm_read_problem('heart_scale.txt')
  File "/usr/lib64/python2.6/site-packages/libsvm/svmutil.py", line 23, in svm_read_problem
    prob_y += [float(label)]
ValueError: invalid literal for float(): 1:0.458333

From docstring:
  Read LIBSVM-format data from data_file_name and return labels y
  and data instances x.

It looks like the input file format expects a series of lines of the form
  FLOAT ONE_OR_MORE(INT COLON FLOAT WHITESPACE)

Having edited the input file to look like this:

0 1:0.458333 2:1 3:1 4:-0.358491 5:-0.374429 6:-1 7:-1 8:-0.480916 9:1
1 10:-0.935484 12:-0.333333 13:1

I can reproduce the bug, with:
  python-2.6.6-9.el6.x86_64
  libsvm-python-3.0-2.el6.x86_64.rpm from EPEL6

Symptom: the parent process seems to hang, after all of the work is done, with 12 zombie processes.

Caveat: if the worker processes can't find the input file, they all silently fail, and the process runs to completion without hanging (or any zombies).

I tried a build of the vanilla Python-2.6.6 tarball, with:
      ./configure --enable-shared --enable-unicode=ucs4
[david@rhel6 Python-2.6.6-vanilla]$ ln -s /usr/lib64/python2.6/site-packages/libsvm Modules/libsvm

Can get more debug information by adding:
  from multiprocessing.util import log_to_stderr
  log_to_stderr(1)
to the test script.

Works with vanilla 2.6.6; no hang:
[david@rhel6 Python-2.6.6-vanilla]$ LD_LIBRARY_PATH=$(pwd) PYTHONPATH=$(pwd) ./python ~/test_pool_svm.py

Hangs with RHEL6 /usr/bin/python:
[david@rhel6 Python-2.6.6-vanilla]$ PYTHONPATH=Modules python ~/test_pool_svm.py

My initial hypotheses about the differences:
  Is it libpython?
  Is it the C extension modules?
  Is it the .py libraries?
  Something else?

Investigation of these hypotheses:

Works with:
[david@rhel6 Python-2.6.6-vanilla]$ PYTHONPATH=$(pwd)/Modules:$(pwd)/build/lib.linux-x86_64-2.6 ./python ~/test_pool_svm.py

Works with vanilla C extension modules and .py libraries, with system libpython and /usr/bin/python:
[david@rhel6 Python-2.6.6-vanilla]$ PYTHONPATH=$(pwd)/Modules:$(pwd)/build/lib.linux-x86_64-2.6 /usr/bin/python ~/test_pool_svm.py

Works with our /usr/bin/python with the rebuilt vanilla C extension modules:
[david@rhel6 Python-2.6.6-vanilla]$ PYTHONPATH=$(pwd)/build/lib.linux-x86_64-2.6 /usr/bin/python ~/test_pool_svm.py

Hangs with:
[david@rhel6 Python-2.6.6-vanilla]$ /usr/bin/python ~/test_pool_svm.py

Hangs with:
[david@rhel6 Python-2.6.6-vanilla]$ PYTHONPATH=$(pwd)/Lib:/usr/lib64/python2.6/lib-dynload ./python ~/test_pool_svm.py

This suggests to me that there's a difference between our compiled C extension modules vs a vanilla build of those extension modules.

I tried repeating the test in a loop, starting with the modules from the vanilla build, and repeatedly adding more and more modules from the RHEL6 rpm; it started hanging upon adding the RHEL6 "_ctypes.so".  For reference, here's the method I used:

  mkdir my-modules
  for f in /usr/lib64/python2.6/lib-dynload/*.so ; do
     echo $f
     cp $f my-modules/
     PYTHONPATH=my-modules ./python ~/test_pool_svm.py
  done

It works when using the vanilla build of _ctypes.so, but fails (with the hang) with the rpm-supplied build.

I notice that within the libsvm python code, svm.py uses ctypes; maybe it's related to that.

Am investigating further.

Comment 5 Dave Malcolm 2011-06-10 18:38:20 UTC
My current hypothesis is that this is a bad interaction between how libffi allocates closures and the fork-without-exec behavior that python's multiprocessing module uses.

A vanilla python build of ctypes allocates closures via the functions:
  MallocClosure
and
  FreeClosure
found within:
  Python-2.6.6/Modules/_ctypes/malloc_closure.c

These are implemented by using an anonymous MMAP for each closure.

In RHEL6's python, we apply this patch:
  python-2.6.5-ctypes-noexecmem.patch
which instead allocates closures using libffi's functions:
  ffi_closure_alloc
and
  ffi_closure_free
The aim of this patch is to better interact with SELinux: to keep the executable mapping of the closure distinguished from the writable mapping.

libffi-3.0.5/src/closures.c implements ffi_closure_alloc/free on top of a mmapped tempfile.  There appears to be this global state:
   /* A file descriptor of a temporary file from which we'll map
     executable pages.  */
   static int execfd = -1;
which is used to lazily create a tempfile in the filesystem.

What I believe is happening is this:
  - the first call to ctypes (and thus libffi) is happening before the worker
  processes are created.  This means that execfd opens a particular file
  in the filesystem, and the libffi allocator creates a heap based on a memory
  map of that file.
  
  - multiprocessing creates the worker processes, forking without exec

  - the execfd for all of the worker processes are now all memory-mapping the
  same tempfile, and as they claim and release ffi closures, they are all
  simultaneously manipulating this heap area as if they had sole ownership of
  it.

  - at some point, the pool of tasks is completed, and the first worker process
  receives the "no more work" sentinel from the Pool.

  - this first worker exits, and frees up the closure.  This writes back to the
  shared space.

  - as each subsequent worker W exits (or exits the ctypes call within
  libsvm.svmutil.svm_train), it tries to either use or free the closure, but 
  that's already been modified by the other worker, so W segfaults

  - the Pool's result handler thread stops receiving messages from the
  segfaulted workers, and instead waits forever for them

  - the segfaulted worker processes are dead, but haven't had waitpid called on
  them, so become zombies.
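
For anyone who wants to check this hypothesis directly, here is a small standalone sketch (not part of the original report; it simply combines the /proc/self/maps filter used in comment #9 below with a plain fork along the lines of comment #13). With a build carrying python-2.6.5-ctypes-noexecmem.patch, the expectation is that parent and child both report the same /tmp/ffi* mapping; as in comment #13, the parent may then also abort at interpreter exit when both processes release the same closure.

import os
from ctypes import CFUNCTYPE

CALLBACK_TYPE = CFUNCTYPE(None)

def noop():
    return

# Sketch, not from the report: creating the callback object before the fork
# should make the patched build open libffi's tempfile and map it into the
# parent (see the hypothesis above).
c_fn = CALLBACK_TYPE(noop)

pid = os.fork()

# Each process lists the /tmp/ffi* mappings it sees (same filter as comment #9).
with open('/proc/self/maps') as maps:
    ffi_lines = [line.strip() for line in maps if '/tmp/ffi' in line]
print '%s (pid %i): %s' % ('parent' if pid else 'child', os.getpid(), ffi_lines)

if pid:
    os.waitpid(pid, 0)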

Comment 6 Dave Malcolm 2011-06-10 18:43:05 UTC
Workaround:
I can reliably reproduce the hang on my test box.

If I move these lines:

  import libsvm
  from libsvm import svmutil

from global scope to within f(), then the problem goes away, and it reliably terminates without zombies.

This would seem to corroborate the hypothesis in comment #5.

Comment 7 Dave Malcolm 2011-06-10 20:20:15 UTC
More notes:

Within the reproducer I renamed:
   f(x)
to:
   f(idx)
to avoid the index of the task being shadowed by the return from svm_read_problem

I also added:

  from multiprocessing.util import debug
  from multiprocessing.util import log_to_stderr
  log_to_stderr(1)

to the top of the reproducer.

This adds detailed logging to stderr for all of the processes.

With a vanilla build of ctypes, filtering the log for a particular "PoolWorker-N" gives a final log like this:
  [INFO/PoolWorker-9] child process calling self.run()
  [DEBUG/PoolWorker-9] worker got sentinel -- exiting
  [INFO/PoolWorker-9] process shutting down
  [DEBUG/PoolWorker-9] running all "atexit" finalizers with priority >= 0
  [DEBUG/PoolWorker-9] running the remaining "atexit" finalizers
  [INFO/PoolWorker-9] process exiting with exitcode 0

With RHEL6's ctypes, this filtered log shows that the "got sentinel" only seems to happen for one of the PoolWorker instances in the hanging test.  (This happens when the worker is fed a None task, the sentinel).  Every other worker's final log entry is the:
  child process calling self.run()
line.

Comment 8 Dave Malcolm 2011-06-10 21:31:20 UTC
I rebuilt the python src.rpm without applying this patch:
   python-2.6.5-ctypes-noexecmem.patch
and verified that, with that build's version of _ctypes.so, the reproducer does not hang (this build is against the system copy of libffi, not the copy embedded in the python src tarball).

Comment 9 Dave Malcolm 2011-06-10 23:05:37 UTC
Adding this to the tail of f():

    with open('/proc/self/maps', 'r') as f:
        for line in f.readlines():
            if '/tmp/ffi' in line:
                debug('TASK %i: %s' % (idx, line))

(with the changes in comment #7) demonstrates that, with RHEL6's build of ctypes, the worker processes are all mapping the same tempfile (the same one as the Pool's parent process).

Comment 10 Dave Malcolm 2011-06-10 23:42:52 UTC
Here's a more minimal reproducer, which hangs with the same symptoms most of the time (but not 100%).


from multiprocessing import Pool
from multiprocessing.util import debug, log_to_stderr
log_to_stderr(1)

import libsvm
from libsvm import svm

def f(idx):
    debug('TASK %i: about to make parameter' % idx)
    svm.svm_parameter('-c 4 -q')
    debug('TASK %i: finished making parameter' % idx)

if __name__ == '__main__':
    pool = Pool(processes=10)
    for i in range(20):
       result = pool.apply_async(f, [i])
    pool.close()
    pool.join()

Comment 11 Dave Malcolm 2011-06-10 23:49:14 UTC
Omitting the "-q" from the svm_parameter arguments in the above removes the symptoms, and it runs to completion.  This appears to be asssociated with the PRINT_STRING_FUN(print_null) within libsvm.svm.svm_parameter.parse_options

A more minimal reproducer:


from multiprocessing import Pool
from multiprocessing.util import debug, log_to_stderr
log_to_stderr(1)

from libsvm.svm import PRINT_STRING_FUN, print_null

def f(idx):
    debug('TASK %i: about to make parameter' % idx)
    PRINT_STRING_FUN(print_null)
    debug('TASK %i: finished making parameter' % idx)

if __name__ == '__main__':
    pool = Pool(processes=10)
    for i in range(20):
       result = pool.apply_async(f, [i])
    pool.close()
    pool.join()

Comment 12 Dave Malcolm 2011-06-10 23:58:35 UTC
And here's a reproducer that doesn't need libsvm, just python; running this leads to a hang most of the time on cleanup; the hang doesn't occur if we don't apply
python-2.6.5-ctypes-noexecmem.patch when building python.


from multiprocessing import Pool
from multiprocessing.util import debug, log_to_stderr
log_to_stderr(1)

from ctypes import CFUNCTYPE
VOID_VOID_FUNCTION_TYPE = CFUNCTYPE(None)
def my_nop_callback(s):
    return

def f(idx):
    c_fn = VOID_VOID_FUNCTION_TYPE(my_nop_callback)

if __name__ == '__main__':
    pool = Pool(processes=10)
    for i in range(20):
       result = pool.apply_async(f, [i])
    pool.close()
    pool.join()

Comment 13 Dave Malcolm 2011-06-11 00:17:21 UTC
See 
  http://docs.python.org/library/ctypes.html#callback-functions
for information on CFUNCTYPE.

Here's a reproducer that doesn't use multiprocessing, and just uses "fork":


import ctypes
from ctypes import CFUNCTYPE
VOID_VOID_FUNCTION_TYPE = CFUNCTYPE(None)
def my_nop_callback(s):
    return

import os
pid = os.fork()
c_fn = VOID_VOID_FUNCTION_TYPE(my_nop_callback)
print '%i: c_fn: %s' % (pid, c_fn)
if pid:
    pid, status = os.waitpid(pid, 0)
    print 'status: %r' % status




When I run the above, I get output of this kind:

  5790: c_fn: <CFunctionType object at 0x7ff008659ae0>
  0: c_fn: <CFunctionType object at 0x7ff008659ae0>
  status: 0
  Aborted (core dumped)

But if I build python's ctypes without python-2.6.5-ctypes-noexecmem.patch, then I don't get the core dump:

  5827: c_fn: <CFunctionType object at 0x7f7396891ae0>
  0: c_fn: <CFunctionType object at 0x7f7396891ae0>
  status: 0

Comment 14 Dave Malcolm 2011-06-11 00:24:41 UTC
Created attachment 504224 [details]
Backtrace from the reproducer in comment #13

Comment 15 Dave Malcolm 2011-06-11 00:53:52 UTC
Looking at that backtrace, the "abort" in frame #1 comes from here in frame #2:

/usr/src/debug/libffi-3.0.5/src/dlmalloc.c:
  ....
  4282      erroraction:
  4283        USAGE_ERROR_ACTION(fm, p);
  ....
which appears to be an error handler for when libffi's allocator state is corrupt (or it is passed a bad pointer).

This happens whilst Python is cleaning up a CThunkObject instance as the Python process exits, as part of cleaning up a CFuncPtr.

Reassigning component to libffi.

libffi maintainers: is ffi_closure_alloc/free able to deal with processes that fork without exec, or do we run the risk of cross-process heap corruption?  (See the hypothesis in comment #5.)  Is there a way to make it work under such circumstances (whilst preserving the SELinux compatibility)?

Comment 17 Dave Malcolm 2011-06-11 03:41:21 UTC
Workaround (elaborating on comment #6):
If you avoid any ctypes code until after the fork, the problem may be averted.

Note that importing libsvm.svm runs such code, and libsvm.svmutil imports libsvm.svm.

For this reason, delaying:
  from libsvm import svmutil
until inside the multiprocessing worker callback may fix this.
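
As an illustration only (an untested sketch adapted from the script in the description, keeping the reporter's /myhomedir/heart_scale path and training options, and the f(idx) rename from comment #7), the workaround looks roughly like this:

from multiprocessing import Pool

def f(idx):
    # Deferred import: the first ctypes/libffi call now happens after the
    # fork, so each worker should get its own closure heap (see comment #5).
    from libsvm import svmutil
    y, x = svmutil.svm_read_problem('/myhomedir/heart_scale')
    svmutil.svm_train(y[:200], x[:200], '-c 4 -q')
    return 1

if __name__ == '__main__':
    pool = Pool(processes=12)
    for i in range(1000):
        pool.apply_async(f, [i])
    pool.close()
    pool.join()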

Comment 18 Dave Malcolm 2011-06-13 04:10:57 UTC
Created attachment 504354 [details]
C source for a minimal reproducer that doesn't involve python

Here's a more minimal reproducer, which doesn't use python: a process allocates a closure, forks, then the child releases the closure, then the parent tries to release the closure.

On this RHEL 6 box (with libffi-3.0.5-3.2.el6.x86_64), the parent process SIGABRTs with this backtrace:

#0  0x0000003b6dc329a5 in raise () from /lib64/libc.so.6
#1  0x0000003b6dc34185 in abort () from /lib64/libc.so.6
#2  0x00007ffff7de72b3 in dlfree (ptr=<value optimized out>) at src/dlmalloc.c:4283
#3  ffi_closure_free (ptr=<value optimized out>) at src/closures.c:529
#4  0x00000000004008b1 in main (argc=1, argv=0x7fffffffe708) at libffi-alloc-and-fork.c:51

Comment 24 RHEL Program Management 2011-07-06 01:23:45 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.

Comment 29 Chris Williams 2012-07-23 19:57:29 UTC
*** Bug 819810 has been marked as a duplicate of this bug. ***

Comment 30 RHEL Program Management 2012-09-07 05:23:15 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unable to address this
request at this time.

Red Hat invites you to ask your support representative to
propose this request, if appropriate, in the next release of
Red Hat Enterprise Linux.

Comment 31 tvizirov 2012-10-04 09:49:08 UTC
Yes, I would like this to be included in the next release.

Comment 46 RHEL Program Management 2012-10-11 16:11:08 UTC
This request was evaluated by Red Hat Product Management for
inclusion in a Red Hat Enterprise Linux release.  Product
Management has requested further review of this request by
Red Hat Engineering, for potential inclusion in a Red Hat
Enterprise Linux release for currently deployed products.
This request is not yet committed for inclusion in a release.

Comment 53 Daniel Walsh 2012-12-11 14:06:44 UTC
Looks good to me.

Comment 59 errata-xmlrpc 2013-02-21 10:31:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0437.html

