Bug 17145 - Mutex locks may be broken.
Status: CLOSED ERRATA
Product: Red Hat Raw Hide
Classification: Retired
Component: glibc
Version: 1.0
Platform: i386 Linux
Severity: medium
Assigned To: Jakub Jelinek
Reported: 2000-08-31 15:39 EDT by dtc-rhbug
Modified: 2008-05-01 11:37 EDT
Doc Type: Bug Fix
Last Closed: 2000-10-06 03:51:01 EDT

Description dtc-rhbug 2000-08-31 15:39:49 EDT
The mutex locks may be rather broken in rawhide glibc-2.1.92-5.
Consistently see pthread_mutex_unlock failing, and inspection
shows what appears to be an inconsistent state in the lock
structure.

(gdb) print allocation_lock
$1 = {__m_reserved = 0, __m_count = 0, __m_owner = 0xbf3ffc00, __m_kind = 2,
  __m_lock = {__status = -1090766584, __spinlock = 0}}

The fastlock status is 0xBEFC3908, which suggests the fastlock is held
by a thread other than the owner, preventing the owner from releasing the
lock. The kind 2 is correct, as this is an error-checking mutex.

The application does work reliably under Red Hat glibc-2.1.3-15,
although it is developmental and could itself be broken.

Regards
Douglas Crosher
Comment 1 dtc-rhbug 2000-09-01 01:38:57 EDT
Have made some progress tracking this one down, and a potential patch
is included below. Someone who knows more about the transition to the
alternative fastlocks should check this one.

 -=-=-

o When unlocking an error-check mutex and checking that it is not already
  free, test that the fastlock status is zero rather than just testing the
  low bit. The error-check mutex uses the alternate fastlock for which the
  low bit is not a busy flag.

*** linuxthreads/mutex.c.orig	Thu Aug  3 19:31:04 2000
--- linuxthreads/mutex.c	Fri Sep  1 16:23:09 2000
***************
*** 171,177 ****
      __pthread_unlock(&mutex->__m_lock);
      return 0;
    case PTHREAD_MUTEX_ERRORCHECK_NP:
!     if (mutex->__m_owner != thread_self() || (mutex->__m_lock.__status & 1) == 0)
        return EPERM;
      mutex->__m_owner = NULL;
      __pthread_alt_unlock(&mutex->__m_lock);
--- 171,177 ----
      __pthread_unlock(&mutex->__m_lock);
      return 0;
    case PTHREAD_MUTEX_ERRORCHECK_NP:
!     if (mutex->__m_owner != thread_self() || mutex->__m_lock.__status == 0)
        return EPERM;
      mutex->__m_owner = NULL;
      __pthread_alt_unlock(&mutex->__m_lock);
Comment 2 Jakub Jelinek 2000-09-06 04:12:01 EDT
Ulrich Drepper has checked this into CVS; it will appear as soon as the next
rpms are built.
Comment 3 dtc-rhbug 2000-09-29 18:25:05 EDT
In case it wasn't already noted, pthread_mutex_destroy may have a
similar problem, although less serious:

o When destroying a mutex implemented by an alternate fastlock, the
  busy check should test that the fastlock status is zero rather than
  just testing the low bit because the low bit is not a busy flag.

*** mutex.c~	Thu Aug  3 19:31:04 2000
--- mutex.c	Sat Sep 30 09:04:03 2000
***************
*** 38,45 ****
  
  int __pthread_mutex_destroy(pthread_mutex_t * mutex)
  {
!   if ((mutex->__m_lock.__status & 1) != 0) return EBUSY;
!   return 0;
  }
  strong_alias (__pthread_mutex_destroy, pthread_mutex_destroy)
  
--- 38,57 ----
  
  int __pthread_mutex_destroy(pthread_mutex_t * mutex)
  {
!   switch(mutex->__m_kind) {
!   case PTHREAD_MUTEX_ADAPTIVE_NP:
!   case PTHREAD_MUTEX_RECURSIVE_NP:
!     if ((mutex->__m_lock.__status & 1) != 0)
!       return EBUSY;
!     return 0;
!   case PTHREAD_MUTEX_ERRORCHECK_NP:
!   case PTHREAD_MUTEX_TIMED_NP:
!     if (mutex->__m_lock.__status != 0)
!       return EBUSY;
!     return 0;
!   default:
!     return EINVAL;
!   }
  }
  strong_alias (__pthread_mutex_destroy, pthread_mutex_destroy)
  
Comment 4 Jakub Jelinek 2000-10-06 03:50:56 EDT
I think you're right, and so does Ulrich Drepper, so your patch has made it
into CVS. It will appear in the upcoming errata.
Comment 5 Steffen Persvold 2000-11-18 15:04:29 EST
Hmm. My test program shows the same on glibc-2.1.92-14.

However, the DEFAULT mutex type seems to be about 5 times slower than the
other mutex types, even the FAST mutex, which the DEFAULT type is supposed
to be equivalent to!

The upside is that the mutex types other than DEFAULT seem to be much
faster under contention than on RH 6.2 (glibc-2.1.3-15) running on a
machine almost twice as fast.

The test program verifies mutual exclusion using different mutex types
(MUTEX_TYPE), different numbers of threads (N_THREADS), and different
levels of contention (N). N is the number of mutexes the threads operate
upon (in random order).


Enjoy,

Steffen

Results from RH7.0 running on Dual 450MHz PII Deschutes
MUTEX_TYPE N_THREADS     N  Result  Lock+Unlock
==========================  ===================
   DEFAULT      1        1     OK         0.4us
      FAST      1        1     OK         0.4us
 RECURSIVE      1        1     OK         0.4us
ERRORCHECK      1        1     OK         0.5us
   DEFAULT      2        1     OK         7.3us
      FAST      2        1     OK         1.5us
 RECURSIVE      2        1     OK         1.6us
ERRORCHECK      2        1     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      3        1     OK        11.9us
      FAST      3        1     OK         1.6us
 RECURSIVE      3        1     OK         1.7us
ERRORCHECK      3        1     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      1        2     OK         0.4us
      FAST      1        2     OK         0.4us
 RECURSIVE      1        2     OK         0.4us
ERRORCHECK      1        2     OK         0.5us
   DEFAULT      2        2     OK         8.4us
      FAST      2        2     OK         0.7us
 RECURSIVE      2        2     OK         0.8us
ERRORCHECK      2        2     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      3        2     OK        10.2us
      FAST      3        2     OK         0.6us
 RECURSIVE      3        2     OK         0.6us
ERRORCHECK      3        2     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      1        4     OK         0.4us
      FAST      1        4     OK         0.4us
 RECURSIVE      1        4     OK         0.4us
ERRORCHECK      1        4     OK         0.5us
   DEFAULT      2        4     OK         5.0us
      FAST      2        4     OK         0.7us
 RECURSIVE      2        4     OK         0.8us
ERRORCHECK      2        4     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      3        4     OK         6.0us
      FAST      3        4     OK         0.6us
 RECURSIVE      3        4     OK         0.6us
ERRORCHECK      3        4     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      1        8     OK         0.4us
      FAST      1        8     OK         0.4us
 RECURSIVE      1        8     OK         0.4us
ERRORCHECK      1        8     OK         0.5us
   DEFAULT      2        8     OK         2.6us
      FAST      2        8     OK         0.5us
 RECURSIVE      2        8     OK         0.6us
ERRORCHECK      2        8     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      3        8     OK         3.5us
      FAST      3        8     OK         0.5us
 RECURSIVE      3        8     OK         0.6us
ERRORCHECK      3        8     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      1       16     OK         0.4us
      FAST      1       16     OK         0.4us
 RECURSIVE      1       16     OK         0.4us
ERRORCHECK      1       16     OK         0.5us
   DEFAULT      2       16     OK         0.8us
      FAST      2       16     OK         0.5us
 RECURSIVE      2       16     OK         0.5us
ERRORCHECK      2       16     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      3       16     OK         1.8us
      FAST      3       16     OK         0.5us
 RECURSIVE      3       16     OK         0.5us
ERRORCHECK      3       16     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      1      128     OK         0.4us
      FAST      1      128     OK         0.4us
 RECURSIVE      1      128     OK         0.4us
ERRORCHECK      1      128     OK         0.5us
   DEFAULT      2      128     OK         0.6us
      FAST      2      128     OK         0.5us
 RECURSIVE      2      128     OK         0.5us
ERRORCHECK      2      128     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      3      128     OK         0.6us
      FAST      3      128     OK         0.5us
 RECURSIVE      3      128     OK         0.5us
ERRORCHECK      3      128     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      1     1024     OK         0.5us
      FAST      1     1024     OK         0.5us
 RECURSIVE      1     1024     OK         0.5us
ERRORCHECK      1     1024     OK         0.5us
   DEFAULT      2     1024     OK         2.3us
      FAST      2     1024     OK         2.2us
 RECURSIVE      2     1024     OK         2.2us
ERRORCHECK      2     1024     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.

   DEFAULT      3     1024     OK         4.2us
      FAST      3     1024     OK         4.2us
 RECURSIVE      3     1024     OK         3.9us
ERRORCHECK      3     1024     mx_test.15204: mx_test.c:46: add: Assertion `pthread_mutex_unlock(&(p->l)) == 0' failed.


Results from RH6.2 running on Dual 800MHz PIII CuMine
MUTEX_TYPE N_THREADS     N  Result  Lock+Unlock
==========================  ===================
   DEFAULT      1        1     OK         0.2us
      FAST      1        1     OK         0.2us
 RECURSIVE      1        1     OK         0.2us
ERRORCHECK      1        1     OK         0.3us
   DEFAULT      2        1     OK         7.6us
      FAST      2        1     OK         7.7us
 RECURSIVE      2        1     OK         7.7us
ERRORCHECK      2        1     OK         7.6us
   DEFAULT      3        1     OK         7.7us
      FAST      3        1     OK         7.7us
 RECURSIVE      3        1     OK         7.7us
ERRORCHECK      3        1     OK         7.7us
   DEFAULT      1        2     OK         0.2us
      FAST      1        2     OK         0.2us
 RECURSIVE      1        2     OK         0.2us
ERRORCHECK      1        2     OK         0.3us
   DEFAULT      2        2     OK         3.8us
      FAST      2        2     OK         3.8us
 RECURSIVE      2        2     OK         3.8us
ERRORCHECK      2        2     OK         3.8us
   DEFAULT      3        2     OK         3.4us
      FAST      3        2     OK         3.9us
 RECURSIVE      3        2     OK         4.2us
ERRORCHECK      3        2     OK         4.3us
   DEFAULT      1        4     OK         0.2us
      FAST      1        4     OK         0.2us
 RECURSIVE      1        4     OK         0.2us
ERRORCHECK      1        4     OK         0.3us
   DEFAULT      2        4     OK         3.2us
      FAST      2        4     OK         3.1us
 RECURSIVE      2        4     OK         3.1us
ERRORCHECK      2        4     OK         3.2us
   DEFAULT      3        4     OK         3.2us
      FAST      3        4     OK         3.2us
 RECURSIVE      3        4     OK         3.1us
ERRORCHECK      3        4     OK         3.0us
   DEFAULT      1        8     OK         0.2us
      FAST      1        8     OK         0.2us
 RECURSIVE      1        8     OK         0.2us
ERRORCHECK      1        8     OK         0.2us
   DEFAULT      2        8     OK         1.6us
      FAST      2        8     OK         1.6us
 RECURSIVE      2        8     OK         1.6us
ERRORCHECK      2        8     OK         1.6us
   DEFAULT      3        8     OK         1.9us
      FAST      3        8     OK         1.9us
 RECURSIVE      3        8     OK         1.8us
ERRORCHECK      3        8     OK         1.8us
   DEFAULT      1       16     OK         0.2us
      FAST      1       16     OK         0.2us
 RECURSIVE      1       16     OK         0.2us
ERRORCHECK      1       16     OK         0.2us
   DEFAULT      2       16     OK         0.6us
      FAST      2       16     OK         0.6us
 RECURSIVE      2       16     OK         0.6us
ERRORCHECK      2       16     OK         0.6us
   DEFAULT      3       16     OK         1.0us
      FAST      3       16     OK         1.0us
 RECURSIVE      3       16     OK         1.0us
ERRORCHECK      3       16     OK         1.0us
   DEFAULT      1      128     OK         0.2us
      FAST      1      128     OK         0.2us
 RECURSIVE      1      128     OK         0.2us
ERRORCHECK      1      128     OK         0.2us
   DEFAULT      2      128     OK         0.3us
      FAST      2      128     OK         0.3us
 RECURSIVE      2      128     OK         0.3us
ERRORCHECK      2      128     OK         0.3us
   DEFAULT      3      128     OK         0.3us
      FAST      3      128     OK         0.3us
 RECURSIVE      3      128     OK         0.3us
ERRORCHECK      3      128     OK         0.3us
   DEFAULT      1     1024     OK         0.2us
      FAST      1     1024     OK         0.2us
 RECURSIVE      1     1024     OK         0.2us
ERRORCHECK      1     1024     OK         0.2us
   DEFAULT      2     1024     OK         0.3us
      FAST      2     1024     OK         0.3us
 RECURSIVE      2     1024     OK         0.3us
ERRORCHECK      2     1024     OK         0.3us
   DEFAULT      3     1024     OK         0.3us
      FAST      3     1024     OK         0.3us
 RECURSIVE      3     1024     OK         0.3us
ERRORCHECK      3     1024     OK         0.3us


