[compiler-rt][builtins] Switch libatomic locks to pthread_mutex_t (#94374)

When an uninstrumented libatomic is used together with a TSan-instrumented
memcpy, TSan may report a data race in circumstances where the writes are
arguably safe.

This occurs because the __atomic_compare_exchange used for locking carries
no TSan instrumentation in such a build, so TSan does not know that the
subsequent memcpy is race-free.
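
For illustration, a minimal sketch of the shape of that slow path (the names
are hypothetical, not the exact atomic.c symbols): a CAS-based spinlock
guarding a memcpy. If this translation unit is built without TSan
instrumentation, TSan observes only the memcpy stores and neither the acquire
nor the release, so two threads taking this path look like an unsynchronized
race.

#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

// Hypothetical sketch of the pre-patch locking scheme; 0 means unlocked.
static _Atomic(uintptr_t) lock_word;

static void spinlock_store(void *dst, const void *src, size_t n) {
  uintptr_t expected = 0;
  // Spin until the CAS acquires the lock; pairs with the releasing store below.
  while (!atomic_compare_exchange_weak_explicit(&lock_word, &expected, 1,
                                                memory_order_acquire,
                                                memory_order_relaxed))
    expected = 0;
  memcpy(dst, src, n); // the only accesses an uninstrumented build exposes to TSan
  atomic_store_explicit(&lock_word, 0, memory_order_release);
}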

On the other hand, pthread_mutex_(un)lock will be intercepted by TSan,
meaning TSan will not report this false positive even when libatomic itself
is uninstrumented.
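
The same sketch with a pthread mutex (again with hypothetical names) shows
why the interception matters: TSan's runtime intercepts
pthread_mutex_lock/unlock regardless of how libatomic was compiled, so it
records the acquire/release edges and does not report the guarded memcpy.

#include <pthread.h>
#include <string.h>

static pthread_mutex_t lock_mutex = PTHREAD_MUTEX_INITIALIZER;

static void mutex_store(void *dst, const void *src, size_t n) {
  pthread_mutex_lock(&lock_mutex);   // intercepted by TSan: acquire edge recorded
  memcpy(dst, src, n);
  pthread_mutex_unlock(&lock_mutex); // intercepted by TSan: release edge recorded
}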

pthread mutexes may also use a number of different strategies to acquire
the lock, which can bound how long a thread has to wait for the lock under
contention.
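
As one concrete example of such a strategy (glibc-specific, and not something
this patch configures; the change relies on the default mutex type), an
adaptive mutex spins briefly before blocking in the kernel:

#define _GNU_SOURCE
#include <pthread.h>

static pthread_mutex_t adaptive_lock;

// Illustration only: request glibc's adaptive mutex, which spins for a short
// while under contention before falling back to sleeping in the kernel.
static void init_adaptive_lock(void) {
  pthread_mutexattr_t attr;
  pthread_mutexattr_init(&attr);
  pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
  pthread_mutex_init(&adaptive_lock, &attr);
  pthread_mutexattr_destroy(&attr);
}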

While pthread_mutex_lock has higher overhead (due to the function call and
some dispatching), any operation that dispatches to libatomic already comes
with no particular performance guarantees.
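
For context, a dispatch to libatomic is what the compiler emits when an
atomic access cannot be performed lock-free inline; a rough example, assuming
a target where a 32-byte object has no lock-free path:

// A 32-byte object is too large for inline lock-free atomics on typical
// targets, so the compiler lowers this to a call into libatomic, which is
// where the locked slow path changed by this patch runs.
struct widget { char payload[32]; };

void publish(struct widget *shared, struct widget *local) {
  __atomic_store(shared, local, __ATOMIC_SEQ_CST);
}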
Author: Logikable
Date: 2024-06-04 11:19:34 -07:00 (committed by GitHub)
Parent: c1654c38e8
Commit: b62b7a42bb

@@ -94,19 +94,12 @@ static Lock locks[SPINLOCK_COUNT]; // initialized to OS_SPINLOCK_INIT which is 0
 #else
 _Static_assert(__atomic_always_lock_free(sizeof(uintptr_t), 0),
                "Implementation assumes lock-free pointer-size cmpxchg");
-typedef _Atomic(uintptr_t) Lock;
+#include <pthread.h>
+typedef pthread_mutex_t Lock;
 /// Unlock a lock. This is a release operation.
-__inline static void unlock(Lock *l) {
-  __c11_atomic_store(l, 0, __ATOMIC_RELEASE);
-}
-/// Locks a lock. In the current implementation, this is potentially
-/// unbounded in the contended case.
-__inline static void lock(Lock *l) {
-  uintptr_t old = 0;
-  while (!__c11_atomic_compare_exchange_weak(l, &old, 1, __ATOMIC_ACQUIRE,
-                                             __ATOMIC_RELAXED))
-    old = 0;
-}
+__inline static void unlock(Lock *l) { pthread_mutex_unlock(l); }
+/// Locks a lock.
+__inline static void lock(Lock *l) { pthread_mutex_lock(l); }
 /// locks for atomic operations
 static Lock locks[SPINLOCK_COUNT];
 #endif