
Re: condvar performance in .59 vs .60



> > 3 times.
>
> This is an extreme case and probably not a very realistic pattern.  The
> worst slowdown I've seen was, I think, 5%.  Yes, some things got slower
> because the kernel is now, on SMP machines, too efficient: it releases
> the waiter so fast that it runs into the still-locked internal mutex,
> resulting in more context switches etc.

Even on UP, the thread that calls futex_wake() is preempted by the newly
woken thread.
I tried playing with nice() and other scheduling settings, to no avail.

Thread 1                          Thread 2  (blocked on condvar futex)
---------------------------------------------------------------------
lock mutex
call futex_wake(&condvar futex)
                                  preempts thread 1, returns to user
                                  space, finds the condvar mutex held:
                                  re-enters the kernel with
                                  futex_wait(&condvar mutex)
unlock mutex
call futex_wake(&condvar mutex)
                                  preempts thread 1, returns to user
                                  space
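
In user-space terms the pattern above is just the usual signal-while-holding-
the-mutex idiom; a minimal sketch (made-up names, and it crudely assumes the
waiter has already blocked before the signal arrives):

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready;

/* "Thread 2" from the timeline: sleeps in futex_wait() inside
 * pthread_cond_wait() until it is signalled. */
static void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!ready)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    return arg;
}

int main(void)
{
    pthread_t t;

    pthread_create(&t, NULL, waiter, NULL);
    sleep(1);                     /* crude: give the waiter time to block */

    /* "Thread 1" from the timeline: signal while still holding the mutex.
     * The futex_wake() inside pthread_cond_signal() lets the waiter run;
     * it finds the mutex held and re-enters the kernel with futex_wait()
     * on it, so the unlock below causes a second wake/preemption. */
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);

    pthread_join(t, NULL);
    return 0;
}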


It would be nice to have a futex_wake() that doesn't *always* preempt the
calling thread.
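
For reference, the futex_wait()/futex_wake() in the diagram are just the raw
futex(2) FUTEX_WAIT/FUTEX_WAKE operations; roughly, with made-up wrapper
names:

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

/* Sleep if *addr still contains val; return once another thread wakes us. */
static long futex_wait(int *addr, int val)
{
    return syscall(SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
}

/* Wake at most nwake threads currently blocked in futex_wait() on addr. */
static long futex_wake(int *addr, int nwake)
{
    return syscall(SYS_futex, addr, FUTEX_WAKE, nwake, NULL, NULL, 0);
}

As far as I can tell, whether the woken thread preempts the caller is a
scheduler decision, not something FUTEX_WAKE itself controls.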


>
> The method used until 0.59 wasn't reliable.  It should in theory work,
> but in some extreme cases, under very high load, it locked up.  I think
> if you'd run more threads and on a >= 4p machine you'd see lockups, too.
>
> I'll spend more time on the requeue problem later, when I have a bit
> more time.  It's more important to have the implementation correct now.
>  And as I said, the performance degradation is much lower in the
> real-world applications I've seen.

Agreed. Real-world applications don't use condvar_broadcast() with tens of
threads blocked on the condvar.
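
For the record, the extreme case discussed above has roughly this shape;
just an illustrative sketch with arbitrary names and waiter count, not the
actual test:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NWAITERS 40   /* arbitrary; "tens of threads" */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int go;

static void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!go)
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
    return arg;
}

int main(void)
{
    pthread_t t[NWAITERS];
    int i;

    for (i = 0; i < NWAITERS; i++)
        pthread_create(&t[i], NULL, waiter, NULL);
    sleep(1);                       /* crude: let the waiters block */

    /* One broadcast wakes every waiter; each of them then has to
     * re-acquire the mutex, which is where the extra wakeups and
     * context switches show up. */
    pthread_mutex_lock(&lock);
    go = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);

    for (i = 0; i < NWAITERS; i++)
        pthread_join(t[i], NULL);
    puts("done");
    return 0;
}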




