
Re: Thread starvation with mutex

Perez-Gonzalez, Inaky wrote:

From: Gary S. Robertson
Ulrich Drepper wrote:

Luke Elliott wrote:

Is there a reason why NPTL does not use this
"fair" method?

It's slow and unnecessary.

Surely this is pretty normal, expected behaviour of a mutex?

Perhaps expected by you. Anybody with thread experience wouldn't expect it.

The scheduler's dynamic priority manipulation notwithstanding, I would
indeed expect that a FIFO-queued mutex would allow Mr. Elliott's 'thread
2' to acquire the mutex once it was released... the majority of the
threaded environments with which I have worked would in fact guarantee
that, given two threads of equal priority.

That's something that, TTBOMK, is expected in real-time/embedded
systems, but as Ulrich mentions, it is also slow (as it causes the
convoy phenomenon). Our guess is that the best solution would be an implementation that can use both, depending on the application.
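What strict ownership transfer could look like can be sketched on top of standard pthread primitives with a ticket scheme. This is an illustrative toy of mine (the names and layout are hypothetical, not the RTNPTL implementation), and it deliberately pays the serialization cost that makes fair mutexes slow:

```c
#include <pthread.h>

/* Toy "fair" mutex: strict FIFO handoff via ticket numbers.
 * Illustrative sketch only -- not the NPTL or RTNPTL code. */
typedef struct {
    pthread_mutex_t lock;        /* protects the counters */
    pthread_cond_t  cond;
    unsigned long   next_ticket; /* next ticket to hand out */
    unsigned long   now_serving; /* ticket currently allowed in */
} fair_mutex_t;

void fair_mutex_init(fair_mutex_t *m)
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->cond, NULL);
    m->next_ticket = 0;
    m->now_serving = 0;
}

void fair_mutex_lock(fair_mutex_t *m)
{
    pthread_mutex_lock(&m->lock);
    unsigned long my_ticket = m->next_ticket++;
    /* Wait until it is exactly this ticket's turn: FIFO order. */
    while (my_ticket != m->now_serving)
        pthread_cond_wait(&m->cond, &m->lock);
    pthread_mutex_unlock(&m->lock);
}

void fair_mutex_unlock(fair_mutex_t *m)
{
    pthread_mutex_lock(&m->lock);
    m->now_serving++;            /* hand ownership to the next ticket */
    pthread_cond_broadcast(&m->cond);
    pthread_mutex_unlock(&m->lock);
}
```

Every fair_mutex_unlock() hands the lock to the longest-waiting ticket, so a releasing thread that immediately calls fair_mutex_lock() again goes to the back of the queue. That is the FIFO guarantee discussed above, and also the source of the convoy: every handoff forces a wakeup of (and usually a context switch to) the next waiter.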

System/library function calls should be engineered to reliably perform the operations their names and documentation suggest - otherwise they should return an error. In this case, if the programmer calls pthread_mutex_unlock, most likely the intent is to yield the mutex to the next waiting pthread, if any. It is reasonable for the programmer to expect that this will happen each time the function is called... after all, it isn't called 'pthread_mutex_unlock_unless_it_takes_too_long'. This is a SUPPORT library... if it requires inside knowledge of kernel internals and architectural quirks to predict whether or not it will perform as advertised in the man pages, then it fails the test of usability, which should be the foremost engineering criterion for such a product. First make sure the code performs as advertised every time... no surprises, no 'only if it's Tuesday and you're running a P4 in excess of 2 GHz and it's not SMP'. Then see how fast you can make it.
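The behaviour under discussion is easy to observe with a few lines of pthreads code. The harness below (all names are mine, purely illustrative) runs a tight lock/unlock loop in one thread while a second thread competes for the same default mutex; with a fast, non-handoff mutex, the releasing thread typically re-acquires the lock far more often than the waiter, though the exact split is timing-dependent and nothing is guaranteed either way:

```c
#include <pthread.h>

/* Hypothetical harness showing that pthread_mutex_unlock on a default
 * mutex does not hand ownership to a blocked waiter: the unlocking
 * thread can usually grab the lock right back. */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;
static long tight_loop_count = 0;  /* acquisitions by the tight loop */
static long waiter_count = 0;      /* acquisitions by the contender  */
static volatile int demo_done = 0;

static void *contender(void *arg)
{
    (void)arg;
    while (!demo_done) {
        pthread_mutex_lock(&demo_lock);
        waiter_count++;
        pthread_mutex_unlock(&demo_lock);
    }
    return NULL;
}

/* Run the tight lock/unlock loop for `iters` iterations while the
 * contender thread competes for the same mutex. */
void run_contention(long iters)
{
    pthread_t t;
    pthread_create(&t, NULL, contender, NULL);
    for (long i = 0; i < iters; i++) {
        pthread_mutex_lock(&demo_lock);
        tight_loop_count++;        /* unlock, then immediately relock */
        pthread_mutex_unlock(&demo_lock);
    }
    demo_done = 1;
    pthread_join(t, NULL);
}
```

Under a strict-handoff mutex the two counts would stay close to each other, since each unlock would put the releasing thread behind the waiter in the queue.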

And now, risking Ulrich's second dismissive commentary this month, some unsolicited publicity:

That's more or less what we are trying to do with the RTNPTL patch;
we use a twisted evolution of the futex idea to implement a
mutex primitive that can work either enforcing strict ownership
transfer or using the quicker version. If you are interested,
feel free to grep around the mailing list archives, or visit
http://developer.osdl.org/dev/robustmutexes; in the kernel patch
we try to explain all these issues in some detail. I'll be happy
to provide more info if desired.

(btw: the current RTNPTL patch is still not able to switch between
modes, but Boris is busy working on it).

Iñaky Pérez-González -- Not speaking for Intel -- all opinions are my own (and my fault)

Thanks for the RTNPTL pointer... at the moment I have delivery deadlines which prevent me from diving into the details of either nptl or the 2.6 scheduler code. Likewise, for the time being I am forced to use the linuxthreads library due to the lack of support for real-time scheduling in nptl... my designs rely on supervisory and service layers to support the application logic, and these refinements are not possible with a flat priority structure. I expect the nptl issues may bubble to the top of my priority list later in the year, and I'll be looking more intently at the options and alternatives then.
