[dm-devel] [git pull] device mapper changes for 4.18

Mikulas Patocka mpatocka at redhat.com
Mon Jun 4 21:53:27 UTC 2018



On Mon, 4 Jun 2018, Linus Torvalds wrote:

> On Mon, Jun 4, 2018 at 2:13 PM Mike Snitzer <snitzer at redhat.com> wrote:
> >
> > (Mikulas would like to still use swait for the dm-writecache's endio
> > thread, since endio_thread_wait only has a single waiter.)
> 
> If you already know it has a single waiter, please don't use a queue at all.
> 
> Just have the "struct task_struct *" in the waiter field, and use
> "wake_up_process()". Use NULL for "no process".

I'd be interested - does the kernel deal properly with spurious wake-ups? 
I.e. suppose that the kernel thread that I created is doing something else 
in a completely different subsystem - can I call wake_up_process on it? 
Could it confuse some unrelated code?

The commonly used synchronization primitives recheck the condition after 
wake-up, but it's hard to verify that the whole kernel does it.

> That's *much* simpler than swait(), and is a well-accepted traditional
> wake interface. It's also really really obvious.
> 
> The "there is only a single waiter" is *NOT* an excuse for using
> swait. Quite the reverse. Using swait is stupid, slow, and complex.
> And it generates code that is harder to understand.

It looked to me like the standard wait-queues suffer from feature creep 
(three flags, a large number of functions and macros, even an indirect 
call to wake something up) - that's why I used swait.
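
For comparison, the plain wait-queue variant of the same hand-off would 
look roughly like this (again only a sketch with made-up names; 
endio_wait would be a wait_queue_head_t added to the same hypothetical 
structure above):

static int example_endio_thread_wq(void *data)
{
        struct example_ctx *wc = data;

        while (!kthread_should_stop()) {
                /* wait_event_interruptible() re-checks the condition
                 * itself, but it goes through the generic wait-queue
                 * entry, its flags and an indirect call to
                 * autoremove_wake_function() on every wake-up */
                wait_event_interruptible(wc->endio_wait,
                        !list_empty(&wc->endio_list) ||
                        kthread_should_stop());
                /* ... process the entries queued on wc->endio_list ... */
        }
        return 0;
}

and the waker calls wake_up(&wc->endio_wait) after queueing the work. 
swait offers a similar set of calls with less machinery behind them, 
which is why it looked attractive to me.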

> And yes, the fact that KVM also made that completely idiotic choice in
> their apic code is not an excuse either. I have no idea why they did
> it either. It's stupid. In the kvm case, I think what happened was
> that they had a historical wait-queue model, and they just didn't
> realize that they always had just one waiter, so then they converted
> a waitqueue to a swait queue.
> 
> But if you already know there is only ever one waiter, please don't do
> that. Just avoid queues entirely.
> 
>                   Linus

Mikulas



