[dm-devel] [PATCH v1 0/5] dm: dm-user: New target that proxies BIOs to userspace

Akira Hayakawa ruby.wktk at gmail.com
Tue Dec 29 12:52:24 UTC 2020


Since this is an interesting topic to me, I started a new kernel module as
a holiday project.

There was a similar project named ABUSE back in 2009. Unfortunately, it
was rejected:
https://lwn.net/Articles/343514/

The kernel module’s mechanism is very simple:
1. When client code sends a request to /dev/abuseX, it is pushed onto an
internal queue.
2. When requests are queued, the userland storage engine (which usually
waits via epoll or some variant) is woken up and pulls one request from the
queue via ioctl (ABUSE_GET_REQ).
3. After processing, it pushes a completion back via ioctl (ABUSE_PUT_REQ).

The original code is bio-based, but naota modified it to be request-based
in 2015:
https://github.com/naota/abuse-kmod

This is my starting point.

The problem I see is that the 2015 version copies bvec pages across the
user-kernel space boundary. I think this is no good because we need to copy
2x the amount of the IO buffers.

The solution I am trying is to mmap the bvec pages into userland. Instead
of copying the bvec pages, the kernel passes only the physical page
addresses to userland, and the userland engine then mmaps each page into
its own address space. I know mmap has some overhead because it is a
system call, but the rationale is that mmap is cheaper than copying. Since
userland cannot access kernel address space directly, I believe some trick
like mmaping is inevitable for the kind of zero-copy Vitaly mentioned
above. If there is a way to access bvec pages from userland with no
overhead at all, I would like to know about it.

Here is the kernel code; the userland side is written in Rust. It has
already passed a test with badblocks.
https://github.com/akiradeveloper/userland-io/blob/20201229/abuse-kmod/src/abuse.c

- Akira

On Thu, Dec 17, 2020 at 5:35 AM Palmer Dabbelt <palmer at dabbelt.com> wrote:

> On Tue, 15 Dec 2020 22:17:06 PST (-0800), ruby.wktk at gmail.com wrote:
> > Hi, my name is Akira Hayakawa. I am maintaining an out-of-tree DM target
> > named dm-writeboost.
> >
> > Sorry to step in. But this is a very interesting topic, at least to me.
> >
> > I have been looking for something like dm-user because I believe we
> > should be able to implement virtual block devices in the Rust language.
> >
> > I know proxying IO requests to userland always causes some overhead, but
> > for some types of device where performance doesn't matter, or for
> > research prototyping or pseudo devices for testing, this way should be
> > developed. Of course, implementation in Rust will give us opportunities
> > to develop more complicated software in high quality.
> >
> > I noticed this thread a few days ago, then started to prototype this
> > library: https://github.com/akiradeveloper/userland-io
> >
> > It is what I want, but the transport is still NBD, which I don't like so
> > much. If dm-user is available, I will implement a transport using
> > dm-user.
>
> Great, I'm glad to hear that.  Obviously this is still in the early days
> and we're talking about high-level ABI design here, so things are almost
> certainly going to change, but it's always good to have people pushing on
> stuff.
>
> Just be warned: we've only had two people write userspaces for this (one of
> which was me, and all that is test code) so I'd be shocked if you manage to
> avoid running into bugs.
>
> >
> > - Akira
> >
> > On Tue, Dec 15, 2020 at 7:00 PM Palmer Dabbelt <palmer at dabbelt.com> wrote:
> >
> >> On Thu, 10 Dec 2020 09:03:21 PST (-0800), josef at toxicpanda.com wrote:
> >> > On 12/9/20 10:38 PM, Bart Van Assche wrote:
> >> >> On 12/7/20 10:55 AM, Palmer Dabbelt wrote:
> >> >>> All in all, I've found it a bit hard to figure out what sort of
> >> >>> interest people have in dm-user: when I bring this up I seem to run
> >> >>> into people who've done similar things before and are vaguely
> >> >>> interested, but certainly nobody is chomping at the bit.  I'm
> >> >>> sending it out in this early state to try and figure out if it's
> >> >>> interesting enough to keep going.
> >> >>
> >> >> Cc-ing Josef and Mike since their nbd contributions make me wonder
> >> >> whether this new driver could be useful to their use cases?
> >> >>
> >> >
> >> > Sorry, gmail+imap sucks and I can't get my email client to get at the
> >> > original thread.  However, here is my take.
> >>
> >> and I guess I then have to apologize for missing your email ;).
> >> Hopefully that was the problem, but who knows.
> >>
> >> > 1) The advantages of dm-user over NBD that you listed aren't actually
> >> > problems for NBD.  We have NBD working in production where you can
> >> > hand off the sockets for the server without ending in timeouts; it was
> >> > actually the main reason we wrote our own server, so we could use the
> >> > FD transfer stuff to restart the server without impacting any clients
> >> > that had the device in use.
> >>
> >> OK.  So you just send the FD around using one of the standard
> >> mechanisms to orchestrate the handoff?  I guess that might work for our
> >> use case, assuming whatever the security side of things was doing was
> >> OK with the old FD.  TBH I'm not sure how all that works, and while we
> >> thought about doing that sort of transfer scheme we decided to just
> >> open it again -- not sure how far we were down the dm-user rabbit hole
> >> at that point, though, as this sort of arose out of some other ideas.
> >>
> >> > 2) The extra copy is a big deal; in fact, we already have too many
> >> > copies in our existing NBD setup and are actively looking for ways to
> >> > avoid those.
> >> >
> >> > Don't take this as saying I don't think dm-user is a good idea, but I
> >> > think at the very least it should start with the very best we have to
> >> > offer, starting with as few copies as possible.
> >>
> >> I was really expecting someone to say that.  It does seem kind of
> >> silly to build out the new interface but not go all the way to a ring
> >> buffer.  We just didn't really have any way to justify the extra
> >> complexity, as our use cases aren't that high performance.  I kind of
> >> like to have benchmarks for this sort of thing, though, and I didn't
> >> have anyone who had bothered avoiding the last copy to compare against.
> >>
> >> > If you are using it currently in production then cool, there's
> >> > clearly a use case for it.  Personally, as I get older and grouchier
> >> > I want fewer things in the kernel, so if this enables us to
> >> > eventually do everything NBD-related in userspace with no performance
> >> > drop then I'd be down.  I don't think you need to make that your
> >> > primary goal, but at least polishing this up so it could potentially
> >> > be abused in the future would make it more compelling for merging.
> >> > Thanks,
> >>
> >> Ya, it's in Android already and we'll be shipping it as part of the new
> >> OTA flow for the next release.  The rules on deprecation are a bit
> >> different over there, though, so it's not like we're wed to it.  The
> >> whole point of bringing this up here was to try and get something
> >> usable by everyone, and while I'd eventually like to get whatever's in
> >> Android into the kernel proper, we'd really planned on supporting an
> >> extra Android-only ABI for a cycle at least.
> >>
> >> I'm kind of inclined to take a crack at the extra copy, to at least
> >> see if building something that eliminates it is viable.  I'm not
> >> really sure if it is (or at least, if it'll net us a meaningful amount
> >> of performance), but it'd at least be interesting to try.
> >>
> >> It'd be nice to have some benchmark target, though, as otherwise this
> >> stuff hangs on forever.  My workloads are in selftests later on in the
> >> patch set, but I'm essentially using tmpfs as a baseline to compare
> >> against ext4+dm-user with some FIO examples as workloads.  Our early
> >> benchmark numbers indicated this was way faster than we needed, so I
> >> didn't even bother putting together a proper system to run on, so I
> >> don't really have any meaningful numbers there.  Is there an NBD server
> >> that's fast that I should be comparing against?
> >>
> >> I haven't gotten a whole lot of feedback, so I'm inclined to at least
> >> have some reasonable performance numbers before bothering with a v2.
> >>
> >> --
> >> dm-devel mailing list
> >> dm-devel at redhat.com
> >> https://www.redhat.com/mailman/listinfo/dm-devel
>


-- 
Akira Hayakawa

