[dm-devel] [RFC, PATCH] remove signal handling from dm-io.c, sync_io()
Kevin Corry
kevcorry at us.ibm.com
Fri Jun 11 14:09:11 UTC 2004
On Friday 11 June 2004 11:48 am, Kevin Corry wrote:
> Upon further review of dm-io.c::sync_io(), I have a couple more comments.
>
> First, I'm not certain whether the signal handling needs to be there or
> not. Joe, perhaps you can comment on that. If we don't need it, then we can
> use Dave's suggestion and just get rid of the signal_pending() call. If we
> do need it, then instead of declaring a struct io on the stack in
> sync_io(), we need to allocate one (async_io() already does this using a
> mempool) and use dec_count() to deallocate it. This ought to prevent the
> stack corruption Dave described. I'll put together a patch in a bit to
> demonstrate how this would work.
After talking with Alasdair a bit, it seems there may still be a reason for
keeping the interruptible wait in sync_io(). The point he brought up was
that if you are using dm-io to do I/O to another DM device, and that DM device
is suspended, then the I/O may never complete if the device is never resumed
(for whatever reason). In that case, you'd like to be able to interrupt the
process that's waiting for that I/O so it isn't stuck in D-state forever.
So this means we need to allocate a struct io from the mempool in sync_io()
instead of declaring it on the stack. Here's a patch that should accomplish
this. It ain't pretty, but I think it gets the job done. It requires some
changes in dec_count(), and I've also replaced the io_schedule() loop with a
call to wait_event_interruptible(). Now everyone take a look and tell me how
ugly this is. :) And if anyone can think of something more elegant, please
let me know.
--
Kevin Corry
kevcorry at us.ibm.com
http://evms.sourceforge.net/
--- diff/drivers/md/dm-io.c 2004-06-11 12:16:24.729145072 +0000
+++ source/drivers/md/dm-io.c 2004-06-11 13:56:02.626366560 +0000
@@ -317,17 +317,19 @@
 		set_bit(region, &io->error);
 
 	if (atomic_dec_and_test(&io->count)) {
-		if (io->sleeper)
-			wake_up_process(io->sleeper);
-
-		else {
-			int r = io->error;
-			io_notify_fn fn = io->callback;
-			void *context = io->context;
+		int r = io->error;
+		io_notify_fn fn = io->callback;
+		void *context = io->context;
 
-			mempool_free(io, _io_pool);
+		mempool_free(io, _io_pool);
+
+		if (fn) {
 			fn(r, context);
 		}
+	} else {
+		struct task_struct *sleeper = io->sleeper;
+		if (sleeper)
+			wake_up_process(sleeper);
 	}
 }
@@ -535,33 +537,34 @@
 static int sync_io(unsigned int num_regions, struct io_region *where,
 		   int rw, struct dpages *dp, unsigned long *error_bits)
 {
-	struct io io;
+	DECLARE_WAIT_QUEUE_HEAD(wq);
+	struct io *io;
+	int rc;
 
 	if (num_regions > 1 && rw != WRITE) {
 		return -EIO;
 	}
 
-	io.error = 0;
-	atomic_set(&io.count, 1); /* see dispatch_io() */
-	io.sleeper = current;
-
-	dispatch_io(rw, num_regions, where, dp, &io, 1);
-
-	while (1) {
-		set_current_state(TASK_UNINTERRUPTIBLE);
-
-		if (!atomic_read(&io.count) || signal_pending(current))
-			break;
-
-		io_schedule();
-	}
-	set_current_state(TASK_RUNNING);
+	io = mempool_alloc(_io_pool, GFP_NOIO);
+	memset(io, 0, sizeof(*io));
+	atomic_set(&io->count, 2); /* One for us, one for dispatch_io(). */
+	io->sleeper = current;
+
+	dispatch_io(rw, num_regions, where, dp, io, 1);
+
+	wait_event_interruptible(wq, (atomic_read(&io->count) == 1));
 
-	if (atomic_read(&io.count))
-		return -EINTR;
+	if (atomic_read(&io->count) > 1) {
+		rc = -EINTR;
+	} else {
+		*error_bits = io->error;
+		rc = io->error ? -EIO : 0;
+	}
 
-	*error_bits = io.error;
-	return io.error ? -EIO : 0;
+	io->sleeper = NULL;
+	dec_count(io, 0, rc);
+
+	return rc;
 }
 
 static int async_io(unsigned int num_regions, struct io_region *where, int rw,