[dm-devel] patch to dm-emc.c

Mike Christie michaelc at cs.wisc.edu
Thu Aug 10 21:27:39 UTC 2006


Mike Christie wrote:
> egoggin at emc.com wrote:
>> Thanks Mike.  Changes to reflect these and other comments from
>> Alasdair and Mike Anderson will be delayed since I'm on vacation
>> for a week and a half or so.
>>
> 
> no problem.
> 
>>>> +	struct request req;
>>> You should not have to do this, should you? The queue has a mempool of
>>> requests. The only way it could be exhausted is if some app is hogging
>>> them by doing SG_IO (since dm bd_claims the device), you do not use
>>> GFP_WAIT in your allocation, and you hit some nasty case where your
>>> process is starved because you keep missing the chance to allocate a
>>> request that is freed and put back in the pool. If you are that unlucky
>>> and that does happen, you would have problems elsewhere too, so maybe a
>>> generic solution to make sure no one gets starved would be better.
>> I think the use case is not that there are no more request structures
>> to allocate but that enough write requests (think buf sync/flush of
>> the mapped device) have piled up on the target device's queue waiting
>> for a pg_init that the queue's write request threshold gets met.  Any
>> further attempt to allocate a request (like one to service the pg_init)
> 
> I think I am missing what a target device queue is. When I wrote my
> comment I was thinking about the kernel code, where there is a per-device
> request_queue which allocates requests from a mempool and has a write
> threshold. So even if for some reason userspace were to do something
> silly like send down write_threshold+1 requests, so that anyone else
> trying to allocate a request would be put to sleep until a request
> completed, I do not see how a request would not eventually complete and
> let us eventually allocate a request from the mempool.
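
To be concrete about what I had in mind: the allocation would go through
the device's request_queue mempool, e.g. via blk_get_request(). A rough
sketch of that path, not the actual dm-emc code (the function name and the
timeout value are made up for illustration):

#include <linux/blkdev.h>

/*
 * Rough sketch only: allocate the pg_init request from the target
 * device's request_queue mempool.  With a blocking gfp_mask the caller
 * sleeps in get_request_wait() until a completed request is freed back
 * to the pool, instead of failing the allocation.
 */
static struct request *sketch_get_pg_init_req(struct request_queue *q)
{
        struct request *rq;

        /* __GFP_WAIT: sleep until a request is available */
        rq = blk_get_request(q, WRITE, __GFP_WAIT);
        if (!rq)
                return NULL;

        rq->timeout = 60 * HZ;  /* arbitrary value for the sketch */
        return rq;
}

As far as I can tell, with __GFP_WAIT that call never actually returns
NULL; it just sleeps until the pool drains.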

That is, I am assuming you mean that we block in the request allocation
code which checks the threshold. So I thought you were saying we hit this
bit in get_request_wait():

        if (rl->count[rw]+1 >= queue_congestion_on_threshold(q)) {
                if (rl->count[rw]+1 >= q->nr_requests) {
                        ioc = current_io_context(GFP_ATOMIC);
                        /*
                         * The queue will fill after this allocation, so set
                         * it as full, and mark this process as "batching".
                         * This process will be allowed to complete a batch
                         * of requests, others will be blocked.
                         */
                        if (!blk_queue_full(q, rw)) {
                                ioc_set_batching(q, ioc);
                                blk_set_queue_full(q, rw);
                        } else {
                                if (may_queue != ELV_MQUEUE_MUST
                                                && !ioc_batching(q, ioc)) {
                                        /*
                                         * The queue is full and the
                                         * allocating process is not a
                                         * "batcher", and not exempted by
                                         * the IO scheduler
                                         */
                                        goto out;
                                }
                        }
                }
                set_queue_congested(q, rw);
        }




But I thought a request should eventually complete, so we would be woken
up in get_request_wait, try to allocate a request again, and by then the
write request count should have gone down enough for us to proceed.
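
Roughly, the loop I mean is the one in get_request_wait(); paraphrasing
from memory (this is not verbatim ll_rw_blk.c, and the function name below
is made up, just to show the shape of the retry loop):

/*
 * Paraphrased sketch of the get_request_wait() retry loop: the caller
 * sleeps on the request_list waitqueue and is woken when a completed
 * request is freed back to the mempool, then simply retries the
 * allocation via the internal get_request() helper.
 */
static struct request *sketch_request_wait_loop(struct request_queue *q,
                                                int rw, struct bio *bio)
{
        struct request_list *rl = &q->rq;
        struct request *rq;

        rq = get_request(q, rw, bio, GFP_NOIO);
        while (!rq) {
                DEFINE_WAIT(wait);

                prepare_to_wait_exclusive(&rl->wait[rw], &wait,
                                          TASK_UNINTERRUPTIBLE);
                /* a completing request wakes this queue when it is freed */
                io_schedule();
                finish_wait(&rl->wait[rw], &wait);

                /* the count for this rw direction should have dropped */
                rq = get_request(q, rw, bio, GFP_NOIO);
        }
        return rq;
}

So unless the outstanding writes never complete at all, the sleeper gets a
request eventually.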



