[dm-devel] I/O block when removing thin device on the same pool

Dennis Yang dennisyang at qnap.com
Mon Feb 1 03:52:25 UTC 2016


Hi,

2016-01-30 0:05 GMT+08:00 Joe Thornber <thornber at redhat.com>:

> On Fri, Jan 29, 2016 at 07:01:44PM +0800, Dennis Yang wrote:
> > I had tried defining MAX_DECS as 1, 16, and 8192, and here is the
> > throughput I got.
> > When #define MAX_DECS 1, throughput drops from 3.2GB/s to around
> > 800 ~ 950 MB/s.
> > When #define MAX_DECS 16, throughput drops from 3.2GB/s to around
> > 150 ~ 400 MB/s.
> > When #define MAX_DECS 8192, the I/O blocks until deletion is done.
> >
> > This throughput was gathered by writing to a newly created thin device,
> > which means lots of provisioning takes place. So it seems that the more
> > fine-grained the lock we use here, the higher the throughput. Is there
> > any concern with setting MAX_DECS to 1 for production?
>
> Does the time taken to remove the thin device change as you drop it to one?
>
> - Joe
>

Not that I am aware of, but I redid the experiment and the results are
listed below.

#define MAX_DECS 1
Deleting a fully-mapped 10TB device without concurrent I/O takes 49 seconds.
Deleting a fully-mapped 10TB device with concurrent I/O to the pool takes 44 seconds.

#define MAX_DECS 16
Deleting a fully-mapped 10TB device without concurrent I/O takes 47 seconds.
Deleting a fully-mapped 10TB device with concurrent I/O to the pool takes 46 seconds.

#define MAX_DECS 8192
Deleting a fully-mapped 10TB device without concurrent I/O takes 47 seconds.
Deleting a fully-mapped 10TB device with concurrent I/O to the pool takes 50 seconds.
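
In other words, the total number of refcount decrements is fixed by the
device size, so the overall deletion time barely moves; MAX_DECS only
controls how long the pool-wide lock is held per batch, and therefore how
long concurrent I/O can stall behind it. To make the pattern concrete,
here is a minimal userspace sketch of that batching tradeoff. It is not
the actual dm-thin kernel code; every name in it (pool_lock,
delete_thin_device, provision_io, refcount) is hypothetical and only
illustrates the locking pattern under discussion.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_DECS  16            /* decrements per lock acquisition */
#define NR_BLOCKS 100000L       /* blocks mapped by the thin device */

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
static long refcount[NR_BLOCKS];
static atomic_bool deleting = true;

/* Delete path: drop every block's refcount, MAX_DECS at a time. */
static void *delete_thin_device(void *arg)
{
    long b = 0;

    (void)arg;
    while (b < NR_BLOCKS) {
        pthread_mutex_lock(&pool_lock);
        for (int i = 0; i < MAX_DECS && b < NR_BLOCKS; i++, b++)
            refcount[b]--;      /* one refcount "dec" */
        pthread_mutex_unlock(&pool_lock);
        /* Lock dropped: provisioning can sneak in here. */
    }
    atomic_store(&deleting, false);
    return NULL;
}

/* I/O path: each provision briefly needs the same pool-wide lock. */
static void *provision_io(void *arg)
{
    long provisions = 0;

    (void)arg;
    while (atomic_load(&deleting)) {
        pthread_mutex_lock(&pool_lock);
        provisions++;           /* stands in for mapping a new block */
        pthread_mutex_unlock(&pool_lock);
    }
    printf("provisions completed during delete: %ld\n", provisions);
    return NULL;
}

int main(void)
{
    pthread_t del, io;

    pthread_create(&io, NULL, provision_io, NULL);
    pthread_create(&del, NULL, delete_thin_device, NULL);
    pthread_join(del, NULL);
    pthread_join(io, NULL);
    return 0;
}

With MAX_DECS 1 the lock is released after every decrement, so the
provisioning thread interleaves freely (high I/O throughput); with a huge
batch the delete path effectively holds the lock until it finishes, which
matches the "I/O blocks until deletion is done" behaviour above.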

Thanks,
Dennis

-- 
Dennis Yang
QNAP Systems, Inc.
Skype: qnap.dennis.yang
Email: dennisyang at qnap.com
Tel: (+886)-2-2393-5152 ext. 15018
Address: 13F., No.56, Sec. 1, Xinsheng S. Rd., Zhongzheng Dist., Taipei
City, Taiwan