
Re: [dm-devel] Is thin provisioning still experimental?

Thank you for that insight, I appreciate it.

Can you elaborate on your concern related to running out of space?

Assuming the metadata device always has space, is it unsafe to let the data device for the thin pool run out of space if queue_if_no_space is set? Would it not be safe at the block IO level?
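For what it's worth, when a pool with queueing behavior does exhaust its data device, `dmsetup status` reports the pool mode as "out_of_data_space" (and note the dm_thin_pool.no_space_timeout module parameter, which by default errors queued IO after 60 seconds unless set to 0). A minimal sketch of checking that mode field, using a hypothetical sample status line rather than a live pool:

```shell
# Hypothetical sample output of `dmsetup status <pool>` for a thin-pool
# whose data device is full (field layout per the thin-provisioning docs);
# the pool name and block counts here are made up for illustration.
status='0 419430400 thin-pool 1 100/1000 500000/500000 - out_of_data_space discard_passdown queue_if_no_space - 1024'

# Field 8 is the pool mode: rw, ro, or out_of_data_space.
mode=$(echo "$status" | awk '{print $8}')
echo "$mode"
```

On a live system you would pipe the real `dmsetup status <pool>` output through the same awk, and alert (or extend the data device with `dmsetup reload`/`lvextend`) when the mode is anything other than rw.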

No idea what filesystem, if any, you intend to run, but XFS has the ability to discontinue retries for certain error conditions.  I highly recommend you enable that for ENOSPC; otherwise you _will_ see the kernel block indefinitely in XFS if thinp runs out of space underneath it.
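The knob in question lives in sysfs (see the error-handling section of the kernel's XFS admin guide). A sketch, assuming a hypothetical device name that you would replace with the entry under /sys/fs/xfs/ for your filesystem:

```shell
# "sda1" is a placeholder; list /sys/fs/xfs/ to find your device.
dev=sda1

# max_retries=0 makes XFS give up immediately on ENOSPC metadata
# errors and shut down, instead of retrying forever and hanging
# when the thin pool underneath runs out of space.
echo 0 > /sys/fs/xfs/$dev/error/metadata/ENOSPC/max_retries
```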

My use case involves doing block-level IO for virtual machines with KVM... so there's virtualization in between the thin device and whatever the VM is doing. In that scenario, I would expect the IO requests to just hang until the thin pool's data device is given more space, and queued IO from the VM would simply wait for that. Maybe what you're describing with XFS wouldn't be an issue in that setup, since XFS would be running inside the VM.

On Mon, Jul 23, 2018 at 7:07 AM, Mike Snitzer <snitzer redhat com> wrote:
On Mon, Jul 23 2018 at 10:00am -0400,
Mike Snitzer <snitzer redhat com> wrote:

> On Mon, Jul 23 2018 at  1:06am -0400,
> Drew Hastings <dhastings crucialwebhost com> wrote:
> >    I love all of the work you guys do @dm-devel . Thanks for taking the time
> >    to read this.
> >    I would like to use thin provisioning targets in production, but it's hard
> >    to ignore the warning in the documentation. It seems like, with an
> >    understanding of how thin provisioning works, it should be safe to use.
> It is stale.  I just committed this update that'll go upstream for the
> 4.19 merge window, see:
> https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.19&id=f88a3f746ff0047c92e8312646247b08264daf35
> >    If the metadata and data device for the thin pool have enough space and
> >    are both error free, the kernel has plenty of free RAM, block sizes are
> >    set large enough to never run into performance issues (64 MiB), all of the
> >    underlying hardware is redundant on high performance NVME (no worries of
> >    fragmentation of data volume)... is it still unsafe for production? If so,
> >    can you shed some light on why that is?
> It is safe.  You do just want to make sure to not run out of space.  We
> now handle that event favorably but it is best to tempt fate.

I meant: "... but it is best to _not_ tempt fate."
