[linux-lvm] Should I expect snapshot origin LV's to be 10x slower?

Kottaridis, Chris chris.kottaridis at windriver.com
Tue May 15 19:32:48 UTC 2007


 
>By putting it onto a separate device, you might see a 5x slowdown
>instead of 10x, still because of disk seek activity.
>
>Do not use the current snapshot on a write-intense LV, and adjusting
>your LV chunk size based on your application workload can remedy it
>a bit.

I am a little curious about the warning here not to use the current
snapshot on a write-intensive logical volume. Are you saying that if the
original volume you are taking a snapshot of has intensive writes,
there are problems snapshotting it?

I have a situation where there are lots of writes going to an LV, and
I want to make a snapshot and back it up during the storm of writes. At
the moment I seem to be able to make the snapshot, mount it, and tar
off of it what I want. I unmount it, but then the lvremove hangs. I'm
still testing things to try to narrow things down. But are there known
issues with snapshots of volumes that are experiencing a high rate of
writes?
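For context, the backup sequence looks roughly like this. This is just
a sketch of the workflow, not my exact script; the volume group, LV,
snapshot, and mount point names below are placeholders.

```shell
#!/bin/sh
# Sketch of the snapshot-backup workflow described above.
# /dev/vg0/data stands in for the busy origin LV; all names here
# are illustrative only.

# Create a snapshot with enough COW space to absorb the write storm.
lvcreate --snapshot --size 2G --name databackup /dev/vg0/data

# Mount the snapshot read-only and archive from it.
mkdir -p /mnt/databackup
mount -o ro /dev/vg0/databackup /mnt/databackup
tar -czf /backup/data.tar.gz -C /mnt/databackup .

# Tear down. This lvremove is the step that hangs for me while the
# origin is still taking heavy writes.
umount /mnt/databackup
lvremove -f /dev/vg0/databackup
```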

Thanks

Chris Kottaridis
Senior Engineer
Wind River Systems
719-522-9786

-----Original Message-----
From: linux-lvm-bounces at redhat.com [mailto:linux-lvm-bounces at redhat.com]
On Behalf Of Ming Zhang
Sent: Thursday, May 10, 2007 12:36 PM
To: LVM general discussion and development
Subject: Re: [linux-lvm] Should I expect snapshot origin LV's to be 10x
slower?

On Thu, 2007-05-10 at 10:33 -0400, Greg Freemyer wrote:
> On 5/10/07, Alex Owen <r.alex.owen at gmail.com> wrote:
> > Hello,
> > I have just been making some snapshot performance benchmarks on a 
> > Debian Etch system.
> > Kernel:  2.6.18-4-686 (2.6.18.dfsg.1-12etch1)
> > dmsetup: 1.02.08-1
> > lvm2: 2.02.06-4
> >
> > I have been using commands of the form:
> >   time dd if=/dev/zero of=/dev/volgroup/test bs=1M count=100
> > to get speeds for copying to an LVM device both WITH and WITHOUT a
> > single snapshot.
> >
> > It seems that writes take >=10 times longer the first time a newly
> > snapshotted origin device is written to.
> >
> > I was expecting something like a 2x or 3x performance loss, as 1
> > physical read and 2 physical writes must occur for a single logical
> > write. I was NOT expecting there to be a 10x overhead. As I move to
> > larger devices (bs=1M count=1000), the 10x figure rises to nearer
> > 20x. This is also true on mounted origin LVs.
> >
> > Has anyone else benchmarked this? Is this normal?
> >
> > Thanks for any feedback
> > Alex Owen
> 
> I always ensure my snapshots are on physically separate drives from
> my origin.  If they are on the same drive, I'm not surprised you're
> having speed issues.  You are significantly increasing the amount of
> disk seek activity.  Having them on separate drives should be much
> better.

By putting it onto a separate device, you might see a 5x slowdown
instead of 10x, still because of disk seek activity.

Do not use the current snapshot on a write-intense LV, and adjusting
your LV chunk size based on your application workload can remedy it a
bit.
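For example, the snapshot chunk size can be set when the snapshot is
created, via lvcreate's -c/--chunksize option. The size and device
names below are illustrative; pick a chunk size that matches your
workload's typical write size.

```shell
# Create a snapshot with a 256 KiB chunk size. Larger chunks mean
# fewer, bigger copy-on-write operations, which can help sequential
# write workloads; /dev/vg0/data and "datasnap" are placeholders.
lvcreate --snapshot --chunksize 256k --size 2G \
         --name datasnap /dev/vg0/data

# Check the chunk size the snapshot was actually created with.
lvdisplay /dev/vg0/datasnap
```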



> 
> (FYI: It has been a while since I benchmarked, so you may still have 
> problems.)
> 
> Greg

_______________________________________________
linux-lvm mailing list
linux-lvm at redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



