Hibernate with LVM Swap

Peter Jones pjones at redhat.com
Tue Jun 13 21:23:30 UTC 2006


On Tue, 2006-06-13 at 13:36 -0600, Lamont R. Peterson wrote:

> > Moreover, there isn't a compelling reason *not* to do it.  This
> > hypothetical performance-while-swapping argument you've got just isn't
> > reality.  You're not CPU bound when you're swapping.  You're I/O
> > limited, and most of the limit isn't time on the host bus, it's disk
> > seeks.  I really doubt if LVM will make any significant difference --
> > measurable difference, that is, much less noticeable by a human -- at
> > all.
> 
> Of course.  Yes, swapping is not memory or CPU bound, it is I/O bound, as you 
> state.  However, I wasn't talking about LVM code overhead, I was talking 
> about drive seek overhead in heavy swapping situations.
> 
> If you are doing heavy swapping with a swap partition, the on-disk storage is 
> contiguous and, therefore, you will have less distance to travel when 
> seeking.

That's just as true on LVM.  If I build an LV to put swap on that's
made up of several PVs which aren't contiguous, then sure, it's not
contiguous.  At the same time, if I make 7 partitions that are 100M each
and activate them all as swap, I've got 7 discontiguous areas.
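
For what it's worth, LVM will happily hand you a contiguous swap LV if
you ask for one.  A minimal sketch, assuming a volume group named vg0
with a large enough run of free extents (the VG name and size are just
placeholders, not anything from the thread):

    # Create a 2G LV for swap, forcing the allocator to use contiguous
    # extents.  This fails rather than fragmenting if no contiguous run
    # of that size is free.
    lvcreate --size 2G --name swap --alloc contiguous vg0

    # Format and enable it like any other swap device.
    mkswap /dev/vg0/swap
    swapon /dev/vg0/swap

With that allocation policy the LV ends up as a single run of extents,
which puts you in exactly the same position as a dedicated partition.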

In both cases it's "if I set my machine up stupidly, swapping sucks even
more than normal".

> On LVM, the PEs could be spread around the disk more widely (i.e. 
> non-contiguous), so the heads will have farther to go.  If you're *really* 
> lucky, your heavy swapping pattern will let you alternate (or rotate, if that 
> word fits better) around each of the disks with PEs backing your swap LVs' 
> LEs.  But the likelihood of that working out is very small.

It's about as small as actually getting into this contrived situation in
the first place...
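
And if anyone actually wants to know whether their swap LV's extents are
scattered, LVM will show the mapping directly.  A quick sketch (vg0/swap
is again a placeholder name):

    # One row per segment; a contiguous LV shows a single segment on one device.
    lvs --segments -o +devices vg0/swap

    # Or dump the full LE-to-PE mapping for the LV.
    lvdisplay --maps /dev/vg0/swap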

-- 
  Peter



