[Libguestfs] Possible to speed up guestmount?

Kashyap Chamarthy kchamart at redhat.com
Fri Feb 7 06:14:54 UTC 2014


On Thu, Feb 06, 2014 at 05:02:48PM +0000, Richard W.M. Jones wrote:
> On Thu, Feb 06, 2014 at 02:53:05AM +0000, Patrick Schleizer wrote:
> > Hi,
> > 
> > Apparently,
> > 
> >     guestmount -o allow_other -a "/path/to/raw_file" -m /dev/sda1
> > "/path/to/mountfolder"
> > 
> > is much slower than
> > 
> >     kpartx -av "/path/to/raw_file"
> >     mount /dev/mapper/loop0p1 /path/to/mountfolder
> > 
> > (Doing lots of read/write inside the image.)
> 
> For general performance tips, see this page (I think you've seen it
> already):
> 
> http://libguestfs.org/guestfs-performance.1.html
> 
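[As a quick sanity check, the performance page above suggests timing a
bare appliance launch against a null disk as a baseline; a sketch,
assuming guestfish is on the PATH:

```shell
# Measure the raw appliance start-up cost with no real disk attached.
# Under TCG this will be dramatically slower than under KVM, which is
# a quick way to confirm which acceleration is actually in use.
time guestfish -a /dev/null run
```
]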
> > I thought guestmount "only" scripted the above. Seems I was wrong about that.
> 
> Guestmount provides a FUSE interface to the libguestfs API.
> 
> > I am currently using libguestfs 1.18.1-1 (because it comes with Debian
> > wheezy/stable) and read the FAQ [1] [2], but still have questions.
> > 
> > Seems my version is higher than 1.13.16, so far so good. I am using
> > guestmount inside a virtual machine (to avoid damaging my host with
> > my own experiments).
> 
> However, yes, the real problem here, as you've diagnosed, is that
> you're using TCG (software emulation) instead of baremetal hardware
> virtualization.
> 
> There are likely two (or three) things you can do:
> 
> (1) Use the libguestfs API directly instead of FUSE (e.g. guestfish or
> a language binding like Sys::Guestfs).  This cuts out all the FUSE
> layers, and should be quite a lot faster.
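[A sketch of what (1) could look like with guestfish, reusing the
placeholder path from above; `ll` and `write` are standard guestfish
commands, and the file written is purely illustrative:

```shell
# Batch-edit the image through the libguestfs API, with no FUSE layer.
# -m /dev/sda1 mounts the first partition inside the appliance, as in
# the guestmount invocation above.
guestfish --rw -a /path/to/raw_file -m /dev/sda1 <<'EOF'
# List the files in the guest's root directory.
ll /
# Create a file inside the image.
write /example.txt "hello from guestfish"
EOF
```

The appliance is started once for the whole script, so many operations
amortize the boot cost that guestmount pays as well.]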
> 
> (2) Use UML instead of qemu.  This requires you to update your version
> of libguestfs to something more recent (ideally 1.24), and to follow
> the instructions here:
> 
>   http://libguestfs.org/guestfs.3.html#user-mode-linux-backend
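[A minimal sketch of selecting the backend for (2), assuming
libguestfs >= 1.24; building and locating the UML kernel binary is
covered on the linked page:

```shell
# Ask libguestfs to use the User-Mode Linux backend instead of qemu.
export LIBGUESTFS_BACKEND=uml
# The UML 'linux'/'vmlinux' kernel binary must also be configured as
# described in the guestfs(3) section linked above.
guestfish -a /path/to/raw_file -m /dev/sda1
```
]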
> 
> UML might be "uncool", but the UML backend is fully tested for each
> release and supported by us.  UML has the advantage that its
> performance is reasonably consistent whether it runs on baremetal or
> under virtualization.
> 
> (3) Run libguestfs on baremetal (!)
> 
> I'd love to say that you could use nested virtualization to get
> baremetal-like virt performance in a guest, but unfortunately it
> doesn't work well -- see recent discussion on this list.

Rich, we can still go ahead and suggest testing nested virt (bonus if
they could test on Intel) and ask them to report their findings if they
have time :-)

FWIW, I'm trying to move all of my development environment to nested
virt (Intel) to make it part of my workflow, hoping to hit more corner
cases.

Thanks for your detailed (as usual) comments here, I learnt a little bit
more about UML.

-- 
/kashyap
