[libvirt] [PATCH 1/8] snapshots: Avoid term 'checkpoint' for full system snapshot

Eric Blake eblake at redhat.com
Thu Jul 26 21:38:20 UTC 2018


On 06/26/2018 08:27 PM, Eric Blake wrote:
>>
>> Let's walk through an example:
>>
>> T1
>> - user creates a new VM marked for incremental backup
>> - system creates a base volume (S1)
>> - system creates a new dirty bitmap (B1)
> 
> Why do you need a dirty bitmap on a brand new system?  By definition, 
> if the VM is brand new, every sector that the guest touches will be 
> part of the first incremental backup, which is no different from 
> taking a full backup of every sector.  But if it makes life easier by 
> following consistent patterns, I also don't see a problem with 
> creating a first checkpoint at the time an image is first created (my 
> API proposal would allow you to create a domain, start it in the 
> paused state, create a checkpoint, and then resume the guest so that 
> it can start executing).
> 
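
(As a concrete illustration of that last point, the flow under my 
proposal would look something like the following; the checkpoint 
commands are a hypothetical virsh spelling, since that part of the API 
does not exist yet:)

  # start the new domain paused, so no guest writes happen yet
  virsh create --paused vm.xml
  # hypothetical command creating the first checkpoint (bitmap B1)
  virsh checkpoint-create vm checkpoint1.xml
  # only now let the guest run; B1 tracks everything from this point
  virsh resume vm
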
>>
>> T2
>> - user creates a snapshot
>> - the dirty bitmap in the original snapshot is deactivated (B1)
>> - system creates a new snapshot (S2)
>> - system starts a new dirty bitmap in the new snapshot (B2)
> 
> I'm still worried that interactions between snapshots (where the backing 
> chain grows) and bitmaps may present interesting challenges.  But what 
> you are describing here is that the act of creating a snapshot (to 
> enlarge the backing chain) also has the effect of creating a snapshot (a 

That should read "also has the effect of creating a checkpoint".

Except that I'm not quite sure how best to handle the interaction 
between snapshots and checkpoints using existing qemu primitives.  Right 
now, I'm leaning back to the idea that if you have an external backing 
file (that is, the act of creating a snapshot expanded the disk chain 
from 'S1' into 'S1 <- S2'), then an incremental backup covering just the 
disk changes since that point in time is the same as a "sync":"top" copy 
of the just-created S2 image; no bitmap is needed to track what needs 
copying, which works well when qemu itself writes out the backup file. 
But since we are also talking about allowing third-party backups (where 
we provide an NBD export and the client can query which portions are 
dirty), using the snapshot as the start point in time would indeed 
require either that we have a bitmap to expose (that is, we create a 
bitmap as part of the same transaction that creates the external 
snapshot file), or that we can resynthesize a bitmap from the clusters 
allocated in S2 at the time we start the backup operation (an operation 
that I don't see in qemu right now).  And if we DO want external 
snapshots to automatically behave as checkpoints for use by incremental 
backups, that makes me wonder whether I will eventually need to enhance 
the existing virDomainSnapshotCreateXML() to also accept XML describing 
a checkpoint to be created simultaneously with the snapshot (the way my 
proposal already allows creating a checkpoint simultaneously with 
virDomainBackupBegin()).
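
To make the qemu side concrete, here is a rough sketch at the QMP level 
(device and file names are invented, and this is only one possible 
spelling, not a settled design):

  # atomically create the external snapshot file S2 and a new bitmap B2,
  # so the bitmap tracks writes from exactly the snapshot's point in time
  virsh qemu-monitor-command $dom '{"execute":"transaction","arguments":
    {"actions":[
      {"type":"blockdev-snapshot-sync","data":{"device":"drive0",
        "snapshot-file":"/images/S2.qcow2","format":"qcow2"}},
      {"type":"block-dirty-bitmap-add","data":{"node":"drive0","name":"B2"}}]}}'

  # without a bitmap, "changes since the snapshot" is just the clusters
  # allocated in S2, which qemu can already write out with sync=top:
  virsh qemu-monitor-command $dom '{"execute":"drive-backup","arguments":
    {"device":"drive0","target":"/backups/since-snap.qcow2",
     "format":"qcow2","sync":"top"}}'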

Another point that John and I discussed on IRC is that migrating bitmaps 
still has some design work to figure out.  Remember, right now, there 
are basically three modes of operation for storage between the source 
and destination of a migration:
1. Storage is shared.  As long as qemu flushes the bitmap before 
inactivating the image on the source, the destination can load the 
bitmap when it activates the image, and everything is fine.  The 
migration stream does not have to include the bitmaps.
2. Storage is not shared, but is migrated via flags to the migrate 
command (we're trying to move away from this version); there, qemu knows 
that it has to migrate the bitmaps as part of the migration stream.
3. Storage is not shared, and the storage is migrated via NBD (libvirt 
favors this version for non-shared storage; a sketch follows this 
list).  Libvirt starts 'qemu -S' on the destination, pre-creates a 
destination file large enough to match the source, starts an NBD server 
on the destination, and then starts a mirror job on the source pointing 
at that export.  Once the drive is mirrored, libvirt kicks off the 
migration using the same command as in style 1; when all state is 
transferred, the source stops the mirror job and disconnects the NBD 
client, the destination stops the NBD server, and the destination can 
finally start executing.  But note that in this mode, no bitmaps are 
migrated.  So we need some way for libvirt to also migrate bitmap state 
to the destination (perhaps by having the NBD server open multiple 
exports: one for the block device itself, plus another export for each 
bitmap that needs to be copied).
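
For reference, the existing style-3 flow looks roughly like this (host 
name, port, and size are invented); note that nothing in it touches 
bitmaps, which is exactly the gap:

  # destination host: pre-create the file, start 'qemu -S' with it as
  # drive0, then export it for writing over NBD
  qemu-img create -f qcow2 /images/dst.qcow2 20G
  virsh qemu-monitor-command $dst '{"execute":"nbd-server-start","arguments":
    {"addr":{"type":"inet","data":{"host":"0.0.0.0","port":"10809"}}}}'
  virsh qemu-monitor-command $dst '{"execute":"nbd-server-add","arguments":
    {"device":"drive0","writable":true}}'

  # source host: mirror the disk into that export, then migrate as usual
  virsh qemu-monitor-command $src '{"execute":"drive-mirror","arguments":
    {"device":"drive0","target":"nbd://dst-host:10809/drive0",
     "format":"raw","sync":"full","mode":"existing"}}'

Exposing each bitmap as an additional export would require new (or at 
best experimental) qemu support on top of this flow.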

At this point, I think the pressure is on me to provide a working demo 
of incremental backups, without any external snapshots or migration 
involved, before we expand into figuring out the interactions between 
features.
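
The core of that demo at the QMP level would be little more than the 
following (again with invented names):

  # create a checkpoint: start a bitmap that records all writes from now on
  virsh qemu-monitor-command $dom '{"execute":"block-dirty-bitmap-add",
    "arguments":{"node":"drive0","name":"B1"}}'

  # later, copy out only the clusters that B1 marked dirty; on success,
  # qemu clears B1 so it tracks the next incremental period
  virsh qemu-monitor-command $dom '{"execute":"drive-backup","arguments":
    {"device":"drive0","target":"/backups/inc1.qcow2",
     "format":"qcow2","sync":"incremental","bitmap":"B1"}}'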

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
