[libvirt] RFC [3/3]: Lock manager usage scenarios

Eric Blake eblake at redhat.com
Fri Sep 10 20:39:41 UTC 2010


On 09/10/2010 10:01 AM, Daniel P. Berrange wrote:
>
> At libvirtd startup:
>
>    driver = virLockManagerPluginLoad("sync-manager");
>
>
> At libvirtd shutdown:
>
>    virLockManagerPluginUnload(driver)

Can you load more than one lock manager at a time, or just one active 
lock manager?  How does a user configure which lock manager(s) to load 
when libvirtd is started?
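
For instance, I could imagine a qemu.conf entry along these lines (the
name and syntax here are purely my guess, not something this RFC
defines):

   # hypothetical /etc/libvirt/qemu.conf setting
   lock_manager = "sync-manager"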

>
>
> At guest startup:
>
>    manager = virLockManagerNew(driver,
>                                VIR_LOCK_MANAGER_START_DOMAIN,
>                                0);
>    virLockManagerSetParameter(manager, "id", id);
>    virLockManagerSetParameter(manager, "uuid", uuid);
>    virLockManagerSetParameter(manager, "name", name);
>
>    foreach disk
>      virLockManagerRegisterResource(manager,
>                                     VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
>                                     disk.path,
>                                     ..flags...);
>
>    char **supervisorargv;
>    int supervisorargc;
>
>    supervisor = virLockManagerGetSupervisorPath(manager);
>    virLockManagerGetSupervisorArgs(&supervisorargv, &supervisorargc);
>
>    cmd = qemuBuildCommandLine(supervisor, supervisorargv, supervisorargc);
>
>    supervisorpid = virCommandExec(cmd);
>
>    if (!virLockManagerGetChild(manager, &qemupid))
>      kill(supervisorpid, SIGKILL); /* XXX or leave it running ??? */

Would it be better to first try virLockManagerShutdown?  And rather than 
a direct kill(), shouldn't this be virLockManagerFree?
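
In other words, something like this sketch, using only the calls
proposed in this RFC:

   if (!virLockManagerGetChild(manager, &qemupid)) {
       /* ask the plugin to stop the supervisor cleanly first... */
       if (!virLockManagerShutdown(manager))
           kill(supervisorpid, SIGKILL); /* ...and only then force it */
       virLockManagerFree(manager);
       ...abort the guest startup...
   }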

>
>
> At guest shutdown:
>
>    ...send QEMU 'quit' monitor command, and/or kill(qemupid)...
>
>    if (!virLockManagerShutdown(manager))
>       kill(supervisorpid, SIGKILL); /* XXX or leave it running ??? */
>
>    virLockManagerFree(manager);
>
>
>
> At libvirtd restart with running guests:
>
>    foreach still running guest
>      manager = virLockManagerNew(driver,
>                                  VIR_LOCK_MANAGER_START_DOMAIN,
>                                  VIR_LOCK_MANAGER_NEW_ATTACH);
>      virLockManagerSetParameter(manager, "id", id);
>      virLockManagerSetParameter(manager, "uuid", uuid);
>      virLockManagerSetParameter(manager, "name", name);
>
>      if (!virLockManagerGetChild(manager, &qemupid))
>        kill(supervisorpid, SIGKILL); /* XXX or leave it running ??? */
>
>
>
> With disk hotplug:
>
>    if (virLockManagerAcquireResource(manager,
>                                      VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
>                                      disk.path,
>                                      ..flags..))
>       ...abort hotplug attempt ...
>
>    ...hotplug the device...
>
>
>
> With disk unhotplug:
>
>      ...hotunplug the device...
>
>    if (virLockManagerReleaseResource(manager,
>                                      VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
>                                      disk.path,
>                                      ..flags..))
>       ...log warning ...
>
>
>
> During migration:
>
>    1. On source host
>
>         if (!virLockManagerPrepareMigrate(manager, hosturi))
>             ..don't start migration..
>
>    2. On dest host
>
>        manager = virLockManagerNew(driver,
>                                    VIR_LOCK_MANAGER_START_DOMAIN,
>                                    VIR_LOCK_MANAGER_NEW_MIGRATE);
>        virLockManagerSetParameter(manager, "id", id);
>        virLockManagerSetParameter(manager, "uuid", uuid);
>        virLockManagerSetParameter(manager, "name", name);
>
>        foreach disk
>          virLockManagerRegisterResource(manager,
>                                         VIR_LOCK_MANAGER_RESOURCE_TYPE_DISK,
>                                         disk.path,
>                                         ..flags...);

So if there needs to be any relaxation of locks from exclusive to 
shared-write for the duration of the migration, that would be the 
responsibility of virLockManagerPrepareMigrate, and not done directly by 
libvirt?
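
If so, a plugin that supports migration would presumably do the
downgrade inside its PrepareMigrate implementation; a rough sketch
(the plugin internals, the virLockManagerPtr type and the
downgradeLease() helper are all my invention):

   /* hypothetical plugin implementation of PrepareMigrate */
   static int prepareMigrate(virLockManagerPtr manager, const char *hosturi)
   {
       /* relax each exclusive lease to shared-write so the
          destination host can register the same disks during
          the handover */
       foreach lease held by manager
           if (downgradeLease(lease) < 0)
               return -1; /* caller must not start the migration */
       return 0;
   }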

>
>        char **supervisorargv;
>        int supervisorargc;
>
>        supervisor = virLockManagerGetSupervisorPath(manager);
>        virLockManagerGetSupervisorArgs(&supervisorargv, &supervisorargc);
>
>        cmd = qemuBuildCommandLine(supervisor, supervisorargv, supervisorargc);
>
>        supervisorpid = virCommandExec(cmd);
>
>        if (!virLockManagerGetChild(manager, &qemupid))
>          kill(supervisorpid, SIGKILL); /* XXX or leave it running ??? */
>
>    3. Initiate migration in QEMU on source and wait for completion
>
>    4a. On failure
>
>        4a1 On target
>
>              virLockManagerCompleteMigrateIn(manager,
>                                              VIR_LOCK_MANAGER_MIGRATE_CANCEL);
>              virLockManagerShutdown(manager);
>              virLockManagerFree(manager);
>
>        4a2 On source
>
>              virLockManagerCompleteMigrateIn(manager,
>                                              VIR_LOCK_MANAGER_MIGRATE_CANCEL);

Wouldn't this be virLockManagerCompleteMigrateOut?
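
That is:

      virLockManagerCompleteMigrateOut(manager,
                                      VIR_LOCK_MANAGER_MIGRATE_CANCEL);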

>
>    4b. On success
>
>
>        4b1 On target
>
>              virLockManagerCompleteMigrateIn(manager, 0);
>
>        4b2 On source
>
>              virLockManagerCompleteMigrateIn(manager, 0);

Likewise?

>              virLockManagerShutdown(manager);
>              virLockManagerFree(manager);
>
>
> Notes:
>
>    - If a lock manager impl does just VM-level leases, it can
>      ignore all the resource paths at startup.
>
>    - If a lock manager impl does not support migrate
>      it can return an error from all migrate calls
>
>    - If a lock manager impl does not support hotplug
>      it can return an error from all resource acquire/release calls
>

Overall, this looks workable to me.  As proposed, this assumes a 1:1 
relation between LockManager process and managed VM.  But I guess you 
can still have a central manager process that manages all the VMs, by 
having the lock manager plugin spawn a simple shim process per VM that 
does all the communication with the central lock manager.
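
Roughly like this (everything here, from the socket path to the
protocol, is invented just to illustrate the idea):

   /* hypothetical per-VM shim acting as the supervisor process: it
      asks a central lock daemon for the leases, then exec()s QEMU,
      so the daemon can revoke leases by killing this pid */
   fd = connect_unix("/var/run/lock-manager.sock");
   send_lease_request(fd, uuid, disks);
   if (wait_for_grant(fd) < 0)
       exit(1);                  /* leases denied - VM must not start */
   execv(qemu_binary, qemu_argv); /* shim pid == supervisorpid */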

-- 
Eric Blake   eblake at redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org