[Ovirt-devel] [PATCH server] Implement LVM taskomatic
clalance at redhat.com
Fri Nov 7 12:36:32 UTC 2008
Scott Seago wrote:
>> + volinfo = volptr.info
>> + storage_volume = StorageVolume.factory(db_pool.get_type_label)
>> + storage_volume.path = volptr.path
>> + storage_volume.size = volinfo.capacity / 1024
>> + storage_volume.storage_pool_id = db_pool.id
>> + storage_volume.write_attribute(storage_volume.volume_name, volname)
>> + storage_volume.lv_owner_perms = owner
>> + storage_volume.lv_group_perms = group
>> + storage_volume.lv_mode_perms = mode
>> + storage_volume.save
> You should probably use save! here as we're doing elsewhere so that it
> raises an exception upon failure.
Done. I actually added a separate commit that changes all of the uses of .save
in taskomatic to .save!
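For readers less familiar with the distinction: ActiveRecord's `save` returns false on failure, which is easy to ignore, while `save!` raises so the taskomatic error handling actually sees the failure. A minimal sketch of that contract in plain Ruby (no Rails dependency; the `Record` class and its validation are hypothetical stand-ins):

```ruby
# Hypothetical stand-in for an ActiveRecord model, illustrating the
# save vs. save! contract without depending on Rails.
class RecordInvalid < StandardError; end

class Record
  attr_accessor :name

  def valid?
    !name.nil? && !name.empty?
  end

  # save: returns false on validation failure -- the caller can
  # silently ignore the result and carry on with a half-saved state.
  def save
    valid?
  end

  # save!: raises on failure, so a failed write cannot go unnoticed.
  def save!
    raise RecordInvalid, "validation failed" unless valid?
    true
  end
end

r = Record.new
r.save            # returns false; nothing forces the caller to check
begin
  r.save!         # raises RecordInvalid; the failure must be handled
rescue RecordInvalid
end
```

In a long-running daemon like taskomatic, the raising variant is the safer default, since an unchecked false return would leave the database silently out of sync with libvirt.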
>> + begin
>> + lvm_libvirt_pool = LibvirtPool.factory(lvm_db_pool)
>> + lvm_libvirt_pool.connect(conn)
>> + begin
>> + libvirt_volume = lvm_libvirt_pool.lookup_vol_by_name(lvm_db_volume.lv_name)
>> + # FIXME: we actually probably want to zero out the whole volume here, so
>> + # we aren't potentially leaking data from one user to another. There
>> + # are two problems, though:
>> + # 1) I'm not sure how I would go about zero'ing the data on a remote
>> + # machine, since there is no "libvirt_write_data" call
>> + # 2) This could potentially take quite a while, so we want to spawn
>> + # off another thread to do it
>> + libvirt_volume.delete
>> + LvmStorageVolume.delete(lvm_db_volume.id)
> We need to get this working with lvm_db_volume.destroy, since the error
> you were getting with that might mean that it's failing to do some
> necessary cleanup that delete is skipping. But we can probably do that
> after pushing this patch.
Yes, agreed. For now, I've added a comment there explaining the problem I ran
into with destroy, and noting that it should be fixed properly later.
With the above two fixes, I've now committed and pushed this to next. Thanks
for the review!