[libvirt] [PATCH rfc v2 0/8] fspool: backend directory

John Ferlan jferlan at redhat.com
Fri Sep 23 20:51:53 UTC 2016



On 09/23/2016 11:45 AM, Daniel P. Berrange wrote:
> On Fri, Sep 23, 2016 at 11:38:10AM -0400, John Ferlan wrote:
>>
>>
>> On 09/21/2016 12:17 PM, Maxim Nestratov wrote:
>>>
>>>> On 20 Sep 2016, at 23:52, John Ferlan <jferlan at redhat.com> wrote:
>>>>
>>>>
>>>>
>>>>> On 09/15/2016 03:32 AM, Olga Krishtal wrote:
>>>>> Hi everyone, we would like to propose the first implementation of an
>>>>> fspool with a directory backend.
>>>>>
>>>>> Filesystem pools are a facility to manage filesystem resources, similar
>>>>> to how storage pools manage volume resources. The new API follows the
>>>>> storage API closely where it makes sense. Upload/download operations
>>>>> are not defined yet, as it is not obvious how to implement them
>>>>> properly. I guess we can use some kind of tar to make a stream from a
>>>>> filesystem. Please share your thoughts on this particular issue.
>>>>
>>>>
>>>> So how do you differentiate this from the existing <pool type="fs">?
>>>
>>> Pool type=fs still provides volumes, i.e. block devices rather than filesystems, though this storage pool can mount file systems residing on a source block device.
>>>
>>>>
>>>> http://libvirt.org/storage.html#StorageBackendFS
>>>>
>>>> Sure, the existing fs pool requires/uses a source block device as the
>>>> source path, and this new variant doesn't require that source but seems
>>>> to use some item in order to dictate how to "define" the source on the
>>>> fly. Currently only a "DIR" is created - so how does that differ from a
>>>> "dir" pool?
>>>>
>>>
>>> Same here: the storage "dir" pool provides files, which are in fact block devices for guests, while the filesystem pool "dir" backend provides guests with file systems.
>>>
>>>
>>
>> So then what is the purpose of providing a whole new storage driver
>> subsystem?  If you consider that the existing storage driver is meant to
>> handle storage pools of various types and the "pool" commands/APIs are
>> the "means" to manage those storage backend types, I'm still failing to
>> see the advantage of (essentially) copying the storage driver when it
>> seems you're really trying to write new backends that provide some
>> specific functionality.
> 
> This is a completely different beast to the existing storage pools
> driver code.
> 
> Ultimately the difference here is about what's being managed. The
> existing storage pools code is managing storage volumes, which are
> essentially blobs which are used to provide virtual disks.
> 
> This FS pools code is about managing filesystem trees, which are
> essentially directories used to provide the filesystem passthrough
> feature for guests, most compellingly for containers.
> 
> Trying to shoe-horn both concepts into the same API is really
> not very attractive, as you have fundamentally different logic
> requirements for managing the entries in the respective pools.
> 

OK, fair enough; that might be worth adding to the description
(cover letter). Whether it's documented further in the patches I didn't
check. The whole series is 8 rather large patches.
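
(For my own reference while reviewing: I'm assuming the filesystem trees
such a pool hands out are ultimately meant to feed the existing
<filesystem> passthrough device of a domain - an untested sketch of what
a consumer might do today, with made-up paths, would be roughly:

    /* assumes #include <libvirt/libvirt.h> and an existing virDomainPtr dom;
     * attach a host directory to a running guest/container as a passthrough
     * filesystem - the fspool would presumably manage the source directory */
    const char *fsxml =
        "<filesystem type='mount' accessmode='passthrough'>"
        "  <source dir='/path/to/fspool/item'/>"
        "  <target dir='/mnt/from-host'/>"
        "</filesystem>";

    if (virDomainAttachDeviceFlags(dom, fsxml, VIR_DOMAIN_AFFECT_LIVE) < 0)
        goto cleanup;

If that is the intended consumption model, spelling it out in the cover
letter would help too.)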

I think the other point/concern is that it appears as if the fspool
driver was started by copying storage_driver and then hacking out what
wasn't needed, rather than deciding what needs to be there to build up
such an fs passthrough driver. At least that's my impression from
scanning the first couple of patches.

For example, patch 1's first file, libvirt-fs.h, is essentially
libvirt-storage.h copied with some things removed. The first enum has
all the build and build-overwrite flags, yet on a quick scan none of
those seem to be used in the driver code.
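
For comparison, the existing libvirt-storage.h has roughly the following
(the fspool header appears to mirror these names with an FS spelling; I
haven't diffed the exact identifiers in the patch):

    typedef enum {
        VIR_STORAGE_POOL_CREATE_NORMAL = 0,

        /* Create the pool and also perform the pool build */
        VIR_STORAGE_POOL_CREATE_WITH_BUILD = 1 << 0,

        /* Create the pool and perform the build using the overwrite or
         * no-overwrite variants below */
        VIR_STORAGE_POOL_CREATE_WITH_BUILD_OVERWRITE = 1 << 1,
        VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE = 1 << 2,
    } virStoragePoolCreateFlags;

    typedef enum {
        VIR_STORAGE_POOL_BUILD_NEW = 0,                /* Regular build from scratch */
        VIR_STORAGE_POOL_BUILD_REPAIR = 1 << 0,        /* Repair/reinitialize */
        VIR_STORAGE_POOL_BUILD_RESIZE = 1 << 1,        /* Extend existing pool */
        VIR_STORAGE_POOL_BUILD_NO_OVERWRITE = 1 << 2,  /* Error out if data exists */
        VIR_STORAGE_POOL_BUILD_OVERWRITE = 1 << 3,     /* Overwrite existing data */
    } virStoragePoolBuildFlags;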

The create-with-build flag was added to allow someone to combine the
pool-create and pool-build type functions, because for some backends it
doesn't make sense to "build" what is already there. That to me seems
like it could be a "design decision" for this new driver. Does an FS
pool need to handle build with [no] overwrite? What would overwrite be?
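
To make the semantics concrete, on the storage side the combined call is
just (untested sketch; assumes a connection "conn" and pool XML in
"poolxml"):

    /* Create/start the pool and run the "build" step in one call; for a
     * type="fs" pool the build is where mkfs happens, and the OVERWRITE /
     * NO_OVERWRITE variants decide whether an existing filesystem gets
     * clobbered or turns into an error. */
    virStoragePoolPtr pool =
        virStoragePoolCreateXML(conn, poolxml,
                                VIR_STORAGE_POOL_CREATE_WITH_BUILD_NO_OVERWRITE);

For a directory-backed fspool the "build" is presumably little more than
a mkdir, so it's not obvious what the overwrite variants would buy you.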

Rather than just copying what the storage pool driver does, I would
think the new driver could "build up" what it needs based on some
consensus about what makes sense for the usage model.


>> Having a guest mount a host file system would seem to be possible
>> through other means. I also start wondering about the security
>> implications for either side (I haven't put too much thought into it).
>> What can the guest put "on" the host file system and vice versa, where
>> different security policies may exist for allowing such placement?
>>
>> Perhaps rather than a large dump of code, the RFC should state the
>> goal, purpose, usage, etc., and see if that's what the community wants
>> or is willing to provide feedback on.
> 
> This was previously done in the mailing list many months ago now.
> 

Well, a pointer would have been nice... Obviously I didn't remember it!
There was an fspools v1 posted 8/19. I think there was an assumption
that list readers/reviewers would remember some original RFC. I didn't.
I've just been going through older patches that haven't had review, and
this one just came up as "next" (actually I had started thinking about
v1 when v2 showed up).

John



