[libvirt] [PATCH v2] storage: vz storage pool support

John Ferlan jferlan at redhat.com
Thu Dec 8 12:17:18 UTC 2016



On 12/08/2016 04:19 AM, Maxim Nestratov wrote:
> 08-Dec-16 02:22, John Ferlan wrote:
> 
>> [...]
>>
>>>> I see what you mean; however, IMO vstorage should be separate. Maybe
>>>> there's another opinion out there, but since you're requiring
>>>> "something" else to be installed in order to get the WITH_VSTORAGE
>>>> to be
>>>> set to 1, then a separate file is in order.
>>>>
>>>> Not sure they're comparable, but zfs has its own. Keeping vstorage
>>>> separate reduces the chance that someday some incompatible logic is
>>>> added/altered in the *fs.c (or vice versa).
>>> Ok. I will try.
>>>
>>>> I think you should consider the *_fs.c code to be the "default" of
>>>> sorts - that is, the default file/dir structure with netfs added in.
>>>> vstorage may just be some file system, but it's not something (yet)
>>>> on "every" distribution.
>>> Actually, I did not understand what you mean by "the "default" of
>>> sorts". As I understand it, what I need to do is create
>>> backend_vstorage.c with all the create/delete/* functionality.
>>>
>> Sorry - I was trying to think of a better way to explain... The 'fs'
>> and 'nfs' pools are the "default" of sorts because one can "ls" (on
>> UNIX/Linux) or "dir" (on Windows) and get a list of files.
>>
>> "ls" and "dir" are inherent to the OS, while in this case vstorage
>> commands are installed separately.
> 
> Once you mount your vstorage cluster on a local filesystem path, you
> can also "ls" it. Thus, I can't see much difference from nfs here.
> 

So if it's more like NFS, then how does one ensure that the local userid
X is the same as the remote userid X? NFS has a root-squashing concept
that results in numerous, shall we say, "interesting" issues.

Check out the virFileOpen* helpers, virDirCreate, and virFileRemove...
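Roughly, the pattern those helpers implement looks like this (a
from-memory sketch, not the exact code; createVolFile is a made-up
name):

#include <fcntl.h>
#include "virfile.h"

/* Sketch: create a file as a specific uid/gid. On root-squashed NFS
 * an open as root fails, so VIR_FILE_OPEN_FORK lets the helper retry
 * in a forked child running as the target uid/gid. Returns an fd on
 * success, a negative value on failure. */
static int
createVolFile(const char *path, mode_t mode, uid_t uid, gid_t gid)
{
    return virFileOpenAs(path, O_CREAT | O_WRONLY, mode, uid, gid,
                         VIR_FILE_OPEN_NOFORK | VIR_FILE_OPEN_FORK);
}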

Also, what about virFileIsSharedFSType? And the security_selinux.c code
for NFS? If you use cscope, just search on NFS.
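For NFS the SELinux driver has to special-case labeling failures; from
memory it's something along these lines (relabelFailureIsIgnorable is a
made-up wrapper, not the actual code):

#include <stdbool.h>
#include "virfile.h"

/* Sketch: a failed relabel is tolerable when the file sits on NFS,
 * which cannot store SELinux labels. A vstorage mount would need its
 * own VIR_FILE_SHFS_* flag to get similar treatment. */
static bool
relabelFailureIsIgnorable(const char *path)
{
    return virFileIsSharedFSType(path, VIR_FILE_SHFS_NFS) == 1;
}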

In virStorageBackendVzStart, I see:

   VSTORAGE_MOUNT -c $pool.source.name $pool.target.path

where VSTORAGE_MOUNT is a build-time (configure.ac) definition giving
the "Location or name of vstorage-mount program", which would only be
set if the proper package was installed.
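Presumably that means the start callback is little more than the
following (a hedged sketch; vzPoolMount is a made-up name and the def
field layout is assumed):

#include "vircommand.h"

/* Sketch: run "$VSTORAGE_MOUNT -c <cluster name> <target path>". */
static int
vzPoolMount(virStoragePoolDefPtr def)
{
    virCommandPtr cmd = virCommandNewArgList(VSTORAGE_MOUNT,
                                             "-c", def->source.name,
                                             def->target.path,
                                             NULL);
    int ret = virCommandRun(cmd, NULL);

    virCommandFree(cmd);
    return ret;
}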

In virStorageBackendVzfindPoolSources, I see:

   VSTORAGE discover

which I assume generates a list of remote "services" (for lack of a
better term) that can be used as the pool.source.name so that the pool
can be mounted by the VSTORAGE_MOUNT program.
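That is, presumably something like the following (again a guess at the
shape; vzDiscoverClusters is made up, and the output format is an
assumption):

#include "viralloc.h"
#include "vircommand.h"

/* Sketch: capture "vstorage discover" output for findPoolSources to
 * parse into candidate <source><name> values. */
static char *
vzDiscoverClusters(void)
{
    virCommandPtr cmd = virCommandNewArgList(VSTORAGE, "discover", NULL);
    char *output = NULL;

    virCommandSetOutputBuffer(cmd, &output);
    if (virCommandRun(cmd, NULL) < 0)
        VIR_FREE(output);

    virCommandFree(cmd);
    return output;  /* NULL on failure */
}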

Compare that to NFS, which uses mount - included in, well, every distro
I can think of. That's a big difference. Also, let's face it, NFS has
been the de facto go-to tool for accessing remote storage for a long
time. Personally, I'd rather see the NFS code split out of the *_fs.c
backend, but I don't have the desire/time to do it - so it stays as is.

>> When you create a vstorage "file", is that done via touch, editing
>> some path, or some other "common" OS command? Or is there a vstorage
>> command that needs to be used? If the former, then using the common
>> storage_backend APIs should be possible.
> 
> vstorage is just another "remote filesystem" type of distributed,
> software-defined storage. In terms of starting to use it, it doesn't
> differ from nfs - you mount it and then you can use any POSIX calls to
> manage files and directories residing on it.

Here's a sample nfs pool XML I have - I changed the source/target paths
and didn't define a host.

<pool type='netfs'>
  <name>nfs</name>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='localhost'/>
    <dir path='/path/to/source'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/path/to/target</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

That is vastly different from what was in the cover letter:

<pool type='vstorage'>
  <name>VZ</name>
  <uuid>5f45665b-66fa-4b18-84d1-248774cff3a1</uuid>
  <capacity unit='bytes'>107374182400</capacity>
  <allocation unit='bytes'>1441144832</allocation>
  <available unit='bytes'>105933037568</available>
  <source>
    <name>vz7-vzstorage</name>
  </source>
  <target>
    <path>/vzstorage_pool</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>

What causes "vz7-vzstorage" to be defined? Something from the 'VSTORAGE'
command. I would think that is essentially similar to how glusterfs,
rbd, or sheepdog use a source <name> field. Note that each of those
includes a <host name='$host' [port='#']/> definition, although this
vstorage XML doesn't.

Thus it seems vzstorage is really not a "local" filesystem, correct? If
so, then should one really be allowing "local" things like chown, chmod,
etc. to be executed? What kind of "configuration and/or validation of
trust" takes place via the vstorage-provided tools in order to allow a
user on the local host to access the storage on the remote host?
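For reference, the directory pool's build path funnels through
virDirCreate, roughly like this (sketch; buildPoolDir is a made-up
wrapper):

#include "virfile.h"

/* Sketch: the fs backend creates the pool directory with the requested
 * mode/owner, forking to the target uid to cope with root squash. The
 * question is whether that ownership model means anything on a
 * vstorage mount. */
static int
buildPoolDir(const char *path, mode_t mode, uid_t uid, gid_t gid)
{
    return virDirCreate(path, mode, uid, gid,
                        VIR_DIR_CREATE_AS_UID | VIR_DIR_CREATE_ALLOW_EXIST);
}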

John
> 
> Maxim
>>
>> John
>>
>>
>>>> Also, I forgot to mention yesterday - you need to update
>>>> docs/formatstorage.html.in at the very least, and also the storage
>>>> driver page, docs/storage.html.in. In addition, there are
>>>> storagepool tests (xml2xml) that would need to be updated to
>>>> validate the new storage pool type. The tests would "show" how the
>>>> pool XML would appear and validate whatever parsing has been done.
>>> I know. Will fix.
>>>
>> [...]
> 



