[libvirt-users] [libvirt] Libvirtd running as root tries to access oneadmin (OpenNebula) NFS mount but throws: error: can’t canonicalize path

TomK tk at mdevsys.com
Tue Apr 12 16:07:50 UTC 2016


Hey John,

Hehe, I got the right guy then.  Very nice!  And very good ideas, but I 
may need more time to reread and try them out later tonight.  I'm fully 
in agreement about providing more details; can't be accurate in a 
diagnosis if there isn't much data to go on.  This pool option is new to 
me.  Please tell me more about it.  I can't find it in the file below, but 
maybe it's elsewhere?

( <pool type="fs"> ) perhaps rather than the "NFS" pool ( e.g. <pool type="netfs"> )
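For anyone following along, an NFS ("netfs") pool definition looks roughly like this.  This is only a sketch: the pool name is made up; the host and path are the ones from this thread, so adjust for your setup:

```xml
<!-- netfs pool: libvirt knows the backing store is NFS and handles the
     mount itself, instead of treating the directory as plain local files -->
<pool type='netfs'>
  <name>one-datastores</name>  <!-- hypothetical name -->
  <source>
    <host name='192.168.0.70'/>
    <dir path='/var/lib/one'/>
    <format type='nfs'/>
  </source>
  <target>
    <path>/var/lib/one</path>
  </target>
</pool>
```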


Alright, here are the details:

[root@mdskvm-p01 ~]# rpm -aq|grep -i libvir
libvirt-daemon-driver-secret-1.2.17-13.el7_2.4.x86_64
libvirt-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-lxc-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-network-1.2.17-13.el7_2.4.x86_64
libvirt-client-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7_2.4.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-glib-0.1.9-1.el7.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.4.x86_64
libvirt-daemon-kvm-1.2.17-13.el7_2.4.x86_64
[root@mdskvm-p01 ~]# cat /etc/release
cat: /etc/release: No such file or directory
[root@mdskvm-p01 ~]# cat /etc/*release*
NAME="Scientific Linux"
VERSION="7.2 (Nitrogen)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Scientific Linux 7.2 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.2:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel at listserv.fnal.gov"

REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.2
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.2"
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
Scientific Linux release 7.2 (Nitrogen)
cpe:/o:scientificlinux:scientificlinux:7.2:ga
[root@mdskvm-p01 ~]#

[root@mdskvm-p01 ~]# mount /var/lib/one
[root@mdskvm-p01 ~]# su - oneadmin
Last login: Sat Apr  9 10:39:25 EDT 2016 on pts/0
Last failed login: Tue Apr 12 12:00:57 EDT 2016 from opennebula01 on 
ssh:notty
There were 9584 failed login attempts since the last successful login.
[oneadmin@mdskvm-p01 ~]$ id oneadmin
uid=9869(oneadmin) gid=9869(oneadmin) groups=9869(oneadmin),992(libvirt),36(kvm)
[oneadmin@mdskvm-p01 ~]$ pwd
/var/lib/one
[oneadmin@mdskvm-p01 ~]$ ls -altriR|grep -i root
134320262 drwxr-xr-x. 45 root     root        4096 Apr 12 07:58 ..
[oneadmin@mdskvm-p01 ~]$
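Incidentally, the "Permission denied" from canonicalization comes down to directory search permission: resolving a path needs +x on every component, which a root-squashed daemon loses on the NFS mount.  A minimal local sketch of that behaviour (no NFS involved; run as a non-root user to see the second call fail):

```shell
# Canonicalizing a path requires search (+x) permission on every
# directory component.  Reproduce locally with a temp directory:
d=$(mktemp -d)
mkdir -p "$d/a"
touch "$d/a/f"
readlink -f "$d/a/f"            # prints the canonical path
chmod 000 "$d/a"
# As a non-root user this fails with "Permission denied" -- the same
# errno libvirt reports.  Real (un-squashed) root still succeeds.
readlink -f "$d/a/f" || echo "canonicalize failed"
chmod 755 "$d/a"
rm -rf "$d"
```

Under root_squash the server maps the daemon's root uid to the anonymous uid, so every directory in the path is effectively mode-checked against "nobody" -- which is what the second call above simulates.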



[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
         <name>one-38</name>
         <vcpu>1</vcpu>
         <cputune>
                 <shares>1024</shares>
         </cputune>
         <memory>524288</memory>
         <os>
                 <type arch='x86_64'>hvm</type>
                 <boot dev='hd'/>
         </os>
         <devices>
                 <emulator>/usr/libexec/qemu-kvm</emulator>
                 <disk type='file' device='disk'>
                         <source file='/var/lib/one//datastores/0/38/disk.0'/>
                         <target dev='hda'/>
                         <driver name='qemu' type='qcow2' cache='none'/>
                 </disk>
                 <disk type='file' device='cdrom'>
                         <source file='/var/lib/one//datastores/0/38/disk.1'/>
                         <target dev='hdb'/>
                         <readonly/>
                         <driver name='qemu' type='raw'/>
                 </disk>
                 <interface type='bridge'>
                         <source bridge='br0'/>
                         <mac address='02:00:c0:a8:00:64'/>
                 </interface>
                 <graphics type='vnc' listen='0.0.0.0' port='5938'/>
         </devices>
         <features>
                 <acpi/>
         </features>
</domain>

[oneadmin@mdskvm-p01 ~]$ cat /var/lib/one//datastores/0/38/deployment.0|grep -i nfs
[oneadmin@mdskvm-p01 ~]$
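So the deployment file references the disks by direct path, with no storage pool, which makes John's question about the ownership of every directory in the path the thing to check.  A quick sketch to dump owner, group, and mode of each component (the helper name is mine; the path is the one from the error):

```shell
#!/bin/sh
# Print mode and owner:group of every directory component of a path,
# to spot where a squashed (non-root) uid would lose traversal rights.
check_path() {
    dir=""
    printf '%s\n' "$1" | tr '/' '\n' | while IFS= read -r comp; do
        [ -n "$comp" ] || continue
        dir="$dir/$comp"
        stat -c '%A %U:%G %n' "$dir" 2>/dev/null || echo "cannot stat $dir"
    done
}
check_path /var/lib/one/datastores/0/38/disk.1
```

Every directory listed needs at least `x` for the uid libvirt ends up using; with root_squash that uid is the anonymous one, not root.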



Cheers,
Tom K.
------------------------------------------------------------------------------------- 

Living on earth is expensive, but it includes a free trip around the sun.

On 4/12/2016 11:45 AM, John Ferlan wrote:
>
> On 04/12/2016 10:58 AM, TomK wrote:
>> Hey Martin,
>>
>> Thanks very much.  Appreciate you jumping in on this thread.
> Can you provide some more details with respect to which libvirt version
> you have installed? I know I've made changes in this space in more
> recent versions (not the most recent). I'm no root_squash expert, but I
> was the last to change things in the space so that makes me partially
> fluent ;-) in NFS/root_squash speak.
>
> Using root_squash is very "finicky" (to say the least)... It wasn't
> really clear from what you posted how you are attempting to reference
> things. Does the "/var/lib/one//datastores/0/38/deployment.0" XML file
> use a direct path to the NFS volume or does it use a pool? If a pool,
> then what type of pool? It is beneficial to provide as many details as
> possible about the configuration because (so to speak) those that are
> helping you won't know your environment (I've never used OpenNebula) nor
> do I have a 'oneadmin' uid:gid.
>
> What got my attention was the error message "initializing FS storage
> file" with the "file:" prefix to the name and 9869:9869 as the uid:gid
> trying to access the file (I assume that's oneadmin:oneadmin on your
> system).
>
> This says to me that you're trying to use a "file system" pool (e.g
> <pool type="fs">) perhaps rather than the "NFS" pool (e.g. <pool
> type="netfs">). Using an NFS pool certainly has the advantage of
> "knowing how" to deal with the NFS environment. Since libvirt may
> consider this to "just" be a FS file, then it won't necessarily know to
> try to access the file properly (OK dependent upon libvirt version too
> perhaps - the details have been paged out of my memory while I do other
> work).
>
> One other thing that popped out at me:
>
> My /etc/exports has:
>
> /home/bzs/rootsquash/nfs *(rw,sync,root_squash)
>
> which only differs from yours by the 'no_subtree_check'
>
> your environment though seems to have much more "depth" than mine. That
> is you have "//datastores/0/38/disk.1" appended on as the (I assume)
> disk to use. The question then becomes - does every directory in the
> path to that file use "oneadmin:oneadmin" and of course does it have to
> with[out] that extra flag.
>
> Again, I'm no expert just trying to provide ideas and help...
>
> John
>
>> You see, that's just it.  I've configured the libvirt .conf files to run as
>> oneadmin:oneadmin (non-privileged) for that NFS share, and I can access
>> all the files on that share as oneadmin without error, including the one
>> you listed.  But libvirtd, by default, always starts as root.  So it's
>> doing something as root, despite being configured to access the share as
>> oneadmin.  As oneadmin I can access that file no problem.  Here's how I
>> read the file off the node on which the NFS share is mounted:
>>
>> [oneadmin@mdskvm-p01 ~]$ ls -altri /var/lib/one//datastores/0/38/disk.1
>> 34642274 -rw-r--r-- 1 oneadmin oneadmin 372736 Apr 5 00:20
>> /var/lib/one//datastores/0/38/disk.1
>> [oneadmin@mdskvm-p01 ~]$ file /var/lib/one//datastores/0/38/disk.1
>> /var/lib/one//datastores/0/38/disk.1: ISO 9660 CD-ROM filesystem data
>> 'CONTEXT'
>> [oneadmin@mdskvm-p01 ~]$ strings /var/lib/one//datastores/0/38/disk.1|head
>> CD001
>> LINUX CONTEXT
>> GENISOIMAGE ISO 9660/HFS FILESYSTEM CREATOR (C) 1993 E.YOUNGDALE (C)
>> 1997-2006 J.PEARSON/J.SCHILLING (C) 2006-2007 CDRKIT TEAM 2016040500205600
>> 2016040500205600
>> 0000000000000000
>> 2016040500205600
>>
>> CD001
>> 2016040500205600
>> 2016040500205600
>> [oneadmin@mdskvm-p01 ~]$
>>
>> My NFS mount looks as follows (I have to use root_squash for security
>> reasons; I'm sure it would work with no_root_squash, but that option is
>> not an option here):
>>
>> [root@mdskvm-p01 ~]# grep nfs /etc/fstab
>> # 192.168.0.70:/var/lib/one/    /var/lib/one/  nfs
>> context=system_u:object_r:nfs_t:s0,soft,intr,rsize=8192,wsize=8192,noauto
>> 192.168.0.70:/var/lib/one/      /var/lib/one/  nfs
>> soft,intr,rsize=8192,wsize=8192,noauto
>> [root@mdskvm-p01 ~]#
>>
>> [root@opennebula01 ~]# cat /etc/exports
>> /var/lib/one/ *(rw,sync,no_subtree_check,root_squash)
>> [root@opennebula01 ~]#
>>
>>
>> So I dug deeper and see that there is a possibility libvirtd is trying
>> to access that NFS mount as root at some level, because as root I also
>> get a permission denied on the NFS share above.  Rightly so, since I have
>> root_squash, which I need to keep.  But libvirtd should be able to access
>> the file as oneadmin, as I did above.  It's not, and this is what I read
>> on it:
>>
>> https://www.redhat.com/archives/libvir-list/2014-May/msg00194.html
>>
>> Comment is: "The current implementation works for local
>> storage only and returns the canonical path of the volume."
>>
>> But it seems the logic is applied to NFS mounts.  Perhaps it shouldn't
>> be?  Any way to get around this problem?  This is CentOS 7.
>>
>> My post with OpenNebula is here from which this conversation originates:
>> https://forum.opennebula.org/t/libvirtd-running-as-root-tries-to-access-oneadmin-nfs-mount-error-cant-canonicalize-path/2054/7
>>
>> Cheers,
>> Tom K.
>> -------------------------------------------------------------------------------------
>>
>> Living on earth is expensive, but it includes a free trip around the sun.
>>
>> On 4/12/2016 10:03 AM, Martin Kletzander wrote:
>>> On Mon, Apr 11, 2016 at 08:02:04PM -0400, TomK wrote:
>>>> Hey All,
>>>>
>>>> Wondering if anyone had any suggestions on this topic?
>>>>
>>> The only thing I can come up with is:
>>> '/var/lib/one//datastores/0/38/disk.1': Permission denied
>>>
>>> ... that you don't have access to that file.  Could you elaborate on that?
>>>
>>> I think it's either:
>>>
>>> a) you are running the domain as root or
>>>
>>> b) we don't use the domain's uid/gid to canonicalize the path.
>>>
>>> But if read access is enough for canonicalizing that path, I think the
>>> problem is purely with permissions.
>>>
>>>> Cheers,
>>>> Tom K.
>>>> -------------------------------------------------------------------------------------
>>>>
>>>>
>>>> Living on earth is expensive, but it includes a free trip around the
>>>> sun.
>>>>
>>>> On 4/9/2016 11:08 AM, TomK wrote:
>>>>> Adding in libvir-list.
>>>>>
>>>>> Cheers,
>>>>> Tom K.
>>>>> -------------------------------------------------------------------------------------
>>>>>
>>>>>
>>>>> Living on earth is expensive, but it includes a free trip around the
>>>>> sun.
>>>>>
>>>>> On 4/7/2016 7:32 PM, TomK wrote:
>>>>>> Hey All,
>>>>>>
>>>>>> I've an issue where libvirtd tries to access an NFS mount but errors
>>>>>> out with: can't canonicalize path '/var/lib/one//datastores/0 .  The
>>>>>> unprivileged user is able to read/write fine to the share.
>>>>>> root_squash is used and for security reasons no_root_squash cannot be
>>>>>> used.
>>>>>>
>>>>>> On the controller and node SELinux is disabled.
>>>>>>
>>>>>> [oneadmin@mdskvm-p01 ~]$ virsh -d 1 --connect qemu:///system create
>>>>>> /var/lib/one//datastores/0/38/deployment.0
>>>>>> create: file(optdata): /var/lib/one//datastores/0/38/deployment.0
>>>>>> error: Failed to create domain from
>>>>>> /var/lib/one//datastores/0/38/deployment.0
>>>>>> error: can't canonicalize path
>>>>>> '/var/lib/one//datastores/0/38/disk.1': Permission denied
>>>>>>
>>>>>> I added some debug flags to get more info and added -x to the deploy
>>>>>> script. Closest I get to more details is this:
>>>>>>
>>>>>> 2016-04-06 04:15:35.945+0000: 14072: debug :
>>>>>> virStorageFileBackendFileInit:1441 : initializing FS storage file
>>>>>> 0x7f6aa4009000 (file:/var/lib/one//datastores/0/38/disk.1)[9869:9869]
>>>>>> 2016-04-06 04:15:35.954+0000: 14072: error :
>>>>>> virStorageFileBackendFileGetUniqueIdentifier:1523 : can't
>>>>>> canonicalize path '/var/lib/one//datastores/0/38/disk.1':
>>>>>>
>>>>>> https://www.redhat.com/archives/libvir-list/2014-May/msg00194.html
>>>>>>
>>>>>> Comment is: "The current implementation works for local
>>>>>> storage only and returns the canonical path of the volume."
>>>>>>
>>>>>> But it seems the logic is applied to NFS mounts.  Perhaps it shouldn't
>>>>>> be?  Any way to get around this problem?  This is CentOS 7.
>>>>>>
>>>>>> Cheers,
>>>>>> Tom K.
>>>>>> -------------------------------------------------------------------------------------
>>>>>>
>>>>>>
>>>>>> Living on earth is expensive, but it includes a free trip around the
>>>>>> sun.
>>>>>>
>>>>>> _______________________________________________
>>>>>> libvirt-users mailing list
>>>>>> libvirt-users at redhat.com
>>>>>> https://www.redhat.com/mailman/listinfo/libvirt-users
>>>>> -- 
>>>>> libvir-list mailing list
>>>>> libvir-list at redhat.com
>>>>> https://www.redhat.com/mailman/listinfo/libvir-list
>>>
>>
>>
>>



