[virt-tools-list] Clone on started pool fails

Cole Robinson crobinso at redhat.com
Tue Nov 24 15:07:29 UTC 2009


On 11/24/2009 08:37 AM, Frédéric Grelot wrote:
> Hi, 
> 
> I just ran into an error while trying to clone a VM whose disk lives in a storage pool other than the default /var/lib... one.
> The error is raised at VirtualDisk.py:731 and says:
> ERROR    Could not determine original disk information: Size must be specified for non existent volume path '/raid5/vms/disk.qcow2'
> 
> where disk.qcow2 is the source VM's disk, and it does exist... /raid5 is an LVM volume mounted on top of a RAID 5 disk array.
> After reading the code in VirtualDisk.py, I suspected a problem with the pool, and after "stopping" it (in virt-manager's storage manager), cloning suddenly works without any problem!
> I tried both the CLI (virt-clone --prompt) and the GUI (virt-manager's clone dialog), and I get exactly the same error...
> When I first tried, the disk was a plain 18 GB raw ".img" file; once I noticed that, I converted it to a qcow2 one (7 GB).
> 
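> For reference, the CLI invocation was essentially the following (the guest and file names below are placeholders, not my real ones):
> 
>   virt-clone --connect qemu:///system --prompt
> 
> which asks for the original guest, the new name and the new disk path; a non-interactive equivalent would be something like:
> 
>   virt-clone --connect qemu:///system --original myvm \
>       --name myvm-clone --file /raid5/vms/disk-clone.qcow2
> 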
> By the way, I just realized that this may be linked to another problem (...2 seconds later: confirmed): whenever I try to add a disk to a VM, or to create a new VM, I always get the error "Name 'any_disk.ext' already in use by another volume.", where any_disk.ext can be a raw .img disk, a qcow2, etc., and is of course not a disk that is already in use! I just tested by "stopping" the pool (and using the "browse local" option of virt-manager), and adding a disk works! (At last, I no longer need to add disks by hand in /etc/libvirt/qemu/*.xml...)
> 
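> (In case it is useful: as far as I understand, listing what libvirt thinks the pool contains would be done with something like the commands below; "raid5" is the pool name from the definition further down:)
> 
>   virsh --connect qemu:///system pool-refresh raid5
>   virsh --connect qemu:///system vol-list raid5
> 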
> I'm not sure whether this is a bug or a misconfiguration. I have already heard that I should stick to the /var/lib/...image/ directory as a pool, but using my /raid5 volume gives me more flexibility and visibility, so I'd prefer to keep using it...
> 
> So I'll take any help to solve this, and if you need more information, I'll gladly provide it!
> 
> Frederic.
> 
> Pool definition :
> <pool type='dir'>
>   <name>raid5</name>
>   <uuid>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx</uuid>
>   <capacity>0</capacity>
>   <allocation>0</allocation>
>   <available>0</available>
>   <source>
>   </source>
>   <target>
>     <path>/raid5/vms/</path>
>     <permissions>
>       <mode>0700</mode>
>       <owner>0</owner>
>       <group>0</group>
>     </permissions>
>   </target>
> </pool>
> 
> 
> Versions :
> # rpm -qa | grep virt
> virt-top-1.0.4-1.fc12.1.x86_64
> virt-manager-0.8.0-7.fc12.noarch
> virt-mem-0.3.1-9.fc12.x86_64
> virt-viewer-0.2.0-1.fc12.x86_64
> libvirt-python-0.7.1-15.fc12.x86_64
> libvirt-client-0.7.1-15.fc12.x86_64
> python-virtinst-0.500.0-5.fc12.noarch
> libvirt-0.7.1-15.fc12.x86_64
> virt-v2v-0.2.0-1.fc12.noarch
> 


Hmm, certainly sounds like something is going wrong here. Can you also
provide:

~/.virt-manager/virt-manager.log
virsh --connect qemu:///system pool-list --all
virsh --connect qemu:///system vol-list <poolname>   (for each running pool)
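
If it's easier, a quick loop like this (a sketch assuming a bash shell, untested) should collect the volume lists for all running pools in one go:

  # list volumes of every active pool on qemu:///system
  for p in $(virsh --connect qemu:///system pool-list | awk 'NR>2 && NF {print $1}'); do
      echo "== $p =="
      virsh --connect qemu:///system vol-list "$p"
  done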

Thanks,
Cole
