[Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster

Исаев Виталий Анатольевич isaev at fintech.ru
Thu Jan 16 07:29:43 UTC 2014


-----Original Message-----
From: Richard W.M. Jones [mailto:rjones at redhat.com]
Sent: Tuesday, January 14, 2014 9:43 PM
To: Исаев Виталий Анатольевич
Cc: libguestfs at redhat.com
Subject: Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster



On Tue, Jan 14, 2014 at 02:57:35PM +0000, Исаев Виталий Анатольевич wrote:

> Dear Rich, thank you for the prompt reply to my question. Similar
> problems have been found with all of the remaining thin provisioned
> disks in the cluster, while all the preallocated disks were handled
> by libguestfs correctly. I guess these issues were caused by reason
> (b) and probably reason (c):

>

> For each of the thin provisioned disks, the backing file does not
> exist. For instance, let's consider the
> /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
> symbolic link, which points to /dev/dm-30:

> [root at rhevh1 mapper]# pwd
> /dev/mapper
> [root at rhevh1 mapper]# qemu-img info 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
> image: 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f
> file format: qcow2
> virtual size: 40G (42949672960 bytes)
> disk size: 0
> cluster_size: 65536
> backing file: ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
> [root at rhevh1 mapper]# ll ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
> ls: cannot access ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5: No such file or directory

>

> Note that /dev/dm-30 is not accessible with libguestfs.

>

> Now I am trying to find files with the same name. As a result I get
> three symbolic links pointing to
> /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5:

> [root at rhevh1 mapper]# find / -name cbe36298-6397-4ffa-ba8c-5f64e90023e5
> /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5
> /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
> /rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5

>

> In turn, the
> /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5
> file is itself a symbolic link, which points to /dev/dm-19.

>

> Finally, I try to launch libguestfs on the block device directly:

> [root at rhevh1 mapper]# qemu-img info /dev/dm-19
> image: /dev/dm-19
> file format: raw
> virtual size: 40G (42949672960 bytes)
> disk size: 0
> [root at rhevh1 mapper]# python
> Python 2.6.6 (r266:84292, Oct 12 2012, 14:23:48)
> [GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import guestfs
> >>> g = guestfs.GuestFS()
> >>> g.add_drive_opts("/dev/dm-19",readonly=1)
> >>> g.launch()
> >>> g.lvs()
> []
> >>> g.pvs()
> []
> >>> g.list_partitions()
> ['/dev/vda1', '/dev/vda2']
> >>> g.inspect_os()
> ['/dev/vda1']



This works because you're accessing the backing disk, not the top disk.  Since the backing disk (in this case) doesn't itself have a backing disk, qemu has no problem opening it.
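To illustrate why the /dev/mapper path fails: qemu resolves a relative
backing file name against the directory containing the image that
references it.  A minimal Python 2 sketch of that resolution (my own
illustration, not qemu's actual code):

  import os

  def resolve_backing(image_path, backing_ref):
      # Absolute references are used as-is; relative ones are resolved
      # against the directory of the image that names them.
      if os.path.isabs(backing_ref):
          return backing_ref
      return os.path.normpath(
          os.path.join(os.path.dirname(image_path), backing_ref))

  img = ("/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-"
         "666faa62--da73--4465--aed2--912119fcf67f")
  ref = ("../6439863f-2d4e-48ae-a150-f9054650789c/"
         "cbe36298-6397-4ffa-ba8c-5f64e90023e5")

  print resolve_backing(img, ref)
  # -> /dev/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
  # That path does not exist on the host (the link that does exist is
  # under /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/), so opening the
  # top image fails.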



> Now I'm a little bit confused by the results of my research. I found
> that a VM with only one disk attached in fact has at least two block
> devices mapped into the hypervisor's file system: /dev/dm-19 (raw)
> and /dev/dm-30 (qcow2). The RHEV-M API (aka the Python oVirt SDK)
> provides no info about the first one, and the second cannot be
> accessed with libguestfs. I urgently need to work with chosen VM
> disk images through the libguestfs layer, but I don't know exactly
> which images belong to which VM. It seems like I'm going the hard
> way :) Sincerely,



Basically you need to find out which directory RHEV-M itself starts qemu in.  Try going onto the node and doing:



  ps ax | grep qemu

  ls -l /proc/PID/cwd



replacing PID with each of the qemu process IDs in turn.



My guess would be some subdirectory of /rhev/data-center/mnt/blockSD/



Then start your test script from that directory.
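If you want to script that lookup, here is a rough Python 2 sketch
(it assumes a Linux /proc and root access; nothing oVirt-specific):

  import os

  # Rough equivalent of `ps ax | grep qemu` followed by
  # `ls -l /proc/PID/cwd`: print each qemu process ID together
  # with the directory it was started in.
  for pid in os.listdir('/proc'):
      if not pid.isdigit():
          continue
      try:
          argv0 = open('/proc/%s/cmdline' % pid).read().split('\0')[0]
          if 'qemu' in os.path.basename(argv0):
              print pid, os.readlink('/proc/%s/cwd' % pid)
      except (IOError, OSError):
          pass  # process exited, or we lack permission to read its cwd

Then os.chdir() into the reported directory before calling g.launch().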



Another thing you could do is to file a bug against oVirt asking them not to use relative paths for backing disks, since plenty of people have problems with this.



Rich.



--

Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org



Thank you, Richard. I’ve posted a message to the RH bugtracker: https://bugzilla.redhat.com/show_bug.cgi?id=1053684

Further work on this problem made things even more complicated: I found that several qcow2 disks in turn have qcow2 disks as their backing disks. So now I have to resolve the qcow2 chains down to the raw disks recursively in order to access them with libguestfs.
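What I have in mind is something like the following Python 2 sketch.
It shells out to qemu-img, parses the plain-text output shown earlier,
and assumes each relative backing reference resolves against the
directory of the image that contains it:

  import os
  import subprocess

  def backing_of(path):
      # Return the backing file reference recorded in the image header,
      # or None for a base (e.g. raw) image.
      out = subprocess.Popen(['qemu-img', 'info', path],
                             stdout=subprocess.PIPE).communicate()[0]
      for line in out.splitlines():
          if line.startswith('backing file:'):
              # strip a possible " (actual path: ...)" suffix
              return line.split(':', 1)[1].strip().split(' (')[0]
      return None

  def resolve_chain(path):
      # Follow the backing references down to the base image.
      chain = [path]
      ref = backing_of(path)
      while ref is not None:
          if not os.path.isabs(ref):
              ref = os.path.normpath(
                  os.path.join(os.path.dirname(chain[-1]), ref))
          chain.append(ref)
          ref = backing_of(ref)
      return chain

Run against an image path under
/rhev/data-center/mnt/blockSD/<sd-uuid>/images/<img-uuid>/ (where the
relative references actually resolve), the last element of the chain
should be the raw base volume that libguestfs can open.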


Vitaly Isaev
Software engineer
Information security department
Fintech JSC, Moscow, Russia

