<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 14 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-language:EN-US;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:blue;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:purple;
text-decoration:underline;}
p.MsoPlainText, li.MsoPlainText, div.MsoPlainText
{mso-style-priority:99;
mso-style-link:"Текст Знак";
margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-language:EN-US;}
span.a
{mso-style-name:"Текст Знак";
mso-style-priority:99;
mso-style-link:Текст;
font-family:"Calibri","sans-serif";}
span.shorttext
{mso-style-name:short_text;}
span.hps
{mso-style-name:hps;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri","sans-serif";
mso-fareast-language:EN-US;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:2.0cm 42.5pt 2.0cm 3.0cm;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="RU" link="blue" vlink="purple">
<div class="WordSection1">
<p class="MsoPlainText"><span lang="EN-US" style="mso-fareast-language:RU">-----Original Message-----<br>
From: Richard W.M. Jones [mailto:rjones@redhat.com] <br>
Sent: Tuesday, January 14, 2014 9:43 PM<br>
To: </span><span style="mso-fareast-language:RU">Исаев</span><span style="mso-fareast-language:RU">
</span><span style="mso-fareast-language:RU">Виталий</span><span style="mso-fareast-language:RU">
</span><span style="mso-fareast-language:RU">Анатольевич</span><span lang="EN-US" style="mso-fareast-language:RU"><br>
Cc: libguestfs@redhat.com<br>
Subject: Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster</span><span lang="EN-US"><o:p></o:p></span></p>
<p class="MsoPlainText"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoPlainText">On Tue, Jan 14, 2014 at 02:57:35PM +0000, Исаев Виталий Анатольевич wrote:<o:p></o:p></p>
<p class="MsoPlainText">> Dear Rich, thank you for the prompt reply to my question. Similar problems have been found with all of the remaining thin-provisioned disks in the cluster, while all the preallocated disks were handled by libguestfs correctly. I guess these issues were caused by reason (b) and probably (c):<o:p></o:p></p>
<p class="MsoPlainText">> <o:p></o:p></p>
<p class="MsoPlainText">> The backing file of any of the thin-provisioned disks does not exist. For instance, let’s consider the /dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f symbolic link pointing to /dev/dm-30:<o:p></o:p></p>
<p class="MsoPlainText">> [root@rhevh1 mapper]# pwd<o:p></o:p></p>
<p class="MsoPlainText">> /dev/mapper<o:p></o:p></p>
<p class="MsoPlainText">> [root@rhevh1 mapper]# qemu-img info 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f<o:p></o:p></p>
<p class="MsoPlainText">> image: 1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f<o:p></o:p></p>
<p class="MsoPlainText">> file format: qcow2<o:p></o:p></p>
<p class="MsoPlainText">> virtual size: 40G (42949672960 bytes)<o:p></o:p></p>
<p class="MsoPlainText">> disk size: 0<o:p></o:p></p>
<p class="MsoPlainText">> cluster_size: 65536<o:p></o:p></p>
<p class="MsoPlainText">> backing file: ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5<o:p></o:p></p>
<p class="MsoPlainText">> [root@rhevh1 mapper]# ll ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5<o:p></o:p></p>
<p class="MsoPlainText">> ls: cannot access ../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5: No such file or directory<o:p></o:p></p>
<p class="MsoPlainText">><o:p> </o:p></p>
<p class="MsoPlainText">> Note that /dev/dm-30 is not accessible with libguestfs.<o:p></o:p></p>
<p class="MsoPlainText">> <o:p></o:p></p>
<p class="MsoPlainText">> Now I am trying to find the files with the same name. As a result I receive three symbolic links pointing to /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5:<o:p></o:p></p>
<p class="MsoPlainText">> [root@rhevh1 mapper]# find / -name cbe36298-6397-4ffa-ba8c-5f64e90023e5<o:p></o:p></p>
<p class="MsoPlainText">> /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5<o:p></o:p></p>
<p class="MsoPlainText">> /var/lib/stateless/writable/rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5<o:p></o:p></p>
<p class="MsoPlainText">> /rhev/data-center/mnt/blockSD/1a9aa971-f81f-4ad8-932f-607034c924fc/images/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5<o:p></o:p></p>
<p class="MsoPlainText">> <o:p></o:p></p>
<p class="MsoPlainText">> In turn, the /dev/1a9aa971-f81f-4ad8-932f-607034c924fc/cbe36298-6397-4ffa-ba8c-5f64e90023e5 file is a symbolic link which points to /dev/dm-19.<o:p></o:p></p>
<p class="MsoPlainText">> <o:p></o:p></p>
<p class="MsoPlainText">> Finally, I am trying to launch libguestfs on the block device directly:<o:p></o:p></p>
<p class="MsoPlainText">> [root@rhevh1 mapper]# qemu-img info /dev/dm-19<o:p></o:p></p>
<p class="MsoPlainText">> image: /dev/dm-19<o:p></o:p></p>
<p class="MsoPlainText">> file format: raw<o:p></o:p></p>
<p class="MsoPlainText">> virtual size: 40G (42949672960 bytes)<o:p></o:p></p>
<p class="MsoPlainText">> disk size: 0<o:p></o:p></p>
<p class="MsoPlainText">> [root@rhevh1 mapper]# python<o:p></o:p></p>
<p class="MsoPlainText">> Python 2.6.6 (r266:84292, Oct 12 2012, 14:23:48) [GCC 4.4.6 20120305 (Red Hat 4.4.6-4)] on linux2<o:p></o:p></p>
<p class="MsoPlainText">> Type "help", "copyright", "credits" or "license" for more information.<o:p></o:p></p>
<p class="MsoPlainText">> >>> import guestfs<o:p></o:p></p>
<p class="MsoPlainText">> >>> g = guestfs.GuestFS()<o:p></o:p></p>
<p class="MsoPlainText">> >>> g.add_drive_opts("/dev/dm-19",readonly=1)<o:p></o:p></p>
<p class="MsoPlainText">> >>> g.launch()<o:p></o:p></p>
<p class="MsoPlainText">> >>> g.lvs()<o:p></o:p></p>
<p class="MsoPlainText">> []<o:p></o:p></p>
<p class="MsoPlainText">> >>> g.pvs()<o:p></o:p></p>
<p class="MsoPlainText">> []<o:p></o:p></p>
<p class="MsoPlainText">> >>> g.list_partitions()<o:p></o:p></p>
<p class="MsoPlainText">> ['/dev/vda1', '/dev/vda2']<o:p></o:p></p>
<p class="MsoPlainText">> >>> g.inspect_os()<o:p></o:p></p>
<p class="MsoPlainText">> ['/dev/vda1']<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">This works because you're accessing the backing disk, not the top disk. Since the backing disk (in this case) doesn't itself have a backing disk, qemu has no problem opening it.<o:p></o:p></p>
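<p class="MsoPlainText">The failure on the top disk can be reproduced with plain path arithmetic: qemu resolves a relative backing-file reference against the directory containing the image, so a qcow2 volume exposed under /dev/mapper goes looking for its backing file under /dev, where RHEV's symlink layout does not put it. A minimal sketch (pure path math, no qemu needed; the resolution rule assumed here is the standard "relative to the image's directory" behaviour, and the paths are the ones from the session above):</p>
<pre>
```python
import os.path

def resolve_backing(image_path, backing_ref):
    """Mimic how qemu resolves a backing-file reference: an absolute
    reference is used as-is, a relative one is resolved against the
    directory containing the top image (not the caller's cwd)."""
    if os.path.isabs(backing_ref):
        return backing_ref
    return os.path.normpath(
        os.path.join(os.path.dirname(image_path), backing_ref))

# The qcow2 node lives under /dev/mapper, so the relative reference
# climbs out of /dev/mapper into /dev:
top = "/dev/mapper/1a9aa971--f81f--4ad8--932f--607034c924fc-666faa62--da73--4465--aed2--912119fcf67f"
ref = "../6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5"
print(resolve_backing(top, ref))
# -> /dev/6439863f-2d4e-48ae-a150-f9054650789c/cbe36298-6397-4ffa-ba8c-5f64e90023e5
```
</pre>
<p class="MsoPlainText">That resolved path does not exist on the node (only /dev/1a9aa971-…/cbe36298-… does), which is exactly the "No such file or directory" failure shown in the ls output above.</p>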
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">> Now I’m a little bit confused by the results of my research. I found that a VM with only one disk attached in fact has at least two block devices mapped into the hypervisor’s file system – I mean /dev/dm-19 (raw) and /dev/dm-30 (qcow2). The RHEV-M API (a.k.a. the Python oVirt SDK) provides no info about the first one, while the second cannot be accessed from libguestfs. I urgently need to work with chosen VM disk images through the libguestfs layer, but I don’t know exactly which images belong to which VM. It seems like I’m going the hard way :) Sincerely,<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">Basically you need to find out which directory RHEV-M itself starts qemu in. Try going onto the node and doing:<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText"> ps ax | grep qemu<o:p></o:p></p>
<p class="MsoPlainText"> ls -l /proc/PID/cwd<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">replacing PID with one of the qemu process IDs.<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">My guess would be some subdirectory of /rhev/data-center/mnt/blockSD/<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">Then start your test script from that directory.<o:p></o:p></p>
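<p class="MsoPlainText">The two commands above can also be done in one pass over /proc. A small sketch (Linux-only; the helper name is made up, and reading other users' cwd links requires root):</p>
<pre>
```python
import os

def qemu_cwds():
    """Map each qemu process ID to its current working directory,
    read from the /proc/PID/cwd symlink."""
    cwds = {}
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            # /proc/PID/comm holds the process's command name
            with open("/proc/%s/comm" % pid) as f:
                comm = f.read().strip()
            if "qemu" in comm:
                cwds[int(pid)] = os.readlink("/proc/%s/cwd" % pid)
        except OSError:
            # process exited, or we lack permission; skip it
            continue
    return cwds
```
</pre>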
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">Another thing you could do is to file a bug against oVirt asking them not to use relative paths for backing disks, since plenty of people have problems with this.<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">Rich.<o:p></o:p></p>
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoPlainText">--<o:p></o:p></p>
<p class="MsoPlainText"><span lang="EN-US">Richard Jones, Virtualization Group, Red Hat
</span><a href="http://people.redhat.com/~rjones"><span lang="EN-US" style="color:windowtext;text-decoration:none">http://people.redhat.com/~rjones</span></a><span lang="EN-US"> libguestfs lets you edit virtual machines.
</span>Supports shell scripting, bindings from many languages. <a href="http://libguestfs.org">
<span style="color:windowtext;text-decoration:none">http://libguestfs.org</span></a><o:p></o:p></p>
<p class="MsoPlainText"><span style="color:black"><o:p> </o:p></span></p>
<p class="MsoPlainText" style="margin-left:35.4pt"><span lang="EN-US" style="color:black">Thank you, Richard. I’ve posted a message to the RH bugtracker:
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1053684">https://bugzilla.redhat.com/show_bug.cgi?id=1053684</a><o:p></o:p></span></p>
<p class="MsoPlainText" style="margin-left:35.4pt"><span lang="EN">Further work on this problem made things even more complicated: I found that several qcow2 disks have qcow2 disks as their backing disks in turn. So now I have to resolve the qcow2 disks down to raw disks recursively in order to access them with libguestfs.<o:p></o:p></span></p>
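<p class="MsoPlainText" style="margin-left:35.4pt"><span lang="EN">The recursive resolution amounts to a plain chain walk. This is only an illustration: backing_of stands in for parsing qemu-img info output, and the paths are invented:</span></p>
<pre>
```python
import os.path

def resolve_chain(image, backing_of):
    """Walk a qcow2 backing chain down to the base (raw) image.
    backing_of(path) returns the image's backing-file reference
    (relative or absolute), or None for a base image; in real use it
    would parse qemu-img info output, here it is injected."""
    chain = [image]
    ref = backing_of(image)
    while ref is not None:
        if not os.path.isabs(ref):
            # relative references resolve against the current image's dir
            ref = os.path.normpath(
                os.path.join(os.path.dirname(chain[-1]), ref))
        chain.append(ref)
        ref = backing_of(ref)
    return chain

# Hypothetical two-level chain like the one described above:
fake = {
    "/rhev/sd/images/img/top.qcow2": "../base/mid.qcow2",
    "/rhev/sd/images/base/mid.qcow2": "../base/bottom.raw",
}
print(resolve_chain("/rhev/sd/images/img/top.qcow2", fake.get))
# -> ['/rhev/sd/images/img/top.qcow2',
#     '/rhev/sd/images/base/mid.qcow2',
#     '/rhev/sd/images/base/bottom.raw']
```
</pre>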
<p class="MsoPlainText"><o:p> </o:p></p>
<p class="MsoNormal" style="margin-left:35.4pt"><span lang="EN-US" style="font-size:10.0pt;mso-fareast-language:RU">Vitaly Isaev<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left:35.4pt"><span lang="EN-US" style="font-size:10.0pt;color:gray;mso-fareast-language:RU">Software engineer<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left:35.4pt"><span lang="EN-US" style="font-size:10.0pt;color:gray;mso-fareast-language:RU">Information security department<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-left:35.4pt"><span lang="EN-US" style="font-size:10.0pt;color:gray;mso-fareast-language:RU">Fintech JSC, Moscow, Russia<o:p></o:p></span></p>
<p class="MsoPlainText"><span lang="EN-US" style="color:black"><o:p> </o:p></span></p>
</div>
</body>
</html>