[Libguestfs] [V2V PATCH 0/5] Bring support for virtio-scsi back to Windows

Denis V. Lunev den at virtuozzo.com
Fri Feb 24 11:55:09 UTC 2023


On 2/24/23 05:56, Laszlo Ersek wrote:
> On 2/23/23 12:48, Denis V. Lunev wrote:
>> On 2/23/23 11:43, Laszlo Ersek wrote:
>>> On 2/22/23 19:20, Andrey Drobyshev wrote:
>>>> Since commits b28cd1dc ("Remove requested_guestcaps / rcaps") and f0afc439
>>>> ("Remove guestcaps_block_type Virtio_SCSI"), support for installing the
>>>> virtio-scsi driver has been missing from virt-v2v.  AFAIU, plans and
>>>> demands for bringing this feature back have been out there for a while.
>>>> E.g. I've found a corresponding issue which is still open [1].
>>>>
>>>> The code in b28cd1dc, f0afc439 was removed as part of dropping the old
>>>> in-place support.  However, with the new in-place support present,
>>>> bringing this same code (partially) back with several additions and
>>>> improvements allows me to successfully convert and boot a Windows guest
>>>> with a virtio-scsi disk controller.  So please consider the following
>>>> implementation of this feature.
>>>>
>>>> [1] https://github.com/libguestfs/virt-v2v/issues/12
>>> (Preamble: I'm 100% deferring to Rich on this, so take my comments for
>>> what they are worth.)
>>>
>>> In my opinion, the argument made is weak. This cover letter does not say
>>> "why" -- it does not explain why virtio-blk is insufficient for
>>> *Virtuozzo*.
>>>
>>> Second, reference [1] -- issue #12 -- doesn't sound too convincing. It
>>> writes, "opinionated qemu-based VMs that exclusively use UEFI and only
>>> virtio devices". "Opinionated" is the key word there. They're entitled
>>> to an opinion, they're not entitled to others conforming to their
>>> opinion. I happen to be opinionated as well, and I hold the opposite
>>> view.
>>>
>>> (BTW even if they insist on UEFI + virtio, which I do sympathize with,
>>> requiring virtio-scsi exclusively is hard to sell. In particular,
>>> virtio-blk nowadays has support for trim/discard, so the main killer
>>> feature of virtio-scsi is no longer unique to virtio-scsi. Virtio-blk is
>>> also simpler code and arguably faster. Note: I don't want to convince
>>> anyone about what *they* support, just pointing out that virt-v2v
>>> outputting solely virtio-blk disks is entirely fine, as far as
>>> virt-v2v's mission is concerned -- "salvage 'pet' (not 'cattle') VMs
>>> from proprietary hypervisors, and make sure they boot". Virtio-blk is
>>> sufficient for booting, further tweaks are up to the admin (again,
>>> virt-v2v is not for mass/cattle conversions). The "Hetzner Cloud" is not
>>> a particular output module of virt-v2v, so I don't know why virt-v2v's
>>> mission should extend to making the converted VM bootable on "Hetzner
>>> Cloud".)
>>>
>>> Rich has recently added tools for working with the virtio devices in
>>> windows guests; maybe those can be employed as extra (manual) steps
>>> before or after the conversion.
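>>>
>>> (A hedged sketch of what I mean -- assuming the virtio-win injection
>>> support in virt-customize; the image name is hypothetical and the
>>> exact option spelling should be checked against the man page:
>>>
>>>    virt-customize -a windows.img --inject-virtio-win osinfo
>>>
>>> where "osinfo" asks the tool to locate the virtio-win drivers via the
>>> libosinfo database.)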
>>>
>>> Third, the last patch in the series is overreaching IMO; it switches the
>>> default. That causes a behavior change for conversions that have been
>>> working well and have been thoroughly tested. It doesn't just add
>>> a new use case, it throws away an existing use case for the new one's
>>> sake, IIUC. I don't like that.
>>>
>>> Again -- fully deferring to Rich on the final verdict (and the review).
>>>
>>> Laszlo
>> OK. Let me clarify the situation a bit.
>>
>> These patches certainly originate from the good old year 2017, when
>> VirtIO BLK was completely unacceptable to us due to the missing
>> discard feature, which is now in.
>>
>> Thus you are completely right about the default: changing the default
>> (if that happens at all) should go in a separate patch. Anyway, at
>> first glance it should not even be needed.
>>
>> Normally, in in-place mode, which is what we mostly care
>> about, v2v should bring the guest configuration in sync
>> with what is written in domain.xml, and that does not involve
>> any defaults.
>>
>> VirtIO SCSI should be supported, as users should have
>> the freedom to choose between VirtIO SCSI and VirtIO BLK
>> even after the guest installation.
>>
>> Does this sound acceptable?
> I've got zero experience with in-place conversions. I've skimmed
> <https://libguestfs.org/virt-v2v-in-place.1.html> now, but the use case
> continues to elude me.
>
> What is in-place conversion good for? If you already have a libvirt
> domain XML (i.e., one *not* output by virt-v2v as the result of a
> conversion from a foreign hypervisor), what do you need
> virt-v2v-in-place for?
>
> My understanding is that virt-v2v produces both an output disk (set) and
> a domain description (be it a QEMU cmdline, a libvirt domain XML, an
> OVF, ...), *and* that these two kinds of output belong together; there
> is not one without the other. What's the data flow with in-place
> conversion?
>
> Laszlo
>
We use v2v as a guest conversion engine and prepare the VM configuration
ourselves. This is more appropriate for us, as we have different
constraints under different conditions.

This also makes sense outside of the foreign-hypervisor case: we can
change the bus of a disk and then call v2v to teach the guest to boot
from the new location. This has proven very useful for fixing some
strange issues on the customer's side.
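
A concrete sketch of that flow, assuming the libvirtxml input mode of
virt-v2v-in-place and hypothetical paths: flip the disk bus in the
domain XML, e.g. change

   <target dev='vda' bus='virtio'/>

to

   <target dev='sda' bus='scsi'/>

(plus a <controller type='scsi' model='virtio-scsi'/> element under
<devices>), and then re-run the conversion against the updated
definition:

   virt-v2v-in-place -i libvirtxml domain.xml

so that the matching driver gets configured inside the guest before
its next boot.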

That is it.

Den

P.S. Resent (original mail was accidentally sent off-list)


