[Libguestfs] [PATCH 2/4] test-md-and-lvm-devices: work around RAID0 regression in Linux v3.14/v5.4

Laszlo Ersek lersek at redhat.com
Mon Sep 20 05:23:33 UTC 2021


The "test-md-and-lvm-devices" test case creates, among other things, a
RAID0 array (md127) that spans two *differently sized* block devices
(sda1: 20MB, lv0: 16MB).

In Linux v3.14, the layout of such arrays was changed incompatibly and
undetectably. If an array were created with a pre-v3.14 kernel and
assembled on a v3.14+ kernel, or vice versa, data could be corrupted.

In Linux v5.4, a mitigation was added, requiring the user to specify the
layout version of such RAID0 arrays explicitly, as a module parameter. If
the user fails to specify a layout version, the v5.4+ kernel refuses to
assemble such arrays. This is why "test-md-and-lvm-devices" currently
fails with any v5.4+ appliance kernel.
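(For reference only, not part of this patch: a sketch of how such an array
could still be assembled on a v5.4+ kernel, using the "raid0.default_layout"
module parameter that the v5.4 mitigation introduced. The value mapping --
1 for the pre-v3.14 layout, 2 for the v3.14+ layout -- should be verified
against the running kernel's documentation, since choosing the wrong value
is exactly the data-corruption scenario described above.)

```shell
# Select the RAID0 multi-zone layout explicitly before assembly; picking
# the wrong value silently corrupts data, so verify which kernel created
# the array first.
modprobe raid0 default_layout=2

# Equivalently, for the built-in driver, on the kernel command line:
#   raid0.default_layout=2
```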

Until we implement a more general solution (see the bugzilla link below),
work around the issue by sizing sda1 and lv0 identically. For this,
increase the size of sdb1 to 24MB: when one 4MB extent is spent on LVM
metadata, the resultant lv0 size (20MB) will precisely match the size of
sda1.
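The size arithmetic can be checked from the sector numbers in the hunk
below (MBR partitions, 512-byte sectors; inclusive start/end sectors as
passed to guestfish's part-add):

```shell
# Partition sizes from the part-add start/end sectors in the patch.
sda1_sectors=$(( 41023 - 64 + 1 ))                 # 40960 sectors
sdb1_sectors=$(( 49215 - 64 + 1 ))                 # 49152 sectors

echo $(( sda1_sectors * 512 / 1024 / 1024 ))       # sda1: 20 (MiB)
echo $(( sdb1_sectors * 512 / 1024 / 1024 ))       # sdb1: 24 (MiB)

# lv0 is sdb1 minus the one 4 MiB extent consumed by LVM metadata,
# which makes it exactly as large as sda1:
echo $(( sdb1_sectors * 512 / 1024 / 1024 - 4 ))   # lv0: 20 (MiB)
```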

This workaround only affects sizes, and does not interfere with the
original purpose of this test case, which is to test various *stackings*
between disk partitions, software RAID (md), and LVM logical volumes.

Related: https://bugzilla.redhat.com/show_bug.cgi?id=2005485
Signed-off-by: Laszlo Ersek <lersek at redhat.com>
---
 tests/md/test-md-and-lvm-devices.sh | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/tests/md/test-md-and-lvm-devices.sh b/tests/md/test-md-and-lvm-devices.sh
index 5e82e3a4ff69..54f1c3bfb3f5 100755
--- a/tests/md/test-md-and-lvm-devices.sh
+++ b/tests/md/test-md-and-lvm-devices.sh
@@ -53,7 +53,7 @@ trap cleanup INT QUIT TERM EXIT
 # sda2: 20M PV (vg1)
 # sda3: 20M MD (md125)
 #
-# sdb1: 20M PV (vg0)
+# sdb1: 24M PV (vg0) [*]
 # sdb2: 20M PV (vg2)
 # sdb3: 20M MD (md125)
 #
@@ -66,6 +66,9 @@ trap cleanup INT QUIT TERM EXIT
 # vg3   : VG (md125)
 # lv3   : LV (vg3)
 #
+# [*] The reason for making sdb1 4M larger than sda1 is that the LVM metadata
+# will consume one 4MB extent, and we need lv0 to offer exactly as much space
+# as sda1 does, for combining them in md127. Refer to RHBZ#2005485.
 
 guestfish <<EOF
 # Add 2 empty disks
@@ -79,9 +82,10 @@ part-add /dev/sda p 64 41023
 part-add /dev/sda p 41024 81983
 part-add /dev/sda p 81984 122943
 part-init /dev/sdb mbr
-part-add /dev/sdb p 64 41023
-part-add /dev/sdb p 41024 81983
-part-add /dev/sdb p 81984 122943
+part-add /dev/sdb p 64 49215
+part-add /dev/sdb p 49216 90175
+part-add /dev/sdb p 90176 131135
+
 
 # Create volume group and logical volume on sdb1
 pvcreate /dev/sdb1
-- 
2.19.1.3.g30247aa5d201



