[dm-devel] [PATCH v2 0/6] dm verity: optionally use tasklets

Milan Broz gmazyland at gmail.com
Wed Jul 27 08:23:37 UTC 2022


On 27/07/2022 01:04, Mike Snitzer wrote:
> On Tue, Jul 26 2022 at  5:44P -0400,
> Milan Broz <gmazyland at gmail.com> wrote:
> 
>> On 26/07/2022 18:09, Mike Snitzer wrote:
>>> Hi,
>>>
>>> Please see this updated patchset that reflects what I've staged for
>>> the 5.20 merge window, see:
>>> https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/log/?h=dm-5.20
>>>
>>> My only outstanding question, from previous v1 patchset, is: should
>>> the verify_wq be created using WQ_HIGHPRI instead of WQ_CPU_INTENSIVE?
>>> (I doubt it has a significant impact on performance but if you have
>>> insight on why you made that change, and if it is meaningful, I'd happily
>>> apply the change).
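>>>
>>> For context, a hedged sketch of the two options being weighed (this is
>>> illustrative only, not the actual dm-verity source; the real workqueue
>>> name, extra flags, and max_active value may differ):
>>>
>>> ```c
>>> #include <linux/workqueue.h>
>>>
>>> static struct workqueue_struct *verify_wq;
>>>
>>> /* WQ_CPU_INTENSIVE exempts the work items from the per-CPU
>>>  * concurrency management; WQ_HIGHPRI instead runs them from a
>>>  * high-priority (elevated nice level) worker pool. */
>>> static int create_verify_wq(bool highpri)
>>> {
>>> 	unsigned int flags = WQ_MEM_RECLAIM |
>>> 		(highpri ? WQ_HIGHPRI : WQ_CPU_INTENSIVE);
>>>
>>> 	verify_wq = alloc_workqueue("verify_wq", flags, 1);
>>> 	return verify_wq ? 0 : -ENOMEM;
>>> }
>>> ```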
>>>
>>> I've tested using cryptsetup's testsuite (which has dm-verity tests)
>>> but I haven't tested the "try_verify_in_tasklet" feature.
> 
> I wasn't lying above: I haven't tested the "try_verify_in_tasklet"
> feature.  I just figured I didn't break what Huck posted by cleaning
> it up while reviewing closely ;)

:) What I am trying to avoid is these patches reaching the Linux tree
before they are properly tested, even in non-standard configurations
(like FEC enabled).

Currently we do not even have enough HW for GitLab runners for upstream
cryptsetup CI testing, and regressions like these will cause *huge* pain for us later.

>> I added new veritysetup option --use-tasklets for testing to a new branch
>> https://gitlab.com/cryptsetup/cryptsetup/-/commits/verify-tasklet
>>
>> I tried to run verity-compat-test (on the branch above); I have not used
>> the flag yet, just in one simple option-flag test (see the branch).
> 
> OK, thanks for doing this work, really appreciate it.  How is it I
> would initiate this test using your "verify-tasklet" branch?

Basically, just check out that branch and compile it
(autogen.sh, configure, make, make check-programs), then run the verity test:

  cd tests; ./verity-compat-test

Even without enabling the feature, FEC tests are skipped for some reason...
(Check for N/A results that should not be there.)

Then you can easily enable "--use-tasklets" for every open; I would just
comment out this condition (so the flag assignment below it becomes
unconditional):

--- a/src/veritysetup.c
+++ b/src/veritysetup.c
@@ -184,7 +184,7 @@ static int _activate(const char *dm_device,
                 activate_flags |= CRYPT_ACTIVATE_IGNORE_ZERO_BLOCKS;
         if (ARG_SET(OPT_CHECK_AT_MOST_ONCE_ID))
                 activate_flags |= CRYPT_ACTIVATE_CHECK_AT_MOST_ONCE;
-       if (ARG_SET(OPT_USE_TASKLETS_ID))
+//     if (ARG_SET(OPT_USE_TASKLETS_ID))
                 activate_flags |= CRYPT_ACTIVATE_TASKLETS;


compile it, and run the verity-compat-test again.

For me, it gets stuck on the first in-kernel verification (non-FEC) test.
Some trace below...

(Side note: we do not have any easy way to check whether dm-verity
was compiled without FEC. The target version is the same either way, and
I do not want to introduce any kernel config parsing in libcryptsetup....
The same perhaps applies to other targets; any ideas?
It would help us to make the test error out more clearly in that case.)
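
One possible stopgap, purely a sketch and not existing cryptsetup code (the
helper name and the config-file convention are my own): the test could
consult the kernel's build config, when one is available, and error out
instead of silently printing N/A:

```shell
# Hypothetical helper for verity-compat-test: report whether a given
# kernel config file has dm-verity FEC support enabled.
# Prints "yes", "no", or "unknown" (config file not readable).
verity_fec_in_config() {
	# $1: path to a kernel config, e.g. /boot/config-$(uname -r)
	[ -r "$1" ] || { echo unknown; return 0; }
	if grep -q '^CONFIG_DM_VERITY_FEC=y' "$1"; then
		echo yes
	else
		echo no
	fi
}
```

A caller could then fail the FEC cases loudly when this prints "no"; the
"unknown" fallback is needed because many kernels do not ship their config.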

Milan

Jul 27 10:10:20 : device-mapper: verity: 7:2: reached maximum errors
Jul 27 10:10:21 : loop1: detected capacity change from 0 to 1094
Jul 27 10:10:21 : device-mapper: verity: sha256 using implementation "sha256-generic"
Jul 27 10:10:22 : loop1: detected capacity change from 0 to 1094
Jul 27 10:10:22 : device-mapper: verity: sha256 using implementation "sha256-generic"
Jul 27 10:15:29 : INFO: task systemd-udevd:6842 blocked for more than 122 seconds.
Jul 27 10:15:29 :       Not tainted 5.19.0-rc8+ #767
Jul 27 10:15:29 : "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 27 10:15:29 : task:systemd-udevd   state:D stack:    0 pid: 6842 ppid:   540 flags:0x00004006
Jul 27 10:15:29 : Call Trace:
Jul 27 10:15:29 :  __schedule+0x3f9/0xcc0
Jul 27 10:15:29 :  schedule+0x40/0xb0
Jul 27 10:15:29 :  io_schedule+0x3c/0x60
Jul 27 10:15:29 :  folio_wait_bit_common+0xe2/0x310
Jul 27 10:15:29 :  ? filemap_invalidate_unlock_two+0x40/0x40
Jul 27 10:15:29 :  __filemap_get_folio+0x4d8/0x500
Jul 27 10:15:29 :  truncate_inode_pages_range+0x1c2/0x4e0
Jul 27 10:15:29 :  ? lockdep_hardirqs_on+0xbf/0x150
Jul 27 10:15:29 :  ? smp_call_function_many_cond+0x2e5/0x310
Jul 27 10:15:29 :  ? on_each_cpu_cond_mask+0x32/0x60
Jul 27 10:15:29 :  ? trace_hardirqs_on+0x35/0xd0
Jul 27 10:15:29 :  ? smp_call_function_many_cond+0xe9/0x310
Jul 27 10:15:29 :  ? buffer_exit_cpu_dead+0x80/0x80
Jul 27 10:15:29 :  ? buffer_exit_cpu_dead+0x80/0x80
Jul 27 10:15:29 :  ? generic_remap_file_range_prep+0xcb0/0xcb0
Jul 27 10:15:29 :  ? on_each_cpu_cond_mask+0x3c/0x60
Jul 27 10:15:29 :  ? generic_remap_file_range_prep+0xcb0/0xcb0
Jul 27 10:15:29 :  truncate_inode_pages+0xc/0x10
Jul 27 10:15:29 :  blkdev_flush_mapping+0x66/0x100
Jul 27 10:15:29 :  blkdev_put_whole+0x38/0x40
Jul 27 10:15:29 :  blkdev_put+0x92/0x1c0
Jul 27 10:15:29 :  blkdev_close+0x13/0x20
Jul 27 10:15:29 :  __fput+0x80/0x270
Jul 27 10:15:29 :  ? lockdep_hardirqs_on+0xbf/0x150
Jul 27 10:15:29 :  ____fput+0x8/0x10
Jul 27 10:15:29 :  task_work_run+0x4f/0x80
Jul 27 10:15:29 :  do_exit+0x315/0x9a0
Jul 27 10:15:29 :  ? lockdep_hardirqs_on+0xbf/0x150
Jul 27 10:15:29 :  do_group_exit+0x26/0x90
Jul 27 10:15:29 :  get_signal+0xa2d/0xa60
Jul 27 10:15:29 :  arch_do_signal_or_restart+0x1e/0x5c0
Jul 27 10:15:29 :  ? __this_cpu_preempt_check+0xf/0x11
Jul 27 10:15:29 :  ? lockdep_hardirqs_on+0xbf/0x150
Jul 27 10:15:29 :  ? exit_to_user_mode_prepare+0x10f/0x250
Jul 27 10:15:29 :  ? syscall_exit_to_user_mode+0x1a/0x50
Jul 27 10:15:29 :  exit_to_user_mode_prepare+0x129/0x250
Jul 27 10:15:29 :  syscall_exit_to_user_mode+0x1a/0x50
Jul 27 10:15:29 :  do_int80_syscall_32+0x3c/0x90
Jul 27 10:15:29 :  entry_INT80_32+0xf0/0xf0
Jul 27 10:15:29 : EIP: 0xb7f61022
Jul 27 10:15:29 : EAX: fffffffc EBX: 00000005 ECX: 0054cbdc EDX: 00000100
Jul 27 10:15:29 : ESI: b7ea8000 EDI: 00570820 EBP: 00570868 ESP: bfc0890c
Jul 27 10:15:29 : DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 007b EFLAGS: 00000246
Jul 27 10:15:29 :
Jul 27 10:15:29 : Showing all locks held in the system:
Jul 27 10:15:29 : 1 lock held by khungtaskd/34:
Jul 27 10:15:29 :  #0: c1b0bc98 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x1f/0x152
Jul 27 10:15:29 : 2 locks held by kworker/u16:5/6148:
Jul 27 10:15:29 : 1 lock held by systemd-udevd/6842:
Jul 27 10:15:29 :  #0: c51344bc (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0x30/0x1c0
Jul 27 10:15:29 :
Jul 27 10:15:29 : =============================================


