[Cluster-devel] [PATCH dlm-tool 13/14] dlm_controld: plock log lock state

Alexander Aring aahringo at redhat.com
Fri Mar 3 22:31:02 UTC 2023


Hi,

On Fri, Mar 3, 2023 at 10:52 AM Andreas Gruenbacher <agruenba at redhat.com> wrote:
>
> Now, let me get to the core of the matter.  We've been talking about
> using user-space (SDT) trace points for collecting the data, and I still
> think that that's what we should do instead of introducing a new
> dlm_controld log file.  In the dlm_controld code, this would look like
> the below patch.
>
> Note that <sys/sdt.h> is part of the systemtap-sdt-devel package, so a
> "BuildRequires: systemtap-sdt-devel" dependency will be needed in
> dlm.spec.
>

Ah, ok. This answers another question I had.

> With that, we can use standard tools like perf, bpftrace, etc. for
> collecting all the relevant information without any further
> modifications to dlm_controld.  We can also collect additional kernel
> and user-space trace point data at the same time with very little
> additional effort.
>
> For example, here is how to register the four plock dlm_controld trace
> points in perf:
>
>   for ev in \
>       sdt_dlm_controld:plock_lock_begin \
>       sdt_dlm_controld:plock_lock_end \
>       sdt_dlm_controld:plock_wait_begin \
>       sdt_dlm_controld:plock_wait_end; do \
>     perf probe -x /usr/sbin/dlm_controld $ev; \
>   done
>
> The events can then be recorded with "perf record":
>
>   perf record \
>     -e sdt_dlm_controld:plock_lock_begin \
>     -e sdt_dlm_controld:plock_lock_end \
>     -e sdt_dlm_controld:plock_wait_begin \
>     -e sdt_dlm_controld:plock_wait_end \
>     [...]
>
> We've already gone through how the resulting log can be processed with
> "perf script".  One possible result would be the log file format that
> lockdb_plot expects, but there are countless other possibilities.
>
> Other useful "tricks":
>
>   $ bpftrace -l 'usdt:/usr/sbin/dlm_controld:*'
>
>   $ readelf -n /usr/sbin/dlm_controld | sed -ne '/\.note\.stapsdt/,/^$/p'
>

Looks easy enough.

- Alex
