[dm-devel] deterministic io throughput in multipath

Benjamin Marzinski bmarzins at redhat.com
Tue Jan 17 01:04:47 UTC 2017


On Mon, Jan 16, 2017 at 11:19:19AM +0000, Muneendra Kumar M wrote:
>    Hi Ben,
>    After the below discussion, we came up with an approach that meets our
>    requirement.
>    I have attached the patch, which is working well in our field tests.
>    Could you please review the attached patch and provide us your valuable
>    comments.

I can see a number of issues with this patch.

First, some nit-picks:
- I assume "dis_reinstante_time" should be "dis_reinstate_time"

- The indenting in check_path_validity_err is wrong, which made it
  confusing until I noticed that

if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0)

  doesn't have an open brace, and shouldn't indent the rest of the
  function.

- You call clock_gettime in check_path, but never use the result.

- In dict.c, instead of writing your own functions that are the same as
  the *_delay_checks functions, you could make those functions generic
  and use them for both.  To match the other generic function names,
  they would probably be something like

set_off_int_undef

print_off_int_undef

  You would also need to change DELAY_CHECKS_* and ERR_CHECKS_* to
  point to some common enum that you created, the way
  user_friendly_names_states (to name one of many) does. The generic
  enum used by *_off_int_undef would be something like:

enum no_undef {
	NU_NO = -1,
	NU_UNDEF = 0,
};

  The idea is to try to cut down on the number of functions that are
  simply copy-pasting other functions in dict.c.  A rough sketch of the
  shared pair is below.
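
For instance (a sketch only; the handler signature and the set_value()
helper are assumed to match what the existing *_delay_checks handlers
in dict.c already use):

static int
set_off_int_undef(vector strvec, void *ptr)
{
	int *int_ptr = (int *)ptr;
	char *buff;

	buff = set_value(strvec);
	if (!buff)
		return 1;

	/* "no" (or 0) explicitly disables the feature; any other
	 * value below 1 is treated as unset */
	if (!strcmp(buff, "no") || !strcmp(buff, "0"))
		*int_ptr = NU_NO;
	else if ((*int_ptr = atoi(buff)) < 1)
		*int_ptr = NU_UNDEF;

	free(buff);
	return 0;
}

static int
print_off_int_undef(char *buff, int len, int v)
{
	switch (v) {
	case NU_UNDEF:
		return 0;
	case NU_NO:
		return snprintf(buff, len, "\"no\"");
	default:
		return snprintf(buff, len, "%i", v);
	}
}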


Those are all minor cleanup issues, but there are some bigger problems.

Instead of checking whether san_path_err_threshold,
san_path_err_threshold_window, and san_path_err_recovery_time are
greater than zero separately, you should probably check them all at the
start of check_path_validity_err, and return 0 unless they are all set
(see the sketch below).  Right now, if a user sets san_path_err_threshold
and san_path_err_threshold_window but not san_path_err_recovery_time,
their path will never recover after it hits the error threshold.  I'm
pretty sure that you don't mean to permanently disable the paths.
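
Concretely, the guard could look something like this (just a sketch,
using the mpp field names from your patch):

	if (pp->mpp->san_path_err_threshold <= 0 ||
	    pp->mpp->san_path_err_threshold_window <= 0 ||
	    pp->mpp->san_path_err_recovery_time <= 0)
		return 0;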


time_t is a signed type, which means that if you get the clock time in
update_multipath and then fail to get the clock time in
check_path_validity_err, this check:

(start_time.tv_sec - pp->failure_start_time) < pp->mpp->san_path_err_threshold_window

will always be true.  I realize that clock_gettime is very unlikely to
fail.  But if it does, probably the safest thing to do is to just
immediately return 0 in check_path_validity_err, as sketched below.
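
Something like this at the top of the function would cover it (sketch):

	struct timespec start_time;

	/* without a trustworthy timestamp, skip the window arithmetic
	 * and treat the path as valid */
	if (clock_gettime(CLOCK_MONOTONIC, &start_time) != 0)
		return 0;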


The way you set path_failures in update_multipath may not get you what
you want.  It will only count path failures found by the kernel, and not
by the path checker.  If check_path finds the error, pp->state will be
set to PATH_DOWN before pp->dmstate is set to PSTATE_FAILED. That means
you will not increment path_failures. Perhaps this is what you want, but
I would assume that you would want to count every time the path goes
down, regardless of whether multipathd or the kernel noticed it.
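
If you do want to catch both cases, one way (just a sketch; newstate
here is the checker result that check_path already computes) is to
count every transition into the failed state:

	/* count the failure no matter whether the checker or the
	 * kernel noticed it first */
	if (newstate == PATH_DOWN && pp->state != PATH_DOWN)
		pp->path_failures++;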


I'm not super enthusiastic about how san_path_err_threshold_window
works.  First, it starts counting from when the path goes down, so if
the path takes long enough to get restored and then fails immediately,
it can just keep failing without ever accumulating
san_path_err_threshold failures inside the window, since it spends so
much of that time with the path failed.  Also, the window gets set on
the first error, and the failure count is never reset until the number
of errors is over the threshold.  This means that if you get one early
error and then a bunch of errors much later, you can go for
(2 x san_path_err_threshold) - 1 errors before you stop reinstating the
path, because the window resets in the middle of the string of errors.
It seems like a better idea would be to have check_path_validity_err
reset path_failures as soon as it notices that you are past
san_path_err_threshold_window, instead of waiting until the number of
errors hits san_path_err_threshold (see the sketch below).
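
In other words, something like this early in check_path_validity_err
(a sketch, using the fields from your patch):

	if ((start_time.tv_sec - pp->failure_start_time) >
	    pp->mpp->san_path_err_threshold_window) {
		/* the window expired without reaching the threshold,
		 * so start counting from scratch on the next failure */
		pp->path_failures = 0;
		pp->failure_start_time = 0;
	}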


If I were going to design this, I think I would have
san_path_err_threshold and san_path_err_recovery_time like you do, but
instead of having a san_path_err_threshold_window, I would have
something like san_path_err_forget_rate.  The idea is that after every
san_path_err_forget_rate successful path checks, you decrement
path_failures by 1. This means that there is no window after which you
reset.  If the path failures come in faster than the forget rate, you
will eventually hit the error threshold. This also has the benefit that
time when the path was down is naturally not counted as time when the
path wasn't having problems. But if you don't like my idea, yours will
work fine with some polish.
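
To make that concrete, a rough sketch (the per-path
san_path_err_forget_rate countdown is a hypothetical field that would
need to be added to struct path, initialized from the mpp value):

	/* on every successful path check, age out old failures */
	if (newstate == PATH_UP && pp->path_failures > 0 &&
	    --pp->san_path_err_forget_rate <= 0) {
		/* one full forget-rate interval has passed: forget one
		 * earlier failure and restart the countdown */
		pp->path_failures--;
		pp->san_path_err_forget_rate =
			pp->mpp->san_path_err_forget_rate;
	}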

-Ben


>    Below are the files that have been changed:
>     
>    libmultipath/config.c      |  3 +++
>    libmultipath/config.h      |  9 +++++++++
>    libmultipath/configure.c   |  3 +++
>    libmultipath/defaults.h    |  1 +
>    libmultipath/dict.c        | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>    libmultipath/dict.h        |  1 +
>    libmultipath/propsel.c     | 44 ++++++++++++++++++++++++++++++++++++++++++++
>    libmultipath/propsel.h     |  6 ++++++
>    libmultipath/structs.h     | 12 +++++++++++-
>    libmultipath/structs_vec.c | 10 ++++++++++
>    multipath/multipath.conf.5 | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>    multipathd/main.c          | 61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
>    We have added three new config parameters, whose descriptions are below.
>    1. san_path_err_threshold:
>            If set to a value greater than 0, multipathd will watch paths and
>    check how many times a path has failed due to errors. If the number
>    of failures on a particular path is greater than
>    san_path_err_threshold, then the path will not be reinstated until
>    san_path_err_recovery_time. These path failures should occur within a
>    san_path_err_threshold_window time frame; if not, we consider the path
>    good enough to reinstate.
>     
>    2. san_path_err_threshold_window:
>            If set to a value greater than 0, multipathd will check whether
>    the path failures have exceeded san_path_err_threshold within this
>    time frame, i.e. san_path_err_threshold_window. If so, we will not
>    reinstate the path until san_path_err_recovery_time.
>     
>    3. san_path_err_recovery_time:
>            If set to a value greater than 0, multipathd will make sure that
>    when the path failures have exceeded san_path_err_threshold within
>    san_path_err_threshold_window, the path will be placed in the failed
>    state for the san_path_err_recovery_time duration. Once
>    san_path_err_recovery_time has elapsed, we will reinstate the failed
>    path.
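>     
>    For example, these could be set in the defaults section of
>    multipath.conf like this (the values are only illustrative):
>     
>    defaults {
>            san_path_err_threshold        6
>            san_path_err_threshold_window 600
>            san_path_err_recovery_time    3600
>    }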
>     
>    Regards,
>    Muneendra.
>     
>    -----Original Message-----
>    From: Muneendra Kumar M
>    Sent: Wednesday, January 04, 2017 6:56 PM
>    To: 'Benjamin Marzinski' <bmarzins at redhat.com>
>    Cc: dm-devel at redhat.com
>    Subject: RE: [dm-devel] deterministic io throughput in multipath
>     
>    Hi Ben,
>    Thanks for the information.
>     
>    Regards,
>    Muneendra.
>     
>    -----Original Message-----
>    From: Benjamin Marzinski [[1]mailto:bmarzins at redhat.com]
>    Sent: Tuesday, January 03, 2017 10:42 PM
>    To: Muneendra Kumar M <[2]mmandala at Brocade.com>
>    Cc: [3]dm-devel at redhat.com
>    Subject: Re: [dm-devel] deterministic io throughput in multipath
>     
>    On Mon, Dec 26, 2016 at 09:42:48AM +0000, Muneendra Kumar M wrote:
>    > Hi Ben,
>    >
>    > If there are two paths on a dm-1 say sda and sdb as below.
>    >
>    > #  multipath -ll
>    >        mpathd (3600110d001ee7f0102050001cc0b6751) dm-1 SANBlaze,VLUN
>    MyLun
>    >        size=8.0M features='0' hwhandler='0' wp=rw
>    >        `-+- policy='round-robin 0' prio=50 status=active
>    >          |- 8:0:1:0  sda 8:48 active ready  running
>    >          `- 9:0:1:0  sdb 8:64 active ready  running         
>    >
>    > And on sda I am seeing a lot of errors, due to which the sda path
>    > keeps fluctuating between the failed and active states.
>    >
>    > My requirement is something like this: if sda fails more than 5
>    > times within an hour, then I want to keep sda in the failed state
>    > for a few hours (3 hrs),
>    >
>    > and the data should travel only through the sdb path.
>    > Will this be possible with the below parameters?
>     
>    No. delay_watch_checks sets the number of path checks for which multipathd
>    watches a path that has recently come back from the failed state. If the
>    path fails again within this window, multipathd delays it.  This means
>    that the delay is always triggered by two failures within the time limit.
>    It's possible to adapt this to count numbers of failures, and act after a
>    certain number within a certain timeframe, but it would take a bit more
>    work.
>     
>    delay_wait_checks doesn't guarantee that it will delay for any set length
>    of time.  Instead, it sets the number of consecutive successful path
>    checks that must occur before the path is usable again. You could set this
>    for 3 hours of path checks, but if a check failed during this time, you
>    would restart the 3 hours over again.
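>     
>    For example, assuming the default 5 second polling interval, covering
>    3 hours of path checks would mean roughly delay_wait_checks 2160
>    (3 * 3600 / 5).  That number is only illustrative, not a
>    recommendation.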
>     
>    -Ben
>     
>    > Can you just let me know what values I should set for delay_watch_checks
>    > and delay_wait_checks?
>    >
>    > Regards,
>    > Muneendra.
>    >
>    >
>    >
>    > -----Original Message-----
>    > From: Muneendra Kumar M
>    > Sent: Thursday, December 22, 2016 11:10 AM
>    > To: 'Benjamin Marzinski' <[4]bmarzins at redhat.com>
>    > Cc: [5]dm-devel at redhat.com
>    > Subject: RE: [dm-devel] deterministic io throughput in multipath
>    >
>    > Hi Ben,
>    >
>    > Thanks for the reply.
>    > I will look into these parameters, do the internal testing, and let
>    > you know the results.
>    >
>    > Regards,
>    > Muneendra.
>    >
>    > -----Original Message-----
>    > From: Benjamin Marzinski [[6]mailto:bmarzins at redhat.com]
>    > Sent: Wednesday, December 21, 2016 9:40 PM
>    > To: Muneendra Kumar M <[7]mmandala at Brocade.com>
>    > Cc: [8]dm-devel at redhat.com
>    > Subject: Re: [dm-devel] deterministic io throughput in multipath
>    >
>    > Have you looked into the delay_watch_checks and delay_wait_checks
>    configuration parameters?  The idea behind them is to minimize the use of
>    paths that are intermittently failing.
>    >
>    > -Ben
>    >
>    > On Mon, Dec 19, 2016 at 11:50:36AM +0000, Muneendra Kumar M wrote:
>    > >    Customers using Linux hosts (mostly RHEL hosts) with a SAN network
>    > >    for block storage complain that the Linux multipath stack is not
>    > >    resilient enough to handle non-deterministic storage network
>    > >    behaviors. This has caused many customers to move away to non-Linux
>    > >    based servers. The intent of the patch below and the prevailing
>    > >    issues are given below. With the below design we are seeing the
>    > >    Linux multipath stack become resilient to such network issues. We
>    > >    hope that getting this patch accepted will help increase adoption of
>    > >    Linux servers that use a SAN network.
>    > >
>    > >    I have already sent the design details to the community in a
>    > >    different mail chain, and the details are available in the link
>    > >    below.
>    > >
>    > >    [1][9]https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
>    > >
>    > >    Can you please go through the design and send the comments to us.
>    > >
>    > >     
>    > >
>    > >    Regards,
>    > >
>    > >    Muneendra.
>    > >
>    > >     
>    > >
>    > >     
>    > >
>    > > References
>    > >
>    > >    Visible links
>    > >    1. https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
>    >
>    > > --
>    > > dm-devel mailing list
>    > > [11]dm-devel at redhat.com
>    > > [12]https://www.redhat.com/mailman/listinfo/dm-devel
>     
> 
> References
> 
>    Visible links
>    1. mailto:bmarzins at redhat.com
>    2. mailto:mmandala at brocade.com
>    3. mailto:dm-devel at redhat.com
>    4. mailto:bmarzins at redhat.com
>    5. mailto:dm-devel at redhat.com
>    6. mailto:bmarzins at redhat.com
>    7. mailto:mmandala at brocade.com
>    8. mailto:dm-devel at redhat.com
>    9. https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
>   10. https://www.redhat.com/archives/dm-devel/2016-December/msg00122.html
>   11. mailto:dm-devel at redhat.com
>   12. https://www.redhat.com/mailman/listinfo/dm-devel





