[dm-devel] multipath-tools: prevent unnecessary reinstate,

Zhangguanghui zhang.guanghui at h3c.com
Thu Mar 3 08:19:26 UTC 2016


Thanks a lot, the problem has been solved: the multipath-tools .deb build was missing the dm-mpath.rules file.

________________________________
zhangguanghui

From: zhangguanghui 10102 (CCPL)<mailto:zhang.guanghui at h3c.com>
Sent: 2016-03-03 09:51
To: Nalla, Ravikanth<mailto:ravikanth.nalla at hpe.com>; dm-devel-bounces at redhat.com<mailto:dm-devel-bounces at redhat.com>; dm-devel at redhat.com<mailto:dm-devel at redhat.com>
Subject: Re: RE: multipath-tools: prevent unnecessary reinstate,
Thanks a lot. The checkerloop thread of the multipathd daemon monitors all paths periodically via check_path.
As far as I know, each tur command runs at a 5-second interval by default, so a previously failed path may be automatically reinstated.
According to the log, it looks like an endless loop (reinstate_path and fail_path switch back and forth within 1 second),
but only in the case of prio "alua" with two path groups; the case of prio "const" with one path group behaves normally.
So I don't know what causes that.
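For reference, the 5-second checker interval mentioned above is governed by the polling_interval option in multipath.conf; a minimal sketch of a defaults section setting it explicitly (values here are illustrative, 5 is the usual default):

```
defaults {
    # seconds between path checker (tur) runs per path
    polling_interval 5
}
```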

________________________________
zhangguanghui

From: Nalla, Ravikanth<mailto:ravikanth.nalla at hpe.com>
Sent: 2016-03-02 21:05
To: zhangguanghui 10102 (CCPL)<mailto:zhang.guanghui at h3c.com>; dm-devel-bounces at redhat.com<mailto:dm-devel-bounces at redhat.com>; dm-devel at redhat.com<mailto:dm-devel at redhat.com>
Subject: RE: multipath-tools: prevent unnecessary reinstate,
1. When all paths fail, reinstate_path and fail_path frequently switch, so a previously failed path is automatically reinstated.
    Could someone tell me why that is? Moreover, dm_reinstate_path is not being called in multipathd.
dm_reinstate_path is indeed initiated by multipathd when the path's current status from the checker module is found to be PATH_UP or PATH_GHOST (online but path unusable) and differs from the previous status.

The checkerloop thread of the multipathd daemon monitors all paths periodically via check_path, where it initiates fail_path, reinstate_path, etc.
messages to device-mapper based on the current path status (PATH_DOWN, PATH_UP, PATH_GHOST, PATH_SHAKY, etc.).


-          As a first step, check_path calls path_offline to read the path status from sysfs. If the path is offline (PATH_DOWN) or PATH_SHAKY and this differs from the previous status, it sends a fail_path message to device-mapper. If the path is online (PATH_UP), it runs the configured checker (in this case tur) to ping the path and determine its new status.

o   If check_path finds from the tur checker module that the new path status is PATH_UP (tur command succeeds) or PATH_GHOST (LOGICAL UNIT NOT ACCESSIBLE, TARGET PORT IN STANDBY STATE) and it differs from the previous status, it sends a reinstate_path message to device-mapper.

o   If check_path finds from the tur checker module that the new path status is PATH_DOWN (tur command fails) or PATH_SHAKY (not used by the tur checker anyway) and it differs from the previous status, it sends a fail_path message to device-mapper.


In the situation below, it looks like the paths are unstable/flaky, resulting in multipathd issuing reinstate_path/fail_path on each tur command success or failure.

Mar 2 16:06:50 cvknode129 kernel: [68389.817587] sd 3:0:0:0: rejecting I/O to offline device
Mar 2 16:06:50 cvknode129 kernel: [68389.817598] device-mapper: multipath: Failing path 8:16. 3483
Mar 2 16:06:50 cvknode129 kernel: [68389.817601] CPU: 0 PID: 3 Comm: ksoftirqd/0 Tainted: G OE 4.1.0-generic #2
Mar 2 16:06:50 cvknode129 kernel: [68389.817602] Hardware name: HP ProLiant BL460c Gen8, BIOS I31 02/25/2012
Mar 2 16:06:50 cvknode129 kernel: [68389.817603] ffff880816673500 ffff88081be8bd08 ffffffff817e8924 0000000000000007
Mar 2 16:06:50 cvknode129 kernel: [68389.817605] ffff880816673528 ffff88081be8bd38 ffffffffc019c75e 00000000fffffffb
Mar 2 16:06:50 cvknode129 kernel: [68389.817607] ffff88003698b458 ffff880816673500 0000000000000010 ffff88081be8bd88
Mar 2 16:06:50 cvknode129 kernel: [68389.817609] Call Trace:
Mar 2 16:06:50 cvknode129 kernel: [68389.817612] [<ffffffff817e8924>] dump_stack+0x45/0x57
Mar 2 16:06:50 cvknode129 kernel: [68389.817622] [<ffffffffc019c75e>] fail_path+0x7e/0xf0 [dm_multipath]
Mar 2 16:06:50 cvknode129 kernel: [68389.817624] [<ffffffffc019d60e>] multipath_end_io+0x5e/0x190 [dm_multipath]
Mar 2 16:06:50 cvknode129 kernel: [68389.817627] [<ffffffff81020e19>] ? sched_clock+0x9/0x10
Mar 2 16:06:50 cvknode129 kernel: [68389.817629] [<ffffffff81666cf3>] dm_softirq_done+0xd3/0x250
Mar 2 16:06:50 cvknode129 kernel: [68389.817642] [<ffffffff81015686>] ? __switch_to+0x1e6/0x580
Mar 2 16:06:50 cvknode129 kernel: [68389.817645] [<ffffffff81397d9b>] blk_done_softirq+0x7b/0x90
Mar 2 16:06:50 cvknode129 kernel: [68389.817647] [<ffffffff810820fe>] __do_softirq+0xde/0x2d0
Mar 2 16:06:50 cvknode129 kernel: [68389.817649] [<ffffffff81082310>] run_ksoftirqd+0x20/0x60
Mar 2 16:06:50 cvknode129 kernel: [68389.817650] [<ffffffff810a0fd6>] smpboot_thread_fn+0x116/0x170
Mar 2 16:06:50 cvknode129 kernel: [68389.817652] [<ffffffff810a0ec0>] ? sort_range+0x30/0x30
Mar 2 16:06:50 cvknode129 kernel: [68389.817653] [<ffffffff8109db59>] kthread+0xc9/0xe0
Mar 2 16:06:50 cvknode129 kernel: [68389.817655] [<ffffffff8109da90>] ? flush_kthread_worker+0x90/0x90
Mar 2 16:06:50 cvknode129 kernel: [68389.817657] [<ffffffff817f0ca2>] ret_from_fork+0x42/0x70
Mar 2 16:06:50 cvknode129 kernel: [68389.817658] [<ffffffff8109da90>] ? flush_kthread_worker+0x90/0x90
Mar 2 16:06:50 cvknode129 kernel: [68389.843090] CPU: 3 PID: 43112 Comm: multipath Tainted: G OE 4.1.0-generic #2
Mar 2 16:06:50 cvknode129 kernel: [68389.843092] Hardware name: HP ProLiant BL460c Gen8, BIOS I31 02/25/2012
Mar 2 16:06:50 cvknode129 kernel: [68389.843093] ffff880816673528 ffff8807efcdfb78 ffffffff817e8924 ffff88081952e580
Mar 2 16:06:50 cvknode129 kernel: [68389.843095] 0000000000000000 ffff8807efcdfbc8 ffffffffc019e20e 0000000000000000
Mar 2 16:06:50 cvknode129 kernel: [68389.843097] ffff880036a81440 0000000000000008 ffff88081a288e78 ffffc90006d49040
Mar 2 16:06:50 cvknode129 kernel: [68389.843098] Call Trace:
Mar 2 16:06:50 cvknode129 kernel: [68389.843101] [<ffffffff817e8924>] dump_stack+0x45/0x57
Mar 2 16:06:50 cvknode129 kernel: [68389.843106] [<ffffffffc019e20e>] reinstate_path+0xae/0x1a0 [dm_multipath]
Mar 2 16:06:50 cvknode129 kernel: [68389.843108] [<ffffffffc019e160>] ? multipath_map+0x20/0x20 [dm_multipath]
Mar 2 16:06:50 cvknode129 kernel: [68389.843113] [<ffffffffc019d9ce>] multipath_message+0x18e/0x360 [dm_multipath]
Mar 2 16:06:50 cvknode129 kernel: [68389.843115] [<ffffffff8166dbc5>] target_message+0x255/0x340
Mar 2 16:06:50 cvknode129 kernel: [68389.843116] [<ffffffff8166d970>] ? __dev_status+0x150/0x150
Mar 2 16:06:50 cvknode129 kernel: [68389.843118] [<ffffffff8166ef7a>] ctl_ioctl+0x24a/0x520
Mar 2 16:06:50 cvknode129 kernel: [68389.843120] [<ffffffff8166f263>] dm_ctl_ioctl+0x13/0x20
Mar 2 16:06:50 cvknode129 kernel: [68389.843122] [<ffffffff812142d6>] do_vfs_ioctl+0x86/0x530
Mar 2 16:06:50 cvknode129 kernel: [68389.843124] [<ffffffff8106a10f>] ? __do_page_fault+0x1af/0x470
Mar 2 16:06:50 cvknode129 kernel: [68389.843126] [<ffffffff81214811>] SyS_ioctl+0x91/0xb0
Mar 2 16:06:50 cvknode129 kernel: [68389.843128] [<ffffffff817f0872>] system_call_fastpath+0x16/0x75
Mar 2 16:06:50 cvknode129 kernel: [68389.843255] sd 5:0:0:0: rejecting I/O to offline device
Mar 2 16:06:50 cvknode129 kernel: [68389.843266] device-mapper: multipath: Failing path 8:48. 3514



device {
        vendor "MacroSAN"
        product "LU"
        path_grouping_policy "group_by_prio"
        path_checker "tur"
        features "1 queue_if_no_path"
        hardware_handler "0"
        prio "alua"
        failback 15
        rr_weight "priorities"
        no_path_retry 30
        rr_min_io 1000
}

3600b34223c09334d0467dfcd5d0000da dm-0 MacroSAN,LU
size=50G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 5:0:0:0 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 3:0:0:0 sdb 8:16 active ready running

Thanks!

Best regards!

________________________________
zhangguanghui
-------------------------------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from H3C, which is
intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!
