<font size=2 face="Times New Roman">Hello Hannes:</font>
<br>
<br><font size=2 face="Times New Roman">Since the received uevent messages
store in the queue, and the speed uevent messages received is </font>
<br><font size=2 face="Times New Roman">much faster than the speed uevent
messages processed, so, we can merge these queued uevent </font>
<br><font size=2 face="Times New Roman">message first, and then process
it in the next step. Of course, some paths’ uevent messages of </font>
<br><font size=2 face="Times New Roman">multipath device may not be received
yet, but we do not need to wait for it, since we can deal with </font>
<br><font size=2 face="Times New Roman">the left paths in the original
way when we received uevent messages of these paths later. </font>

I think we can merge most of the uevent messages in this way, and
reduce most of the unnecessary DM change uevent messages generated
during creation/deletion of multipath devices.
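
To make this a little more concrete, a minimal, self-contained sketch of
such a merging step could look like the code below. All names and fields
(struct uevent, wwid, action, merged, merge_and_process) are just
assumptions for illustration, not existing multipath-tools code:

/* Sketch only: fold queued uevents that belong to the same multipath
 * device (same wwid) and carry the same action into one group, so the
 * whole group triggers a single DM reload instead of one per path. */
#include <stdio.h>
#include <string.h>

struct uevent {
        char action[8];     /* "add" or "remove"                  */
        char wwid[64];      /* identity of the multipath device   */
        char devname[16];   /* path device name, e.g. "sdb"       */
        int  merged;        /* already folded into an earlier one */
};

static void merge_and_process(struct uevent *q, int n)
{
        for (int i = 0; i < n; i++) {
                if (q[i].merged)
                        continue;
                printf("%s %s:", q[i].action, q[i].wwid);
                for (int j = i; j < n; j++) {
                        if (!q[j].merged &&
                            !strcmp(q[j].wwid, q[i].wwid) &&
                            !strcmp(q[j].action, q[i].action)) {
                                q[j].merged = 1;
                                printf(" %s", q[j].devname);
                        }
                }
                printf(" -> one DM reload, one change uevent\n");
        }
}

int main(void)
{
        struct uevent q[] = {
                { "add", "36001405abcdef", "sdb", 0 },
                { "add", "36001405abcdef", "sdc", 0 },
                { "add", "36001405abcdef", "sdd", 0 },
                { "add", "36001405abcdef", "sde", 0 },
        };
        merge_and_process(q, 4);
        return 0;
}

Here the four queued "add" uevents for paths of the same device collapse
into a single reload; paths whose uevents have not arrived yet are simply
absent from the queue and are handled later as usual.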

The method you mentioned seems a little complex to me, and it does not
reduce the DM addition/change/deletion uevent messages, which consume a
lot of system resources.
<br><font size=2 face="Times New Roman">Sincerely</font>
<br><font size=2 face="Times New Roman">Tang</font>

On 11/16/2016 02:46 AM, tang.junhui@zte.com.cn wrote:
> In these scenarios, multipath processing efficiency is very low:
> 1) there are many paths in the multipath devices;
> 2) devices are added/deleted when an iSCSI session logs in/out or an
>    FC link goes up/down.
>
> Multipath processing is so slow that it does not satisfy some
> applications; for example, OpenStack often times out waiting for the
> creation of multipath devices.
>
> I think the reason for the low processing efficiency is that multipath
> processes uevent messages one by one, and each one also produces a new
> DM addition/change/deletion uevent message, increasing system resource
> consumption; actually, most of these uevent messages are pointless.
>
> So, can we find a way to reduce these uevent messages and improve
> multipath processing efficiency? Personally, I think we can merge
> these uevent messages before processing them. For example, during
> 4 iSCSI session login procedures, we can wait a moment until the
> addition uevent messages of all 4 paths have arrived, and then merge
> these uevent messages into one and deal with it at once. The way to
> deal with deletion uevent messages is the same.
>
> What do you think? Any points of view are welcome.

The problem is that we don't know beforehand how many uevents we
should be waiting for.
And even if we did, there would still be a chance of one or several of
these uevents failing to set up the device, so we would be waiting
forever here.

The one possible way out would be to modify the way we're handling
events internally. Event processing really consists of several steps:
1) Get information about the attached device (pathinfo() and friends)
2) Store the information in our pathvec
3) Identify and update the mpp structure with the new pathvec entries
Currently, we're processing each step for every uevent.
As we have only a single lock protecting both pathvec and mppvec, we
have to take the lock prior to step 2 and release it after step 3.
So while we could receive events in parallel, we can only process them
one by one, with every event having to re-do step 3.
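
(For illustration only, the current per-uevent flow is roughly the
sketch below; vecs_lock, store_path and the other names are stand-ins,
not actual multipathd symbols:)

/* Sketch: one mutex covers both pathvec and mppvec, so steps 2 and 3
 * are serialized and step 3 is redone for every single uevent. */
#include <pthread.h>

struct uevent;   /* opaque here */

static pthread_mutex_t vecs_lock = PTHREAD_MUTEX_INITIALIZER;

static void get_path_info(struct uevent *uev) { (void)uev; }  /* step 1 */
static void store_path(struct uevent *uev)    { (void)uev; }  /* step 2 */
static void update_mpp(struct uevent *uev)    { (void)uev; }  /* step 3 */

void handle_one_uevent(struct uevent *uev)
{
        get_path_info(uev);                /* no lock needed            */

        pthread_mutex_lock(&vecs_lock);    /* one lock for both vectors */
        store_path(uev);                   /* update pathvec            */
        update_mpp(uev);                   /* update/reload the map     */
        pthread_mutex_unlock(&vecs_lock);  /* repeated per uevent       */
}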

The idea would be to split this single lock into a pathvec lock and an
mppvec lock, and create a separate thread for updating mppvec.

Then event processing could be streamlined by having the uevent thread
add the new device to the pathvec and notify the mppvec thread.
This thread could then check if a pathvec update is in progress, and
only start once that pathvec handling has finished.
With this we would avoid having to issue several similar mppvec updates,
and the entire handling would be far smoother.
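
(Purely as an illustration of this idea, not existing multipathd code, a
sketch with a separate pathvec lock, an mppvec lock and a dedicated
updater thread might look like this; all names here, pathvec_dirty,
mpp_cond, update_all_mpps, are invented:)

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t pathvec_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mppvec_lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  mpp_cond     = PTHREAD_COND_INITIALIZER;
static bool pathvec_dirty;   /* new paths queued for an mppvec update */

static void update_all_mpps(void)
{
        /* walk pathvec once and refresh every affected map: one DM
         * reload covering all paths added since the last run */
}

/* uevent thread: store the new path under pathvec_lock, mark the
 * pathvec dirty and wake the mppvec updater */
void uevent_add_path(void)
{
        pthread_mutex_lock(&pathvec_lock);
        /* ... pathinfo() and store the path in pathvec ... */
        pathvec_dirty = true;
        pthread_cond_signal(&mpp_cond);
        pthread_mutex_unlock(&pathvec_lock);
}

/* mppvec updater thread: wait until the pathvec has settled, then do
 * one combined mppvec update instead of one per uevent */
void *mppvec_updater(void *arg)
{
        (void)arg;
        for (;;) {
                pthread_mutex_lock(&pathvec_lock);
                while (!pathvec_dirty)
                        pthread_cond_wait(&mpp_cond, &pathvec_lock);
                pathvec_dirty = false;
                pthread_mutex_unlock(&pathvec_lock);

                pthread_mutex_lock(&mppvec_lock);
                update_all_mpps();
                pthread_mutex_unlock(&mppvec_lock);
        }
        return NULL;
}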

Cheers,

Hannes
--
Dr. Hannes Reinecke                Teamlead Storage & Networking
hare@suse.de                       +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel