On Tue, Jan 25, 2011 at 10:53 AM, Miklos Vajna <vmiklos@frugalware.org> wrote:
Hi,

I got the opportunity to work on dm-mirror as a final year project at
ULX, the Hungarian distributor of Red Hat.

To get my feet wet, I created two small patches:

1) dm-mirror: allow setting ios count to 0

Always read from the default_mirror in that case.

2) dm-mirror: allow setting the default mirror

These can help when one data leg of a mirror is a remote (iSCSI) one,
so the default round-robin (RR) approach is not optimal for reading.
(One may set the ios count to 0 and set the default mirror to the
local one, which speeds up reads.)
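
To make the combined effect of the two patches concrete, here is a
minimal, self-contained C sketch of a read-selection routine. The
structure and field names (mirror_set, ios_count, default_mirror,
choose_read_mirror) are simplified illustrations for this mail, not
the actual dm-raid1 code:

struct mirror {
	const char *dev_name;       /* backing device of this leg */
};

struct mirror_set {
	struct mirror *mirrors;     /* all legs of the mirror */
	unsigned nr_mirrors;
	unsigned ios_count;         /* reads sent to one leg before switching; patch 1 allows 0 */
	unsigned read_pos;          /* leg currently used by round-robin */
	unsigned reads_left;        /* reads remaining on the current leg */
	unsigned default_mirror;    /* index of the preferred (local) leg; patch 2 makes it settable */
};

/*
 * Pick the leg to serve a read.  With ios_count == 0 the round-robin
 * rotation is disabled and every read goes to the default mirror,
 * which is what speeds up the local-disk + iSCSI setup described above.
 */
struct mirror *choose_read_mirror(struct mirror_set *ms)
{
	if (ms->ios_count == 0)
		return &ms->mirrors[ms->default_mirror];

	if (ms->reads_left == 0) {
		ms->read_pos = (ms->read_pos + 1) % ms->nr_mirrors;
		ms->reads_left = ms->ios_count;
	}
	ms->reads_left--;
	return &ms->mirrors[ms->read_pos];
}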

I do not yet have permission to send those patches (I do this work in
university time, so the copyright is not mine), but I hope to be able
to do so, to get them reviewed.

So the final year project's aim is to improve "the fault tolerance and
performance of dm-mirror". We (my mentors and I) have two ideas in
that area, not counting the above patches:

1) Make the currently hardwired RR read approach modular, which would
allow implementing, for example, a weighted RR algorithm (useful when
one disk is twice as fast as the other one, etc.); see the first
sketch after this list.

2) From our experiments, it seems that when dm-mirror loses one of its
legs and there is a write to the mirror, the volume gets converted to
a linear one. It would be nice (I am not sure how easy) to use the
mirror log to mark the dirty blocks, so that the volume would not be
converted to a linear one: once the other leg is back, it could be
updated based on the mirror log; see the second sketch after this
list.
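
To illustrate idea 1), here is a small, self-contained sketch of the
kind of weighted RR read policy that a modular interface could let one
plug in. The names (struct leg, weight, choose_weighted) are made up
for this example and are not an existing dm interface:

#include <stddef.h>

struct leg {
	const char *dev_name;
	unsigned weight;    /* e.g. 2 for a disk that is twice as fast */
	unsigned credit;    /* reads still owed to this leg in the current round */
};

/*
 * Weighted round-robin: each leg is handed 'weight' reads per round,
 * so a leg with weight 2 serves twice as many reads as a leg with
 * weight 1.  Assumes every weight is at least 1.
 */
struct leg *choose_weighted(struct leg *legs, size_t nr_legs, size_t *pos)
{
	size_t i;

	for (;;) {
		for (i = 0; i < nr_legs; i++) {
			struct leg *l = &legs[(*pos + i) % nr_legs];

			if (l->credit > 0) {
				l->credit--;
				*pos = (*pos + i) % nr_legs;
				return l;
			}
		}

		/* Every leg used its credit for this round: start a new one. */
		for (i = 0; i < nr_legs; i++)
			legs[i].credit = legs[i].weight;
	}
}

With two legs weighted 2 and 1, a long run of reads is split roughly
2:1 between them; the current behaviour is just the special case where
every weight is 1.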
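
And a sketch of idea 2): a per-region dirty bitmap that lets the
mirror stay a mirror while one leg is away. The names (dirty_log,
mark_region_dirty, resync_returning_leg) are hypothetical and stand in
for whatever the real mirror log interface would provide:

#include <stdint.h>

#define NR_REGIONS 1024u                 /* device size / region size, illustrative */

struct dirty_log {
	uint8_t bitmap[NR_REGIONS / 8];  /* one bit per region */
};

/* Record that a write hit 'region' while one leg was missing. */
void mark_region_dirty(struct dirty_log *log, unsigned region)
{
	log->bitmap[region / 8] |= 1u << (region % 8);
}

int region_is_dirty(const struct dirty_log *log, unsigned region)
{
	return log->bitmap[region / 8] & (1u << (region % 8));
}

/*
 * When the failed leg comes back, only the regions written while it
 * was away have to be copied from the healthy leg; everything else is
 * still in sync, so the mirror never needs to degrade to a linear
 * volume.
 */
void resync_returning_leg(struct dirty_log *log, void (*copy_region)(unsigned region))
{
	unsigned r;

	for (r = 0; r < NR_REGIONS; r++) {
		if (region_is_dirty(log, r)) {
			copy_region(r);
			log->bitmap[r / 8] &= ~(1u << (r % 8));
		}
	}
}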

The question: what do you (who have much more experience with
dm-mirror than I do) think? Are these reasonable goals? If not, what
would you improve/change/add/remove in the above list?

Thanks,

Miklos

PS: I'm not subscribed, please keep me in CC.

--
dm-devel mailing list
dm-devel@redhat.com
<a href="https://www.redhat.com/mailman/listinfo/dm-devel" target="_blank">https://www.redhat.com/mailman/listinfo/dm-devel</a><br></blockquote></div><br><br>hello all,<br>There is one doubt regarding the dm-raid1. As far as i know, in dm-raid1 the data is written parallelly on all the mirrors of mirrorset and if any of the mirror fails to write the data then dm-mirror adds this mirror to fail list by increasing the error count in "fail mirror" function in dm-raid1. <br>
Actually my doubt is where this error count is decremented? i.e after kcpyd or before and where exactly this error count is decremented?<br><br><br>Regards,<br>Nishant.<br>