<div>Hi,</div>
<div> I patched the r19 version of multisnap and LVM 2.02.64.</div>
<div> But when I run a command like "lvcreate -s -n test_lv_ss1 /dev/test_vg/test_lv", it fails; the system seems to handle the command as if there were no shared store, and prints an error message like "no extents".</div>
<div> I created the origin LV and the shared store just as Daire Byrne did, and the 'lvs' command shows both test_lv and 'test_lv--shared'. How do I create the snapshot?</div>
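<div> For reference, the full sequence I ran was roughly the one below (following your steps; only the disk name and sizes may differ on my system), and it is the last lvcreate that fails:</div>
<div> # pvcreate /dev/sdb</div>
<div> # vgcreate test_vg /dev/sdb</div>
<div> # lvcreate -L 1TB test_vg -n test_lv</div>
<div> # lvcreate -L 2TB -c 256 --sharedstore mikulas -s /dev/test_vg/test_lv</div>
<div> # lvcreate -s -n test_lv_ss1 /dev/test_vg/test_lv    (this last command fails with "no extents")</div>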
<div> Thank you very much.</div>
<div> </div>
<div>Best regards,</div>
<div>Busby</div>
<div> </div>
<div> <br><br></div>
<div class="gmail_quote">2010/4/16 Daire Byrne <span dir="ltr"><<a href="mailto:daire.byrne@gmail.com">daire.byrne@gmail.com</a>></span><br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">Hi,<br><br>I had some spare RAID hardware lying around and thought I'd give the<br>new shared snapshots code a whirl. Maybe the results are of interest<br>
so I'm posting them here. I used the "r18" version of the code with<br>2.6.33 and patched lvm2-2.02.54.<br><br>Steps to create test environment:<br><br> # pvcreate /dev/sdb<br> # vgcreate test_vg /dev/sdb<br>
# lvcreate -L 1TB test_vg -n test_lv<br> # mkfs.xfs /dev/test_vg/test_lv<br> # mount /dev/test_vg/test_lv /mnt/images/<br><br> # lvcreate -L 2TB -c 256 --sharedstore mikulas -s /dev/test_vg/test_lv<br> # lvcreate -s -n test_lv_ss1 /dev/test_vg/test_lv<br>
# dd if=/dev/zero of=/mnt/images/dd-file bs=1M count=102400<br> # dd of=/dev/null if=/mnt/images/dd-file bs=1M count=102400<br><br>Raw speeds of the "test_lv" xfs formatted volume without any shared<br>snapshot space allocated were 308 MB/s writes and 214 MB/s reads. I<br>
have done no further tuning.<br><br>
No. snaps | type    | chunk | writes  | reads<br>
----------|---------|-------|---------|---------<br>
 0        | mikulas | 4k    | 225MB/s | 127MB/s<br>
 1        | mikulas | 4k    | 18MB/s  | 128MB/s<br>
 2        | mikulas | 4k    | 11MB/s  | 128MB/s<br>
 3        | mikulas | 4k    | 11MB/s  | 127MB/s<br>
 4        | mikulas | 4k    | 10MB/s  | 127MB/s<br>
10        | mikulas | 4k    | 9MB/s   | 127MB/s<br>
<br>
 0        | mikulas | 256k  | 242MB/s | 129MB/s<br>
 1        | mikulas | 256k  | 38MB/s  | 130MB/s<br>
 2        | mikulas | 256k  | 37MB/s  | 131MB/s<br>
 3        | mikulas | 256k  | 36MB/s  | 132MB/s<br>
 4        | mikulas | 256k  | 33MB/s  | 129MB/s<br>
10        | mikulas | 256k  | 31MB/s  | 128MB/s<br>
<br>
 1        | normal  | 256k  | 45MB/s  | 127MB/s<br>
 2        | normal  | 256k  | 18MB/s  | 128MB/s<br>
 3        | normal  | 256k  | 11MB/s  | 127MB/s<br>
 4        | normal  | 256k  | 8MB/s   | 124MB/s<br>
10        | normal  | 256k  | 3MB/s   | 126MB/s<br>
<br>I wanted to test the "daniel" store but I got "multisnapshot:<br>Unsupported chunk size" with everything except a chunksize of "16k".<br>Even then the store was created but reported that it was 100% full.<br>
Nevertheless I created a few snapshots but performance didn't seem<br>much different. I have not included the results as I could only use a<br>chunksize of 16k. Also when removing the snapshots I got some kmalloc<br>nastiness (needed to reboot). I think the daniel store is a bit<br>
broken.<br><br>Observations/questions:<br><br> (1) why does performance drop when you create the shared snapshot<br>space but not create any actual snapshots and there is no COW being<br>done? The kmultisnapd eats CPU...<br>
(2) similarly why does the read performance change at all<br>(214->127MB/s). There is no COW overhead. This is the case for both<br>the old snapshots and the new shared ones.<br> (3) when writing why does it write data to the origin quickly in<br>
short bursts (buffer?) but then effectively stall while the COW<br>read/write occurs? Why can you not write to the filesystem<br>asynchronously while the COW is happening? This is the same for the<br>normal/old snapshots too so I guess it is just an inherent limitation<br>
to ensure consistency?<br> (4) why is there a small (but appreciable) drop in writes as the<br>number of snapshots increase? It should only have to do a single COW<br>in all cases no?<br> (5) It takes a really long time (hours) to create a few TB worth of<br>
shared snapshot space when using 4k chunks. Seems much better with<br>256k. The old snapshots create almost instantly.<br><br>All in all it looks very interesting and is currently the best way of<br>implementing shared snapshots for filesystems which don't have native<br>
support for it (e.g. btrfs). I found the zumastor stuff to be rather<br>slow, buggy and difficult to operate in comparison.<br><br>The performance seems to be on par with the normal/old snapshots<br>and much, much better once you increase the number of snapshots. If<br>
only the snapshot performance could be better overall (old and multi)<br>- perhaps there are some further tweaks and tunings I could do?<br><br>Regards,<br><br>Daire<br><font color="#888888"><br>--<br>dm-devel mailing list<br>
<a href="mailto:dm-devel@redhat.com">dm-devel@redhat.com</a><br><a href="https://www.redhat.com/mailman/listinfo/dm-devel" target="_blank">https://www.redhat.com/mailman/listinfo/dm-devel</a><br></font></blockquote></div>
<br>