[Linux-cluster] EFI in CLVM

Paras pradhan pradhanparas at gmail.com
Fri Aug 19 14:56:07 UTC 2011


On Fri, Aug 19, 2011 at 1:04 AM, Jonathan Barber
<jonathan.barber at gmail.com> wrote:
> On 18 August 2011 18:41, Paras pradhan <pradhanparas at gmail.com> wrote:
>> On Thu, Aug 18, 2011 at 10:13 AM, Jonathan Barber
>> <jonathan.barber at gmail.com> wrote:
>>>
>>> On 13 August 2011 04:24, Paras pradhan <pradhanparas at gmail.com> wrote:
>>> > Alan,
>>> > Its a FC SAN.
>
> [snip]
>
>>> > If I don't make an entire LUN a PV, I think I would then need
>>> > partitions. Am I right? And do you think this will reduce the speed penalty?
>
> [snip]
>
>>> You can also just not use any partitions/LVM and write the filesystem
>>> directly to the block device... But I'd just stick with using LVM.
>>>
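As a concrete sketch of that whole-LUN approach, with no partition table
(the device name below matches the one later in this thread, but the VG/LV
names are assumed, not taken from any real setup):

   # Put the PV directly on the multipath device, no partitions
   pvcreate /dev/mapper/mpath13
   # Make the VG cluster-aware so CLVM coordinates metadata changes
   vgcreate --clustered y vg_san /dev/mapper/mpath13
   lvcreate -n lv_data -l 100%FREE vg_san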
>>
>>
>> Here is what I have noticed, though I should have run a few more tests.
>> iozone output with partitions (test size is 100 MB):
>> -
>> "Output is in Kbytes/sec"
>> "  Initial write "  265074.94
>> "        Rewrite "  909962.61
>> "           Read " 1872247.78
>> "        Re-read " 1905471.81
>> "   Reverse Read " 1316265.03
>> "    Stride read " 1448626.44
>> "    Random read " 1119532.25
>> " Mixed workload "  922532.31
>> "   Random write "  749795.80
>> --
>>
>> Without partitions:
>> "Output is in Kbytes/sec"
>> "  Initial write "  376417.97
>> "        Rewrite "  870409.73
>> "           Read " 1953878.50
>> "        Re-read " 1984553.84
>> "   Reverse Read " 1353943.00
>> "    Stride read " 1469878.76
>> "    Random read " 1432870.66
>> " Mixed workload " 1328300.78
>> "   Random write "  790309.01
>> ---
>
> I'm not very familiar with iozone, but if you're only reading /
> writing 100M, then probably all you're measuring is the speed of the
> Linux buffer cache. You should increase the amount of data to greater
> than the RAM available to the system. Also, you should repeat these
> runs multiple times and at a minimum take an average (and calculate
> the standard deviation) of each metric to make sure you aren't getting
> unusually good/bad performance. You can then compare the results using
> a paired T-test to see if the difference is statistically significant.
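Something like the following, I take it (a sketch only: the file size,
record size, mount point, and run count are illustrative, and it assumes
the node has well under 4 GB of RAM so the cache can't hold the file):

   # Repeat a large write/read/random-I/O test ten times per configuration
   for i in $(seq 1 10); do
       iozone -s 4g -r 64k -i 0 -i 1 -i 2 \
           -f /mnt/gfs/iozone.tmp >> iozone-partitioned.log
   done

   # Given one throughput number per line (one per run) in a file:
   awk '{s+=$1; ss+=$1*$1; n++}
        END {m=s/n; printf "mean=%.1f sd=%.1f\n", m, sqrt(ss/n - m*m)}' \
       initial-write.txt

The two sets of per-run numbers could then go into a paired t-test (e.g.
scipy.stats.ttest_rel) to check whether the difference is significant.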
>
> [snip]
>
>> I got this locking problem resolved after rebooting all the nodes.
>
> That sounds like the problem encountered in the link I sent before.
>
>> What I have noticed is that after adding a LUN, under /dev/mpath,
>> instead of the WWID, I was seeing:
>>
>> lrwxrwxrwx 1 root root 8 Aug 9 17:30 mpath13 -> ../dm-28
>>
>> After the reboot:
>>
>> lrwxrwxrwx 1 root root 7 Aug 15 17:53
>> 360060e8004770d000000770d000003e9 -> ../dm-9
>
> That's odd. Did you change your multipath configuration? It looks like
> you've set "user_friendly_names" to "no".


No. I have user_friendly_names set to "yes". I haven't changed anything
in multipath.conf; however, I can see friendly names in the multipath -ll
output:

-
mpath13 (360060e8004770d000000770d000003e9) dm-9 HITACHI,OPEN-V*4
[size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=2][active]
 \_ 5:0:1:7 sdl 8:176 [active][ready]
 \_ 6:0:1:7 sdu 65:64 [active][ready]
-
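The relevant multipath.conf stanza looks like this (trimmed down to the
one setting that matters here):

   defaults {
       user_friendly_names yes
   }

With user_friendly_names enabled, the mpathN-to-WWID mapping is kept in
the bindings file (commonly /var/lib/multipath/bindings or
/etc/multipath/bindings, depending on version), so a bindings file that
is missing or unwritable at boot can make new LUNs appear under their
WWIDs instead of friendly names.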

Paras.


>
>> Thanks
>> Paras.
> --
> Jonathan Barber <jonathan.barber at gmail.com>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>



