<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
Hi, thanks to all of you for your help.<br>
<br>
I have two RAID groups, and each RAID group has two LUNs. The LUNs of the
first RAID group are for a Linux cluster, and the LUNs of the second RAID
group are for a Windows cluster.<br>
<br>
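If it helps, I can also post how the RAID groups are laid out and which LUNs are
presented to which host. A rough sketch of the navicli commands I would use for
that (same SP address as the getlun call below; the exact syntax is from memory,
so treat it as an assumption):<br>
<br>
./navicli -h 192.168.2.11 getrg<br>
./navicli -h 192.168.2.11 storagegroup -list<br>
<br>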
I installed navicli; this is the output you requested:<br>
<br>
./navicli -h 192.168.2.11 getlun<br>
Statistics logging is disabled.<br>
Certain fields are not printed if statistics<br>
logging is not enabled.<br>
LOGICAL UNIT NUMBER 0<br>
Prefetch size (blocks) = 0<br>
Prefetch multiplier = 4<br>
Segment size (blocks) = 0<br>
Segment multiplier = 4<br>
Maximum prefetch (blocks) = 4096<br>
Prefetch Disable Size (blocks) = 4097<br>
Prefetch idle count = 40<br>
<br>
Variable length prefetching YES<br>
Prefetched data retained YES<br>
<br>
Read cache configured according to<br>
specified parameters.<br>
<br>
Total Hard Errors: 0<br>
Total Soft Errors: 0<br>
Total Queue Length: 0<br>
Name ql<br>
Minimum latency reads N/A<br>
<br>
RAID Type: RAID5<br>
RAIDGroup ID: 0<br>
State: Bound<br>
Stripe Crossing: 0<br>
Element Size: 128<br>
Current owner: SP B<br>
Offset: 0<br>
Auto-trespass: DISABLED<br>
Auto-assign: DISABLED<br>
Write cache: ENABLED<br>
Read cache: ENABLED<br>
Idle Threshold: 0<br>
Idle Delay Time: 20<br>
Write Aside Size: 2048<br>
Default Owner: SP B<br>
Rebuild Priority: ASAP<br>
Verify Priority: ASAP<br>
Prct Reads Forced Flushed: 0<br>
Prct Writes Forced Flushed: 0<br>
Prct Rebuilt: 100<br>
Prct Bound: 100<br>
LUN Capacity(Megabytes): 512000<br>
LUN Capacity(Blocks): 1048576000<br>
UID:
60:06:01:60:86:F0:16:00:BE:5A:CA:73:DB:8C:DA:11<br>
Is Private: NO<br>
Snapshots List: None<br>
MirrorView Name if any: Not Mirrored<br>
<br>
LOGICAL UNIT NUMBER 2<br>
Prefetch size (blocks) = 0<br>
Prefetch multiplier = 4<br>
Segment size (blocks) = 0<br>
Segment multiplier = 4<br>
Maximum prefetch (blocks) = 4096<br>
Prefetch Disable Size (blocks) = 4097<br>
Prefetch idle count = 40<br>
<br>
Variable length prefetching YES<br>
Prefetched data retained YES<br>
<br>
Read cache configured according to<br>
specified parameters.<br>
<br>
Total Hard Errors: 0<br>
Total Soft Errors: 0<br>
Total Queue Length: 0<br>
Name win<br>
Minimum latency reads N/A<br>
<br>
RAID Type: RAID5<br>
RAIDGroup ID: 1<br>
State: Bound<br>
Stripe Crossing: 0<br>
Element Size: 128<br>
Current owner: SP A<br>
Offset: 0<br>
Auto-trespass: DISABLED<br>
Auto-assign: DISABLED<br>
Write cache: ENABLED<br>
Read cache: ENABLED<br>
Idle Threshold: 0<br>
Idle Delay Time: 20<br>
Write Aside Size: 2048<br>
Default Owner: SP A<br>
Rebuild Priority: ASAP<br>
Verify Priority: ASAP<br>
Prct Reads Forced Flushed: 0<br>
Prct Writes Forced Flushed: 0<br>
Prct Rebuilt: 100<br>
Prct Bound: 100<br>
LUN Capacity(Megabytes): 819200<br>
LUN Capacity(Blocks): 1677721600<br>
UID:
60:06:01:60:86:F0:16:00:B6:2F:72:8B:DB:8C:DA:11<br>
Is Private: NO<br>
Snapshots List: None<br>
MirrorView Name if any: Not Mirrored<br>
<br>
LOGICAL UNIT NUMBER 3<br>
Prefetch size (blocks) = 0<br>
Prefetch multiplier = 4<br>
Segment size (blocks) = 0<br>
Segment multiplier = 4<br>
Maximum prefetch (blocks) = 4096<br>
Prefetch Disable Size (blocks) = 4097<br>
Prefetch idle count = 40<br>
<br>
Variable length prefetching YES<br>
Prefetched data retained YES<br>
<br>
Read cache configured according to<br>
specified parameters.<br>
<br>
Total Hard Errors: 0<br>
Total Soft Errors: 0<br>
Total Queue Length: 0<br>
Name quorum<br>
Minimum latency reads N/A<br>
<br>
RAID Type: RAID5<br>
RAIDGroup ID: 1<br>
State: Bound<br>
Stripe Crossing: 0<br>
Element Size: 128<br>
Current owner: SP A<br>
Offset: 0<br>
Auto-trespass: DISABLED<br>
Auto-assign: DISABLED<br>
Write cache: ENABLED<br>
Read cache: ENABLED<br>
Idle Threshold: 0<br>
Idle Delay Time: 20<br>
Write Aside Size: 2048<br>
Default Owner: SP A<br>
Rebuild Priority: ASAP<br>
Verify Priority: ASAP<br>
Prct Reads Forced Flushed: 0<br>
Prct Writes Forced Flushed: 0<br>
Prct Rebuilt: 100<br>
Prct Bound: 100<br>
LUN Capacity(Megabytes): 5120<br>
LUN Capacity(Blocks): 10485760<br>
UID:
60:06:01:60:86:F0:16:00:FC:D3:8B:91:DB:8C:DA:11<br>
Is Private: NO<br>
Snapshots List: None<br>
MirrorView Name if any: Not Mirrored<br>
<br>
LOGICAL UNIT NUMBER 1<br>
Prefetch size (blocks) = 0<br>
Prefetch multiplier = 4<br>
Segment size (blocks) = 0<br>
Segment multiplier = 4<br>
Maximum prefetch (blocks) = 4096<br>
Prefetch Disable Size (blocks) = 4097<br>
Prefetch idle count = 40<br>
<br>
Variable length prefetching YES<br>
Prefetched data retained YES<br>
<br>
Read cache configured according to<br>
specified parameters.<br>
<br>
Total Hard Errors: 0<br>
Total Soft Errors: 0<br>
Total Queue Length: 0<br>
Name mail<br>
Minimum latency reads N/A<br>
<br>
RAID Type: RAID5<br>
RAIDGroup ID: 0<br>
State: Bound<br>
Stripe Crossing: 0<br>
Element Size: 128<br>
Current owner: SP B<br>
Offset: 0<br>
Auto-trespass: DISABLED<br>
Auto-assign: DISABLED<br>
Write cache: ENABLED<br>
Read cache: ENABLED<br>
Idle Threshold: 0<br>
Idle Delay Time: 20<br>
Write Aside Size: 2048<br>
Default Owner: SP B<br>
Rebuild Priority: ASAP<br>
Verify Priority: ASAP<br>
Prct Reads Forced Flushed: 0<br>
Prct Writes Forced Flushed: 0<br>
Prct Rebuilt: 100<br>
Prct Bound: 100<br>
LUN Capacity(Megabytes): 153600<br>
LUN Capacity(Blocks): 314572800<br>
UID:
60:06:01:60:86:F0:16:00:E4:6F:56:7A:DB:8C:DA:11<br>
Is Private: NO<br>
Snapshots List: None<br>
MirrorView Name if any: Not Mirrored<br>
<br>
The output of scsi_id is the same for every device (it matches the UID of the mail LUN):<br>
<br>
server2 block # scsi_id -g -u -s /block/sdb<br>
36006016086f01600e46f567adb8cda11<br>
server2 block # scsi_id -g -u -s /block/sdc<br>
36006016086f01600e46f567adb8cda11<br>
server2 block # scsi_id -g -u -s /block/sdd<br>
36006016086f01600e46f567adb8cda11<br>
server2 block # scsi_id -g -u -s /block/sde<br>
36006016086f01600e46f567adb8cda11<br>
server2 block # scsi_id -g -u -s /block/sdf<br>
36006016086f01600e46f567adb8cda11<br>
server2 block # scsi_id -g -u -s /block/sdg<br>
36006016086f01600e46f567adb8cda11<br>
server2 block # scsi_id -g -u -s /block/sdh<br>
36006016086f01600e46f567adb8cda11<br>
server2 block # scsi_id -g -u -s /block/sdi<br>
36006016086f01600e46f567adb8cda11<br>
<br>
I can see and write only to the LUN called mail; I'm not able to see ql.<br>
<br>
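In case the ql LUN is simply not presented to this host yet, this is roughly how
I would rescan the HBAs and re-check the devices without rebooting (the host
numbers 1 and 2 are taken from the multipath output below, so this is only a
sketch):<br>
<br>
# rescan both FC hosts so newly mapped LUNs show up<br>
echo "- - -" &gt; /sys/class/scsi_host/host1/scan<br>
echo "- - -" &gt; /sys/class/scsi_host/host2/scan<br>
# check what the kernel sees now and rebuild the multipath maps<br>
cat /proc/scsi/scsi<br>
multipath -v2<br>
<br>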
Thanks again for your help.<br>
<br>
Regards,<br>
Nicola<br>
<br>
<br>
Bernd Zeimetz wrote:
<blockquote cite="mid43D6D16D.1060909@bzed.de" type="cite">
<pre wrap="">Hi,
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre wrap="">I'm trying to configure a new emc cx storage, I have the following:
I defined two lun on the storage however I'm able to see only one LUN
multipath -l show the following:
multipath -l
mail (36006016086f01600e46f567adb8cda11)
[size=150 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [active]
\_ 1:0:1:0 sdd 8:48 [active][ready]
\_ 1:0:1:1 sde 8:64 [active][ready]
\_ 2:0:1:0 sdh 8:112 [active][ready]
\_ 2:0:1:1 sdi 8:128 [active][ready]
\_ round-robin 0 [enabled]
\_ 2:0:0:0 sdf 8:80 [active][ready]
\_ 1:0:0:0 sdb 8:16 [active][ready]
\_ 1:0:0:1 sdc 8:32 [active][ready]
\_ 2:0:0:1 sdg 8:96 [active][ready]
</pre>
</blockquote>
</blockquote>
<pre wrap=""><!---->did you probably just share one raid group via 2 LUNs? Then the output
of multipath is imho indeed right. For me it shows the setup you've
described - one raidgroup, shared via 2 LUNs (usually one per default on
SP A, one on SP B, attached via 2 paths per SP to a switch, accessed
from your server via 2 HBAs.
If you have access to the Navisphere cli tool - please post the output of
./navicli -h IP_OF_ONE_OF_YOUR_SPs getlun
I think this will show that my hint is right.
Imho you want to create a second raid group and share it via 2 LUNs, too.
</pre>
<blockquote type="cite">
<pre wrap=""><a class="moz-txt-link-freetext" href="http://christophe.varoqui.free.fr/wiki/wakka.php?wiki=TestedEnvironments">http://christophe.varoqui.free.fr/wiki/wakka.php?wiki=TestedEnvironments</a>
</pre>
</blockquote>
<pre wrap=""><!---->I hope there's nothing wring with the stuff I've written about the
EMC/CX in there, if you find anything please let me know - I don't look
in there often.
Hope that helps,
best regards
Bernd
</pre>
</blockquote>
<br>
</body>
</html>