[dm-devel] Needing group_by_target/group_by_node_name

christophe varoqui christophe.varoqui at free.fr
Sun Dec 19 17:52:19 UTC 2004


On Friday, December 17, 2004, at 17:19 +0000, Arthur Bergman wrote:
> Hi,
> 
> I got dm-multipath working using multibus and roundrobin against an 
> IBM SVC (SAN Volume Controller) backed SAN, instead of using the 
> IBM-supplied SDD, which only works under 2.4. The SVC is fronting a 
> number of different IBM/LSILogic RAID controllers. Everything is 
> working fine and failover works (it takes 30 seconds, but I haven't 
> actually tried to tweak the QLA settings yet, so I expect that to go 
> down).
> 
Yes, testers report good behaviour with qlport_down_retry=1
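
(For the record, qlport_down_retry is a qla driver module option; the
usual way to set it is a line in /etc/modprobe.conf along the lines of

    options qla2xxx qlport_down_retry=1

but the exact module name depends on your qla driver version, so check
the driver documentation before copying that line.)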

> The SVC is actually two Linux boxes with 4 FC ports each. All ports in 
> the cluster are active and there is no failover cost for using paths 
> to different nodes in the cluster; we managed to see over 200 MB/sec 
> sequential write speed against it, which is in line with expected 
> performance. However, the read speed is much slower than with SDD, 
> between 30% and 70% of expected. After some spec reading it seems that 
> even if there is no failover penalty for switching between SVC cluster 
> nodes, there is of course a cache penalty, since each SVC node has its 
> own 8GB of cache. So keeping your I/O on one controller is a good idea.
> 
> However, the only way I can think of to identify which SVC node a path 
> goes to is the target node_name information 
> (/sys/class/fc_transport/target0\:0\:0/node_name), so I have to look 
> that up somehow. I am currently unable to figure out how to find the 
> target for a given block device using /sys.
> 
Doesn't the box give usable serials on SCSI INQ?

The multipath tool already fetches that (you can see the values when
debug is enabled), and proposes to group by these strings.

I guess it could be useful to fetch the target node_name anyway.
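
If it helps in the meantime, the mapping should already be visible in
sysfs: /sys/block/<dev>/device is a symlink into the SCSI device tree,
and the targetH:B:T component of that path names the fc_transport
entry. Roughly (layout assumed from a 2.6 kernel, so double-check on
your box):

    /sys/block/sda/device -> .../host0/target0:0:0/0:0:0:0
    /sys/class/fc_transport/target0:0:0/node_name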

> I assume that once I have done this, all I need to do is write my own 
> priority-finding program that looks these up and groups them into 
> priorities depending on the target node_name.
> 
Might work too.
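
Just to sketch what such a callout could look like (this is only an
illustration: the preferred WWNN, the device-name argument and the
priority values are all made up, and the sysfs layout is assumed from
a 2.6 kernel, so adapt it to whatever interface your multipath-tools
version actually expects):

    #!/usr/bin/env python
    # Toy priority callout: print a higher priority for paths whose FC
    # target node_name matches a preferred SVC node.
    # Sysfs layout assumed from a 2.6 kernel; the WWNN below is made up.
    import os, re, sys

    PREFERRED_WWNN = "0x5005076801000001"   # hypothetical SVC node WWNN

    def target_node_name(blockdev):
        # /sys/block/sda/device points into the SCSI device tree,
        # e.g. .../host0/target0:0:0/0:0:0:0
        devpath = os.path.realpath("/sys/block/%s/device" % blockdev)
        m = re.search(r"(target\d+:\d+:\d+)", devpath)
        if not m:
            return None
        # the fc_transport class exposes the node_name (WWNN) of that target
        f = open("/sys/class/fc_transport/%s/node_name" % m.group(1))
        try:
            return f.read().strip()
        finally:
            f.close()

    if __name__ == "__main__":
        # called as: prio_by_node_name.py sda
        wwnn = target_node_name(sys.argv[1])
        print(2 if wwnn == PREFERRED_WWNN else 1)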

> Thanks for the work on this; we were able to add a second LUN, add it 
> to our LVM group and then grow the filesystem on top of it without any 
> downtime!
> 
So you owe me an entry in the TestedEnvironments page in the Wiki :)

regards,
-- 
christophe varoqui <christophe.varoqui at free.fr>




