[Linux-cluster] FC Fabric and GFS

Robert Peterson rpeterso at redhat.com
Sat Oct 21 04:16:54 UTC 2006


isplist at logicore.net wrote:
> Is there some weird problem with McData FC switches and GFS? I finally got my 
> cluster working (***finally***) and now I can't see any of the storage.
>
> The McData is a fabric switch. I'm not sure what info someone would need in 
> order to give me any feedback, so I will reply with whatever I am asked for.
>
> This mess has been down for over a week now. Trying to get this McData ED-5000 
> switch working with everything has been nearly enough to make me quit 
> computers for good.
>
> Please, ANY help in trying to get my storage seen from GFS would be wonderful. 
> If I'm on the wrong list, perhaps you can point me to another one.
>
> Mike
>   
Hi Mike,

I don't know how helpful this will be, because I'm not sure exactly what
you mean by it not seeing your storage (there are many layers you could
be referring to), and I'm not familiar with McData.  However:

The first step is to load the device driver for your Host Bus Adapter
and check dmesg to see whether it reports any problems or finds the
hardware.
Here's what mine looks like for a QLogic 2400 HBA:

QLogic Fibre Channel HBA Driver
ACPI: PCI interrupt 0000:01:00.0[A] -> GSI 16 (level, low) -> IRQ 169
qla2400 0000:01:00.0: Found an ISP2432, irq 169, iobase 0xffffff0000014000
qla2400 0000:01:00.0: Configuring PCI space...
PCI: Setting latency timer of device 0000:01:00.0 to 64
qla2400 0000:01:00.0: Configure NVRAM parameters...
qla2400 0000:01:00.0: Verifying loaded RISC code...
qla2400 0000:01:00.0: Allocated (1061 KB) for firmware dump...
qla2400 0000:01:00.0: Waiting for LIP to complete...
qla2400 0000:01:00.0: LIP reset occured (f8f7).
qla2400 0000:01:00.0: LIP occured (f8f7).
qla2400 0000:01:00.0: LIP reset occured (f7f7).
qla2400 0000:01:00.0: LOOP UP detected (4 Gbps).
qla2400 0000:01:00.0: Topology - (F_Port), Host Loop address 0x0
scsi2 : qla2xxx
qla2400 0000:01:00.0:
 QLogic Fibre Channel HBA Driver: 8.01.04-d7
  QLogic QLE2460 - PCI-Express to 4Gb FC, Single Channel
  ISP2432: PCIe (2.5Gb/s x4) @ 0000:01:00.0 hdma+, host#=2, fw=4.00.18 [IP]
  Vendor: WINSYS    Model: SA3482            Rev: 347B
  Type:   Direct-Access                      ANSI SCSI revision: 03
qla2400 0000:01:00.0: scsi(2:0:0:0): Enabled tagged queuing, queue depth 32.
SCSI device sdc: 2342664192 512-byte hdwr sectors (1199444 MB)
SCSI device sdc: drive cache: write back
SCSI device sdc: 2342664192 512-byte hdwr sectors (1199444 MB)
SCSI device sdc: drive cache: write back
 sdc: unknown partition table
Attached scsi disk sdc at scsi2, channel 0, id 0, lun 0

From these messages, you can tell that the SAN is working and is
recognized as /dev/sdc by the kernel.
If it complains about LIP errors and the like, it could be your cable
or connectors.
If the HBA seems to work, but doesn't see the SAN, you may need to
configure the fabric, and I have no idea how to do that on McData.
I've only watched someone else do it--in a hurry--and on a Tornado,
and that was using a web interface.
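If your kernel has the fc transport class, you can also check the link
state through sysfs, using the scsi host number from dmesg, e.g.:
   cat /sys/class/fc_host/host2/port_state
That should say "Online" once the link to the switch is up.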

Next, cat /proc/partitions to make sure your SAN shows up there.
In my case, I see a line that looks like this:
   8    32 1171332096 sdc

Third, make sure your /etc/lvm/lvm.conf isn't filtering out the device,
thereby making it invisible as far as lvm is concerned.  A line like this:
   filter = [ "r/sdc/", "r/disk/", "a/.*/" ]
would make /dev/sdc invisible to lvm.
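Conversely, a wide-open filter such as the default:
   filter = [ "a/.*/" ]
accepts every device, so lvm will scan /dev/sdc.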

Next, check your /etc/lvm/lvm.conf for locking_type = 2.
You'll need that to share access to the SAN on the cluster.
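A quick way to check:
   grep locking_type /etc/lvm/lvm.conf
which should print something like:
   locking_type = 2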

Next, make sure your clvmd service is started.  If not, do:
   service clvmd start
(and perhaps chkconfig clvmd on).
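You can verify it's running with:
   service clvmd status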

Next, try a vgscan followed by a vgchange -aly to make lvm look for it
again, although the clvmd service should take care of this.
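In other words:
   vgscan
   vgchange -aly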

Next, use the pvdisplay command to see if lvm recognizes the
physical volumes.  If you haven't created one yet, you'll probably
want to, like this:
   pvcreate /dev/sdc

Use vgs to see if your volume group was found.  If you haven't
created one yet, you can do so with vgcreate:
   vgcreate mikes_vg /dev/sdc

Finally, use lvs to make sure it sees your logical volume.
If you haven't created one, do something like:
   lvcreate -L 39G mikes_vg

I hope this was helpful.  If not, post which part you're stuck on.

Regards,

Bob Peterson
Red Hat Cluster Suite



