[Linux-cluster] Problem with SAN after migrating to RH cluster suite

Robert Peterson rpeterso at redhat.com
Mon Mar 19 17:42:14 UTC 2007


Hartin, Brian wrote:
> Hello all,
> 
> I'm relatively new to Linux, so forgive me if this question seems off.
> 
> We recently moved from a cluster running RHEL 4/Veritas to a new cluster
> running RHEL 4/Red Hat Cluster Suite.  In both cases, a SAN was
> involved.
> 
> After migrating, we see a considerable increase in the time it takes to
> mount the SAN.  Some of our init.d scripts fail because the SAN is not
> up yet.  Our admin tried changing run levels to make the scripts run
> later, but this doesn't help.  One can even log in via SSH shortly after
> boot and the SAN is not yet mounted.  Could this be normal behavior?
> When a service needs access to files on the SAN should it be started by
> some cluster mechanism?  Or should we be looking for some underlying
> problem?
> 
> Incidentally, the files on the SAN are not config files, they are data.
> All config files are on local disk.
> 
> Thanks for any help,
> 
> B
Hi Brian,

I'm not quite sure I understand what the problem is.  Under ordinary
circumstances, there should be no extra time required as far as I know.
If you have all of your cluster startup scripts in place and enabled
with chkconfig, then I think you should be able to mount immediately.
Which init scripts are failing because of the SAN, and what do they say
when they fail?

In theory, you should be able to have all of these init scripts turned 
"on" so they run at init time:

ccsd
cman
fenced
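
If any of those are off, you can enable them so they start
automatically at boot (and likewise for gfs and rgmanager below,
if you use them), for example:

chkconfig ccsd on
chkconfig cman on
chkconfig fenced on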

If you have GFS mount points in your /etc/fstab, you may also want to
enable:

gfs
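
For reference, a GFS entry in /etc/fstab might look something like
this (the device path and mount point below are just placeholders):

/dev/vg_cluster/lv_data  /mnt/data  gfs  defaults  0 0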

(You may also want to enable rgmanager if you're using rgmanager
failover services for high availability.)

You can check which of these are enabled with this command:

chkconfig --list | grep "ccsd\|cman\|fenced\|gfs\|rgmanager"
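
On a node that's set up to start the cluster at boot, the output
should show each service "on" in the runlevels you boot into; the
runlevels below are just an example:

ccsd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
cman            0:off   1:off   2:on    3:on    4:on    5:on    6:off
fenced          0:off   1:off   2:on    3:on    4:on    5:on    6:off
gfs             0:off   1:off   2:on    3:on    4:on    5:on    6:off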

So a GFS file system should mount automatically while the system is
booting.  I don't recommend messing with the order of the init
scripts, though.
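
If you want to verify the order without changing it, the start order
is encoded in the S-numbers of the symlinks in the runlevel directory,
for example:

ls /etc/rc3.d | grep "ccsd\|cman\|fenced\|gfs"
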
I hope this helps.

Regards,

Bob Peterson
Red Hat Cluster Suite



