[Linux-cluster] CCA Partition Invisible from 2nd Node

Steve Nelson sanelson at gmail.com
Sat Sep 10 13:52:22 UTC 2005


Hello All,

I'm in the process of building an Oracle cluster on RHAS 3.0 / GFS
6.0, using 2 x DL580s, an MSA1000 and a DL380 as the quorum server.
I've hit what looks like a problem with the secondary node seeing the
CCA partition: it can't read the ccs files, which prevents me from
starting ccsd on that node.

Here's the setup:

On the MSA1000 I have created 4 LUNs:

=> controller serialnumber=P56350GX3R004S logicaldrive all show

MSA1000 at SGM0450039

  array A
   logicaldrive 1 (16.9 GB, RAID 1+0, OK)

  array B
   logicaldrive 2 (16.9 GB, RAID 1+0, OK)

  array C
   logicaldrive 3 (16.9 GB, RAID 1+0, OK)

  array D
   logicaldrive 4 (50.8 GB, RAID 5, OK)

My partitioning scheme is as follows:

/dev/sda1 - 100M (raw partitions used by cluster)
/dev/sdb1 - likewise

/dev/sda2 - 100M (CCA partition for GFS)
/dev/sdb2 - likewise

/dev/sda3
/dev/sdb3 - the rest - data partition

/dev/sdc
/dev/sdd - all - data partition.

I can see these partitions with fdisk on both nodes:

Disk /dev/sda: 18.2 GB, 18207375360 bytes
/dev/sda1             1        13    104391   83  Linux
/dev/sda2            14        26    104422+  83  Linux
/dev/sda3            27      2213  17567077+  83  Linux
Disk /dev/sdb: 18.2 GB, 18207375360 bytes
/dev/sdb1             1        13    104391   83  Linux
/dev/sdb2            14        26    104422+  83  Linux
/dev/sdb3            27      2213  17567077+  83  Linux
Disk /dev/sdc: 18.2 GB, 18207375360 bytes
Disk /dev/sdd: 54.6 GB, 54622126080 bytes

My pool config files are as below:

# more *cfg
::::::::::::::
digex_cca.cfg
::::::::::::::
poolname digex_cca
subpools 1
subpool 0 0 2
pooldevice 0 0 /dev/sda2
pooldevice 0 1 /dev/sdb2
::::::::::::::
gfs0.cfg
::::::::::::::
poolname gfs0
subpools 1
subpool 0 128 2
pooldevice 0 0 /dev/sda3
pooldevice 0 1 /dev/sdb3
::::::::::::::
gfs1.cfg
::::::::::::::
poolname gfs1
subpools 1
subpool 0 128 2
pooldevice 0 0 /dev/sdc
pooldevice 0 1 /dev/sdd
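
For completeness: I wrote the pool labels from these files with
pool_tool on the primary node. From memory the commands were along
these lines:

# pool_tool -c digex_cca.cfg
# pool_tool -c gfs0.cfg
# pool_tool -c gfs1.cfg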

Having run pool_assemble -a on both nodes, I wrote my ccs files, and
created the cluster archive.
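The archive-creation step was roughly as follows (the directory name
is from memory, so treat it as illustrative):

# ccs_tool create /root/cluster-ccs /dev/pool/digex_cca

i.e. the .ccs files sit in a local directory on the primary and
ccs_tool writes them into the CCA device.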

On node 1 I see:

[root at primary]/etc/gfs#  pool_info 
Major Minor Name      Alias          Capacity  In use   MP Type   MP Stripe
254      65 digex_cca /dev/poolbn      417632     YES        none      
254      66 gfs0      /dev/poolbo    70268160      NO        none      
254      67 gfs1      /dev/poolbp    71122432      NO        none      

[root at primary]/etc/gfs# ls -l /dev/pool
total 0
brw-------    2 root     root     254,  65 Sep  9 16:53 digex_cca
brw-------    2 root     root     254,  66 Sep  9 16:53 gfs0
brw-------    2 root     root     254,  67 Sep  9 16:53 gfs1

But on node 2 I see:

[root at secondary]~# pool_info
Major Minor Name Alias          Capacity  In use   MP Type   MP Stripe
254      65 gfs1 /dev/poolbn    71122432      NO        none      

[root at secondary]~# ls -l /dev/pool
total 0
brw-------    2 root     root     254,  65 Sep  9 16:53 gfs1

Consequently, when I try to restart ccsd on the secondary node, it
looks for the ccs files at the location specified in
/etc/sysconfig/gfs, and that location doesn't exist on this node.
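
For reference, /etc/sysconfig/gfs points ccsd at the CCA pool on both
nodes; give or take, the relevant line is something like:

CCS_ARCHIVE="/dev/pool/digex_cca"

so on the secondary, where /dev/pool/digex_cca never appears, ccsd
has nothing to read.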

Leaving aside the oddness of sdc and sdd being different sizes (that
can be reorganised later), I am concerned that the second node can't
see the CCA partition, and I am loath to simply copy the ccs files
onto the local machine.

I also note that gfs1 as seen on node 2 has the same alias and
major/minor numbers as digex_cca on node 1, but the same capacity as
the gfs1 seen by node 1. This suggests to me either a multipathing
problem or a configuration error.
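
If a device-level view would help, I can post a label scan from both
nodes. My understanding is that something like:

# pool_tool -s

lists each block device with any pool label found on it, which should
show whether sda2/sdb2 carry the digex_cca label as seen from the
secondary, and

# cat /proc/partitions

on each node would confirm that both see the same set of devices.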

I am not happy to continue with gfs_mkfs on /dev/pool/gfs[01] at this
stage, and would like some advice on why I can't see or access the
CCA partition.

I'd appreciate your thoughts and advice on how to continue!

Thanks a lot!

Steve Nelson



