[Linux-cluster] question about creating partitions and gfs

Jason jason at monsterjam.org
Thu May 11 00:53:21 UTC 2006


Hmm, I was thinking that was the answer too, but I have no idea what the "pool"
device is. How can I tell?

Jason


On Wed, May 10, 2006 at 08:33:04AM -0400, Kovacs, Corey J. wrote:
> Jason, I just realized what the problem is. You need to apply the config to a
> "pool", not a normal device. What do your pool definitions look like? The one
> you created for the config is where you need to point ccs_tool at to activate
> the config...
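To find and use the pool device, something like the following should work on GFS 6.0. This is a sketch from memory of the GFS 6.0 pool tools; the pool name "cca" is hypothetical, so check `pool_tool`'s output for your actual names:

```shell
# Assemble any pools defined on the shared storage, then scan for them.
# (GFS 6.0 pool tools; exact output format may vary by version.)
pool_assemble -a          # activates pools; devices appear under /dev/pool/
pool_tool -s              # scan devices and report pool labels

# Then point ccs_tool at the pool device, not the raw partition,
# e.g. (pool name "cca" is hypothetical):
ccs_tool create /root/cluster /dev/pool/cca
```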
> 
> 
> Corey 
> 
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Kovacs, Corey J.
> Sent: Wednesday, May 10, 2006 8:31 AM
> To: linux clustering
> Subject: RE: [Linux-cluster] question about creating partitions and gfs
> 
> Jason, a couple of questions... (And I assume you are working with
> RHEL3 + GFS 6.0.x)
> 
> 
> 1. Are you actually using raw devices? If so, why?
> 2. Does the device /dev/raw/raw64 actually exist on tf2?
> 
> 
> GFS does not use raw devices for anything. The standard Red Hat Cluster Suite
> does, but not GFS. GFS uses "storage pools".  Also, if memory serves me right,
> later versions of GFS for RHEL3 need to be told which pools to use in the
> "/etc/sysconfig/gfs" config file. It used to be that GFS just did a scan and
> "found" the pools, but I believe that is no longer the case.
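If your release does require /etc/sysconfig/gfs, the entry was a shell-style variable naming the pools to assemble at startup. This is a sketch from memory; the variable name and the pool names are assumptions, so verify against the GFS 6.0 documentation for your release:

```shell
# /etc/sysconfig/gfs (sketch; variable name and pool names are assumptions,
# not verified against a specific GFS 6.0 release)
POOLS="cca mypool"
```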
> 
> Hope this helps. If not, can you give more details about your config? 
> 
> 
> 
> Corey
> 
> 
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Jason
> Sent: Tuesday, May 09, 2006 8:23 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] question about creating partitions and gfs
> 
> yes, both boxes are connected to the storage; it's a Dell PowerVault 220S
> configured for cluster mode.
> 
> [root at tf1 cluster]#  fdisk -l /dev/sdb
> 
> Disk /dev/sdb: 146.5 GB, 146548981760 bytes
> 255 heads, 63 sectors/track, 17816 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot    Start       End    Blocks   Id  System
> /dev/sdb1             1      2433  19543041   83  Linux
> [root at tf1 cluster]# 
> 
> [root at tf2 cluster]# fdisk -l /dev/sdb
> 
> Disk /dev/sdb: 146.5 GB, 146548981760 bytes
> 255 heads, 63 sectors/track, 17816 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> 
>    Device Boot    Start       End    Blocks   Id  System
> /dev/sdb1             1      2433  19543041   83  Linux
> [root at tf2 cluster]# 
> 
> 
> so both sides see the storage.  
> 
> on tf1, I can start ccsd fine, but on tf2, I can't, and I see this in the
> logs:
> 
> May  8 22:00:21 tf2 ccsd: Unable to open /dev/sdb1 (/dev/raw/raw64): No such device or address
> May  8 22:00:21 tf2 ccsd: startup failed
> May  9 20:17:21 tf2 ccsd: Unable to open /dev/sdb1 (/dev/raw/raw64): No such device or address
> May  9 20:17:21 tf2 ccsd: startup failed
> May  9 20:17:30 tf2 ccsd: Unable to open /dev/sdb1 (/dev/raw/raw64): No such device or address
> May  9 20:17:30 tf2 ccsd: startup failed
> [root at tf2 cluster]# 
> 
> Jason
> 
> 
> 
> 
> On Tue, May 09, 2006 at 08:16:07AM -0400, Kovacs, Corey J. wrote:
> > Jason, IIRC the Dell's internal disks show up as /dev/sd* devices.
> > Do you have a shared storage device? If /dev/sdb1 is not a shared 
> > device, then I think you might need to take a step back and get a hold 
> > of a SAN of some type. If you are just playing around, there are ways 
> > to get some FireWire drives to accept two hosts and act like a cheap
> > shared device. There are docs on the Oracle site documenting the
> > process of setting up the drive and the kernel. Note that you'll only
> > be able to use two nodes with the FireWire idea.
> > 
> > Also, you should specify a partition for the command below. That 
> > partition can be very small. Something on the order of 10MB sounds 
> > right. Even that is probably way too big. Then use the rest for GFS 
> > storage pools.
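The layout described above could be set up roughly like this. It is a sketch assuming GFS 6.0's pool tools; the partition sizes, pool name, and the exact pool-config keywords are from memory and should be checked against the GFS 6.0 manual:

```shell
# On one node: carve a small partition (/dev/sdb1, ~10MB) for the
# cluster config archive (CCA), and a large one (/dev/sdb2) for GFS data.
fdisk /dev/sdb    # interactively create sdb1 (~10MB) and sdb2 (rest), type 83

# Pool definition for the CCA device (keywords per the GFS 6.0 docs;
# the pool name "cca" is hypothetical):
cat > cca.cfg <<'EOF'
poolname cca
subpools 1
subpool 0 0 1
pooldevice 0 0 /dev/sdb1
EOF

pool_tool -c cca.cfg      # write the pool label to the partition
pool_assemble -a          # activate; device appears as /dev/pool/cca
ccs_tool create /root/cluster /dev/pool/cca
```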
> > 
> > 
> > Corey
> > 
> > -----Original Message-----
> > From: linux-cluster-bounces at redhat.com 
> > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Jason
> > Sent: Monday, May 08, 2006 9:32 PM
> > To: linux-cluster at redhat.com
> > Subject: [Linux-cluster] question about creating partitions and gfs
> > 
> > So, still following the instructions at
> > http://www.gyrate.org/archives/9
> > I'm at the part that says
> > 
> > "# ccs_tool create /root/cluster /dev/iscsi/bus0/target0/lun0/part1"
> > 
> > In my config, I have the Dell PERC 4/DC cards, and I believe the 
> > logical drive showed up as /dev/sdb.
> > 
> > so do I need to create a partition on this logical drive with fdisk 
> > first before I run
> > 
> >  ccs_tool create /root/cluster  /dev/sdb1
> > 
> > or am I totally off track here?
> > 
> > I did ccs_tool create /root/cluster /dev/sdb and it seemed to work 
> > fine, but that doesn't seem right.
> > 
> > Jason
> > 
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> > 
> 
> --
> ================================================
> |    Jason Welsh   jason at monsterjam.org        |
> | http://monsterjam.org    DSS PGP: 0x5E30CC98 |
> |    gpg key: http://monsterjam.org/gpg/       |
> ================================================
> 
> 
> 
