[Linux-cluster] question about creating partitions and gfs
Jason
jason at monsterjam.org
Fri May 12 01:51:49 UTC 2006
OK, so reading the docs and your example, they reference /dev/sdb1.
Is this still the 10 MB partition that I create with fdisk?
Then what about the rest of the disk? Do I need to reference it as a pooldevice as well?
i.e.
/dev/sdb1 <-- 10 MB partition
/dev/sdb2 <-- rest of the logical disk??
Jason
On Thu, May 11, 2006 at 07:16:14AM -0400, Kovacs, Corey J. wrote:
> Jason, the docs should run through the creation of the pool devices. They
> can be a bit of a labyrinth though, so here is an example called
> "pool_cca.cfg".
>
>
> <----cut here---->
> poolname pool_cca          # name of the pool/volume to create
> subpools 1                 # how many subpools make up this pool/volume (always starts as 1)
> subpool 0 128 1 gfs_data   # first subpool, zero indexed, 128k stripe, 1 device
> pooldevice 0 0 /dev/sdb1   # physical device for subpool 0, device 0 (again, zero indexed)
> <----end cut here---->
>
> Additional pools just need a different "poolname" and "pooldevice".
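>
> For example, a data pool built on the rest of the disk might look like this
> (just an illustration; the name "pool_gfs01" and the device /dev/sdb2 are
> assumptions, adjust them to your layout). Call it "pool_gfs01.cfg":
>
> <----cut here---->
> poolname pool_gfs01        # data pool that will hold a GFS filesystem
> subpools 1                 # one subpool to start with
> subpool 0 128 1 gfs_data   # subpool 0, 128k stripe, 1 device
> pooldevice 0 0 /dev/sdb2   # the rest of the disk
> <----end cut here---->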
>
> NOTE: the cluster nodes all need to "see" the devices listed as pooldevices
> the same way, i.e. every node sees the second physical disk as /dev/sdb, the
> third as /dev/sdc, and so on.
>
>
> Now, if you make /dev/sdb1 about 10MB, you'll have enough space to create a
> cluster config pool. Then to actually use it, you need to do the following...
>
> pool_tool -c pool_cca.cfg
>
> then you can issue ...
>
> service pool start
>
> on all nodes. Just make sure all nodes have a clean view of the partition
> table (reboot, or issue partprobe).
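>
> A quick sanity check (not from the docs, just a habit): once "service pool
> start" has run on a node, the pool device should show up under /dev/pool,
> e.g.
>
> ls -l /dev/pool/           # should list pool_cca on every node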
>
> Once you have the cca pool created and activated, you can apply the cluster
> config to it...
>
> ccs_tool create /path/to/configs/ /dev/pool/pool_cca
>
> Then do a "service ccsd start" on all nodes, followed by "service lock_gulmd
> start" on all nodes.
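>
> So on each node the startup order ends up being roughly this (a sketch of
> the order, not an exact script):
>
> service pool start         # assemble the pools
> service ccsd start         # serve the cluster config from /dev/pool/pool_cca
> service lock_gulmd start   # start the gulm lock daemon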
>
> To check whether things are working, do...
>
> gulm_tool nodelist nameofalockserver
>
> and you should see a list of your nodes and some info about each one.
>
> That should be enough to get you started. To add storage for actual GFS
> filesystems, simply create more pools. You can also expand pools by adding
> subpools after creation. It's sort of a poor man's volume management, if you
> will. It can be done on a running system, and the filesystem on top of it can
> be expanded live as well.
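>
> In other words, once a data pool exists, the rest is the usual GFS routine.
> Roughly, and only as a sketch (the cluster name "tfcluster", the filesystem
> name "gfs01" and the mount point are made up for the example; the cluster
> name must match the one in your cluster.ccs):
>
> # make the filesystem with 2 journals (one per node), run once
> gfs_mkfs -p lock_gulm -t tfcluster:gfs01 -j 2 /dev/pool/pool_gfs01
>
> # mount it on each node
> mount -t gfs /dev/pool/pool_gfs01 /mnt/gfs01
>
> # later, after the underlying pool has been grown, the mounted
> # filesystem can be grown in place
> gfs_grow /mnt/gfs01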
>
>
> Anyway, hope this helps...
>
>
> Corey
>
>
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Jason
> Sent: Wednesday, May 10, 2006 8:53 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] question about creating partitions and gfs
>
> Ummm, I was thinking that was the answer too, but I have no idea what the
> "pool" device is. How can I tell?
>
> Jason
>
>
> On Wed, May 10, 2006 at 08:33:04AM -0400, Kovacs, Corey J. wrote:
> > Jason, I just realized what the problem is. You need to apply the
> > config to a "pool", not a normal device. What do your pool
> > definitions look like? The one you created for the config is the one
> > you need to point ccs_tool at to activate the config...
> >
> >
> > Corey
> >
> > -----Original Message-----
> > From: linux-cluster-bounces at redhat.com
> > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Kovacs, Corey J.
> > Sent: Wednesday, May 10, 2006 8:31 AM
> > To: linux clustering
> > Subject: RE: [Linux-cluster] question about creating partitions and
> > gfs
> >
> > Jason, a couple of questions (and I assume you are working with
> > RHEL3 + GFS 6.0.x)...
> >
> >
> > 1. Are you actually using raw devices? If so, why?
> > 2. Does the device /dev/raw/raw64 actually exist on tf2?
> >
> >
> > GFS does not use raw devices for anything. The standard Red Hat Cluster
> > Suite does, but not GFS. GFS uses "storage pools". Also, if memory
> > serves me right, later versions of GFS for RHEL3 need to be told which
> > pools to use in the "/etc/sysconfig/gfs" config file. It used to be that
> > GFS just did a scan and "found" the pools, but no longer, I believe.
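> >
> > From memory it is something along these lines (treat the variable name as
> > hypothetical and check the comments in your own /etc/sysconfig/gfs; the
> > pools to assemble at boot may also be listed there):
> >
> > # /etc/sysconfig/gfs (illustrative only, variable names may differ)
> > CCS_ARCHIVE="/dev/pool/pool_cca"   # pool holding the cluster config archive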
> >
> > Hope this helps. If not, can you give more details about your config?
> >
> >
> >
> > Corey
> >
> >
> > -----Original Message-----
> > From: linux-cluster-bounces at redhat.com
> > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Jason
> > Sent: Tuesday, May 09, 2006 8:23 PM
> > To: linux clustering
> > Subject: Re: [Linux-cluster] question about creating partitions and
> > gfs
> >
> > Yes, both boxes are connected to the storage; it's a Dell PowerVault
> > 220S configured for cluster mode.
> >
> > [root at tf1 cluster]# fdisk -l /dev/sdb
> >
> > Disk /dev/sdb: 146.5 GB, 146548981760 bytes
> > 255 heads, 63 sectors/track, 17816 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/sdb1               1        2433    19543041   83  Linux
> > [root at tf1 cluster]#
> >
> > [root at tf2 cluster]# fdisk -l /dev/sdb
> >
> > Disk /dev/sdb: 146.5 GB, 146548981760 bytes
> > 255 heads, 63 sectors/track, 17816 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> >
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/sdb1               1        2433    19543041   83  Linux
> > [root at tf2 cluster]#
> >
> >
> > So both sides see the storage.
> >
> > On tf1, I can start ccsd fine, but on tf2 I can't, and I see this in
> > the logs:
> >
> > May  8 22:00:21 tf2 ccsd: Unable to open /dev/sdb1 (/dev/raw/raw64): No such device or address
> > May  8 22:00:21 tf2 ccsd: startup failed
> > May  9 20:17:21 tf2 ccsd: Unable to open /dev/sdb1 (/dev/raw/raw64): No such device or address
> > May  9 20:17:21 tf2 ccsd: startup failed
> > May  9 20:17:30 tf2 ccsd: Unable to open /dev/sdb1 (/dev/raw/raw64): No such device or address
> > May  9 20:17:30 tf2 ccsd: startup failed
> > [root at tf2 cluster]#
> >
> > Jason
> >
> >
> >
> >
> > On Tue, May 09, 2006 at 08:16:07AM -0400, Kovacs, Corey J. wrote:
> > > Jason, IIRC the Dell's internal disks show up as /dev/sd* devices.
> > > Do you have a shared storage device? If /dev/sdb1 is not a shared
> > > device, then I think you might need to take a step back and get
> > > hold of a SAN of some type. If you are just playing around, there
> > > are ways to get some FireWire drives to accept two hosts and act
> > > like a cheap shared device. There are docs on the Oracle site
> > > documenting the process of setting up the drive and the kernel.
> > > Note that you'll only be able to use two nodes with the FireWire
> > > idea.
> > >
> > > Also, you should specify a partition for the command below. That
> > > partition can be very small. Something on the order of 10MB sounds
> > > right. Even that is probably way too big. Then use the rest for GFS
> > > storage pools.
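> > >
> > > In fdisk that would look something like this (illustrative keystrokes
> > > only; sizes and device names are just examples):
> > >
> > > fdisk /dev/sdb
> > >   n         # new partition
> > >   p         # primary
> > >   1         # partition 1: the small cluster-config partition
> > >   <Enter>   # accept the default first cylinder
> > >   +10M      # roughly 10MB
> > >   n         # new partition
> > >   p         # primary
> > >   2         # partition 2: the rest of the disk for GFS pools
> > >   <Enter>   # accept the default first cylinder
> > >   <Enter>   # accept the default last cylinder (use the remainder)
> > >   w         # write the table and exit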
> > >
> > >
> > > Corey
> > >
> > > -----Original Message-----
> > > From: linux-cluster-bounces at redhat.com
> > > [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Jason
> > > Sent: Monday, May 08, 2006 9:32 PM
> > > To: linux-cluster at redhat.com
> > > Subject: [Linux-cluster] question about creating partitions and gfs
> > >
> > > So, still following the instructions at
> > > http://www.gyrate.org/archives/9
> > > I'm at the part that says
> > >
> > > "# ccs_tool create /root/cluster /dev/iscsi/bus0/target0/lun0/part1"
> > >
> > > In my config, I have the Dell PERC 4/DC cards, and I believe the
> > > logical drive showed up as /dev/sdb.
> > >
> > > So do I need to create a partition on this logical drive with fdisk
> > > first, before I run
> > >
> > > ccs_tool create /root/cluster /dev/sdb1
> > >
> > > or am I totally off track here?
> > >
> > > I did "ccs_tool create /root/cluster /dev/sdb" and it seemed to work
> > > fine, but that doesn't seem right...
> > >
> > > Jason
> > >
--
================================================
| Jason Welsh jason at monsterjam.org |
| http://monsterjam.org DSS PGP: 0x5E30CC98 |
| gpg key: http://monsterjam.org/gpg/ |
================================================