[Linux-cluster] GNBD+RAID5
Shih-Che Huang
schuan2 at gmail.com
Wed Dec 1 04:48:52 UTC 2004
Hi Shih-Che,
If I understand correctly, you have three nodes (PCs), each with a local
hard disk of 35 GB. You want to make a RAID-5 array using these three
hard disks and at the same time make the data accessible from all three
nodes. Is that correct?
I am sorry, but I haven't tried GNBD myself yet, so I don't know how much
I can help you. But let me try.
>#gfs_mkfs -p lock_gulm -t alpha:gfstest -j 3 /dev/pool/storage
>#mount -t gfs /dev/pool/storage /gfs1
Instead of making one big 70 GB pool, make three pools,
/dev/pool/storage1, 2, and 3, and then you can mount them onto /gfs1, 2,
and 3.
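For example, each pool could get its own small config file, something like
this for the first device (a rough sketch only; the third device name and
the pool/lock-table names below are my assumptions, so adjust them to your
setup):

storage1.cfg:
    poolname storage1
    subpools 1
    subpool 0 128 1 gfs_data
    pooldevice 0 0 /dev/gnbd/gfstest

Repeat with storage2.cfg and storage3.cfg for the other two GNBD devices,
then create, activate, and mount each pool:

    #pool_tool -c storage1.cfg
    #pool_assemble -a
    #gfs_mkfs -p lock_gulm -t alpha:gfstest1 -j 3 /dev/pool/storage1
    #mount -t gfs /dev/pool/storage1 /gfs1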
You can then use the Linux MD (multiple devices) driver to create a
RAID-5 array (software RAID). You will need to edit /etc/raidtab; check
this link:
http://www.tldp.org/HOWTO/Software-RAID-HOWTO-5.html#ss5.8
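For the RAID-5 part, a minimal /etc/raidtab along the lines of that HOWTO
might look like the following, assuming the array is built over the three
GNBD-imported devices (the third device name, /dev/gnbd/gfstest2, is my
assumption):

    raiddev /dev/md0
            raid-level              5
            nr-raid-disks           3
            nr-spare-disks          0
            persistent-superblock   1
            parity-algorithm        left-symmetric
            chunk-size              32
            device                  /dev/gnbd/gfstest
            raid-disk               0
            device                  /dev/gnbd/gfstest1
            raid-disk               1
            device                  /dev/gnbd/gfstest2
            raid-disk               2

After that, mkraid /dev/md0 should create the array, and you could then put
your pool and GFS on top of /dev/md0 instead of on the individual devices.
Again, I have not tried running MD on top of GNBD, so treat this as a
sketch only.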
This is just my thought, though, and I don't know if it works. You may
want to post this question and my reply to linux-cluster and see if any
experts can help you out.
Also, it looks like PVFS (Parallel Virtual File System) may address your
problem. Have a look at "What is PVFS?" in the FAQ:
http://www.parl.clemson.edu/pvfs/pvfs-faq.html#what
Hope this helps!
Raj
On Tue, 30 Nov 2004, Shih-Che Huang wrote:
>Hi Raj,
>
>Following is my idea.
>Do you think that I can do it under GNBD?
>
>I have two nodes, each with 35 GB, and I want to combine the two
>storages together into 70 GB.
>
>Under this setup, I can use the whole 70 GB of storage.
>
> Master /gfs1 (70 GB)
> |
> |
> / \
> / \
> Kh00 Kh01
> 35GB 35GB
>
>I import GNBD from Kh00 and Kh01, and then I have the following storage.cfg:
>
>poolname storage
>minor subpools 2
>subpool 0 128 1 gfs_data
>subpool 1 128 1 gfs_data
>pooldevice 0 0 /dev/gnbd/gfstest
>pooldevice 1 0 /dev/gnbd/gfstest1
>
>=======================
>#gfs_mkfs -p lock_gulm -t alpha:gfstest -j 3 /dev/pool/storage
>#mount -t gfs /dev/pool/storage /gfs1
>
>After I mounted it, I can see the 70 GB of storage and I can use it.
>
>We also want some RAID1- or RAID5-like redundancy in case of a hard
>drive failure.
>
>So it would be more like this:
>We have three nodes, kh00, kh01, kh02, each with 35 GB, that together
>give us a 70 GB GFS with 35 GB of parity/redundancy.
>Or, more likely, it would be like this:
>We have four nodes, kh00, kh01, kh02, kh03, each with 35 GB, that are
>paired up in redundant copies, yielding 70 GB of usable storage.
>
>What should I do using GNBD with GFS?
>Could you give me some suggestions?
>
>Shih-Che
--
Shih-Che Huang