[Linux-cluster] Re: [linux-lvm] Distributed LVM/filesystem/storage
s.wendy.cheng at gmail.com
Sun Jun 1 13:50:26 UTC 2008
Jan-Benedict Glaw wrote:
> On Sat, 2008-05-31 23:12:21 -0500, Wendy Cheng <s.wendy.cheng at gmail.com> wrote:
>> Jan-Benedict Glaw wrote:
>>> On Fri, 2008-05-30 09:03:35 +0100, Gerrard Geldenhuis <Gerrard.Geldenhuis at datacash.com> wrote:
>>>> On Behalf Of Jan-Benedict Glaw
>>>>> I'm just thinking about using my friend's overly empty harddisks for a
>>>>> common large filesystem by merging them all together into a single,
>>>>> large storage pool accessible by everybody.
>>>>> It would be nice to see if anybody of you did the same before (merging
>>>>> the free space from a lot of computers into one commonly used large
>>>>> filesystem), if it was successful and what techniques
>>>>> (LVM/NBD/DM/MD/iSCSI/Tahoe/Freenet/Other P2P/...) you used to get there,
>>>>> and how well that worked out in the end.
>>>> Maybe have a look at GFS.
>>> GFS (or GFS2 fwiw) imposes a single, shared storage as its backend. At
>>> least I get that from reading the documentation. This would result in
>>> merging all the single disks via NBD/LVM to one machine first and
>>> export that merged volume back via NBD/iSCSI to the nodes. In case the
>>> actual data is local to a client, it would still be first sent to the
>>> central machine (running LVM) and loaded back from there. Not as
>>> distributed as I hoped, or are there other configuration possibilities
>>> to not go that route?
>> However, with its symmetric architecture,
>> nothing prevents it from running on top of a group of iSCSI disks (with
>> GFS node as initiator), as long as each node can see and access these
>> disks. It doesn't care where the iSCSI targets live, nor how many there are.
> So I'd configure each machine's empty disk/partition as an iSCSI
> target and let them show up on every "client" machine and run that
> setup. How well will GFS deal with a temporary (or total) outage of
> single targets? E.g. 24h disconnects with ADSL connectivity etc.?
High availability will not work well in this particular setup - GFS is
more about data and storage sharing between its nodes.
Note that GFS normally runs on top of CLVM (clustered LVM, in case you
don't know about it). You might want to check current (Linux) CLVM RAID
level support to see whether it fits your needs.
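For reference, the stack described above would be set up roughly like this. This is only a sketch, not a tested recipe: the device paths, IQN, IP address, cluster name, and journal count are all placeholders, the target-side commands assume the tgt project's tgtd/tgtadm (your distribution may ship different iSCSI target tooling), and clvmd plus the cluster infrastructure must already be running on all nodes.

```shell
## On each machine donating a disk: export the spare partition as an
## iSCSI target (tgtd must be running; IQN and device are placeholders).
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2008-06.example:spare-disk
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /dev/sdb1
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

## On every GFS node: discover the targets and log in, so each
## remote disk shows up as a local SCSI device.
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node -T iqn.2008-06.example:spare-disk \
         -p 192.168.1.10 --login

## On one node, with clvmd active cluster-wide: pool the imported
## disks into a clustered volume group (-c y marks it clustered).
pvcreate /dev/sdc /dev/sdd
vgcreate -c y vg_shared /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n lv_gfs vg_shared

## Make the GFS filesystem (DLM locking, cluster name "mycluster",
## one journal per node), then mount it on every node.
gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 4 /dev/vg_shared/lv_gfs
mount -t gfs /dev/vg_shared/lv_gfs /mnt/shared
```

Note that a plain (linear) clustered LV like the one above offers no redundancy: if any single exported disk goes away, the filesystem on top of it is affected, which is why the RAID-level question matters here.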