[Linux-cluster] SSI, Virtual Servers, ShareRoot, Etc

gordan at bobich.net
Thu Jan 24 10:25:48 UTC 2008


>> What I badly need right now is a shared root style system. Perhaps where all
>> nodes boot from the FC SAN using their HBAs and all have access to GFS
>> storage all around the network.
>>
>> There are various reasons I would like to do this but one of them also
>> includes trying to save on power. Say I took 32 machines and was able to get
>> them all booting off the network without drives, then I could use a 12 drive
>> FC chassis as the boot server.

You could do that, but see the comment further down about needing local 
disks for at least scratch space. It ultimately depends on your 
application.

>> What I had worked on last year was partitioning one of these chassis into 32
>> partitions, one for each system, but I think there is a better way, maybe
>> even one that gains some benefits.

Indeed. That sounds rather like you were using a SAN just for the sake of 
using a SAN, and taking all the disadvantages without any of the 
advantages.

>> The problem with that was that partitions were
>> fixed and inaccessible as individual partitions once formatted on the storage
>> chassis. A shared root system would be better because then I don't have to
>> have fixed partitions, just files. Then, each node would have its storage
>> over other storage chassis on the network. This is what I would like to
>> achieve, so far, without success.

Not sure what you mean there; the last two sentences didn't quite parse. 
Can you please elaborate?

> maybe before going right into SSI which seems to be quite an effort to
> get running, you might want to think about "cloning" system images by
> using any kind of volume manager and copy on write volumes. lvm2 for
> instance does support this.

That would make things more complicated to set up AND more complicated to 
maintain and keep in sync afterwards. SSI isn't hard to set up. Follow the 
OSR (Open-Sharedroot) howto and you'll have it up and running in no time.

The only thing I'd do differently from the howto is that I wouldn't 
unshare the whole of /var, but only /var/log, /var/lock and /var/run (off 
the top of my head). It's useful to keep things like /var/cache shared. But 
that's all fairly minor stuff; the howto will get you up and running.
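
To show what I mean, this is roughly what the unsharing amounts to under 
the hood: a per-node directory on the shared root, bind-mounted over the 
shared path at boot. The paths below are only illustrative; the OSR 
tooling sets these host-dependent areas up for you, so treat this as a 
sketch of the idea rather than the actual mechanism:

    # per-node area on the shared root, keyed on the hostname
    mkdir -p /cluster/cdsl/$(uname -n)/var/{log,lock,run}

    # bind-mount the node-private directories over the shared ones at boot
    mount --bind /cluster/cdsl/$(uname -n)/var/log  /var/log
    mount --bind /cluster/cdsl/$(uname -n)/var/lock /var/lock
    mount --bind /cluster/cdsl/$(uname -n)/var/run  /var/run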

> you basically create one master image, do all the basic configuration,
> such as routes, connection to your ldap server, whatever, and then just
> make snapshots of that image for every node / vm you want to create.
> you can make those snapshots writeable, which basically creates a
> copy-on-write datafile.
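
For anyone who hasn't used it, that LVM2 approach looks roughly like this 
(volume group name and sizes are just placeholders):

    # create and populate the master image once
    lvcreate -L 20G -n master vg0
    mkfs.ext3 /dev/vg0/master
    # ... install and configure the OS on /dev/vg0/master ...

    # then carve off a writeable copy-on-write snapshot per node
    lvcreate -s -L 2G -n node01 /dev/vg0/master
    lvcreate -s -L 2G -n node02 /dev/vg0/master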

Or you could just create a shared root volume after you've set up the 
first one, and get everything talking to that. The only minor downside is 
that you still need a local disk for the initrd base root (otherwise you 
waste about 120MB of RAM), swap and /tmp, plus any other major file 
systems you want unshared for performance (e.g. DB copies that are 
replicated outside of the clustering framework).
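
To give a rough idea, the local disk in that setup ends up holding 
something like this (device names and sizes are only an example, not a 
recommendation):

    # /etc/fstab on each node: swap, /tmp and any unshared scratch space
    /dev/sda1   swap        swap    defaults    0 0
    /dev/sda2   /tmp        ext3    defaults    0 0
    /dev/sda3   /scratch    ext3    defaults    0 0
    # the shared GFS root itself is brought up by the initrd, not from here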

The major upshot of SSI is that you only need to manage one file system, 
which means you can both use smaller disks in the nodes and save yourself 
the hassle of keeping all the packages/libraries/configs in sync.

Gordan



