[Linux-cluster] SSI, Virtual Servers, ShareRoot, Etc

isplist at logicore.net
Fri Jan 25 04:55:24 UTC 2008


> Indeed. That sounds rather like you were using a SAN just for the sake of
> using a SAN, and taking all the disadvantages without any of the
> advantages.

I'm not sure what you mean. I use an FC SAN because it allows me to separate 
the storage from the machine. 
What I was hoping to do with the partitions was to give each blade its own 
boot and scratch space, then let each blade use shared storage for the rest. 
I was hoping to boot the machines from FC or perhaps PXE. Then someone 
mentioned OpenSharedroot, which sounded more interesting than carving a 
chassis up into a bunch of complicated partitions. I just never got back to 
it, but I want to now. I badly want to eliminate the drives from each blade 
in favor of PXE or FC boot with something such as a sharedroot.
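
For what it's worth, the PXE side of that seems simple enough: a DHCP/TFTP 
server handing each blade a pxelinux config that points at the shared root. 
Something along these lines is what I have in mind (the kernel, initrd and 
root device names below are placeholders, not a working OSR setup; the howto 
presumably has the real parameters):

  # /tftpboot/pxelinux.cfg/default  (names are illustrative only)
  DEFAULT sharedroot
  LABEL sharedroot
      KERNEL vmlinuz-cluster
      APPEND initrd=initrd-sharedroot.img ip=dhcp root=/dev/vg_san/sharedroot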

> A shared root system would be better because then I don't have
> to have fixed partitions, just files.
> Then, each node would have its storage over other storage chassis on 
> the network.

> Not sure what you mean there, the last two sentences didn't quite parse.
> Can you please elaborate?

These two statements you mean? 

What I meant was, first, that when I was trying this out I was not aware of 
the sharedroot project. However, I could take one of my 12-drive FC chassis 
and partition it into, say, 32 partitions. My hope was to be able to boot 32 
machines from that storage chassis. So, for the cost of running 12 drives in a 
RAID5 array, I would replace 32 drives of all sorts of sizes with just 12.

Since I was not able to boot from FC for whatever reason, I found that I 
could install a small, cheap flash IDE card into each blade for its /boot 
partition; its main / partition would then live on the storage device. This 
worked, but of course I ran into other problems. Beyond the zoning 
complications, the storage layout was static: if I needed to do something 
with one particular partition, I couldn't make any changes without touching 
all of the partitions. Not a good idea.
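
To be concrete, each blade ended up laid out roughly like this (the device 
names and sizes here are just examples from memory, not copied from a real 
box):

  # /etc/fstab on one blade (illustrative device names)
  /dev/hda1   /boot   ext3   defaults   1 2    # small flash IDE card in the blade
  /dev/sda1   /       ext3   defaults   1 1    # one partition carved out of the FC chassis
  /dev/sda2   swap    swap   defaults   0 0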

The second part was that the partition above would hold only the operating 
system; everything else would use central storage over the FC network. This 
way, I would not waste drive space on machines which didn't use as much as we 
thought they would, and reliability would be better since all machines would 
automatically gain the protection of the RAID array. 

>> maybe before going right into SSI which seems to be quite an effort to
>> get running, you might want to think about "cloning" system images by
>> using any kind of volume manager and copy on write volumes. lvm2 for
>> instance does support this.
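
If I follow the suggestion, it would be something along these lines: install 
one "golden" root image once, then hand each node a copy-on-write snapshot of 
it (the volume group and volume names below are made up):

  # Hypothetical LVM2 clone-by-snapshot of a master root image
  lvcreate -L 20G -n golden-root vg0                        # master image, installed once
  lvcreate -s -L 5G -n node01-root /dev/vg0/golden-root     # COW clone for node01
  lvcreate -s -L 5G -n node02-root /dev/vg0/golden-root     # COW clone for node02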

But my thinking is not about ease of creating servers; it is about the wasted 
resources of servers which sit there idle for the most part. They are 
important machines, yet when they aren't doing anything they are really just 
wasted resources. My curiosity was about creating a cluster of machines which 
could use all of the processing power of the others when needed. Instead of 
having a bunch of machines sitting around mostly idle, any one of them could 
take the resources it needed when something came up, making better use of 
the hardware. 

That, of course, assumes that I would be running applications which can 
actually put the resources of an SSI cluster to use.

> That would make things more complicated to set up AND more complicated to
> maintain and keep in sync afterwards. SSI isn't hard to set up. Follow the
> OSR howto and you'll have it up and running in no time.

From what I understand of SSI clusters, the applications have to be able to 
put that sort of processing to proper use. While I'm using the term SSI, I am 
really just asking everyone here for thoughts :). 

All I really mean is the direction computing is going anyhow. Build something 
very powerful (SSI, for lack of a better word), allow it to be sliced as many 
times as required (VMs), and allow all of its power to be used by any one 
request or shared among requests automatically. 
On the resources side, make it easy to add more power by simply adding 
servers, memory, storage, whatever, into the mix. Isn't this where computing 
is heading?

In my case, I just want to stop wasting so much of the power I have, and end 
up with something flexible that is easy to grow and manage. 
 
> the top of my head. It's useful to keep things like /var/cache shared. But
> that's all fairly minor stuff. The howto will get you up and running.

I'll take a peek, thanks much.
 
> The major upshot of SSI is that you only need to manage one file system,
> which means you can both use smaller disks in the nodes, and save yourself
> the hassle of keeping all the packages/libraries/configs in sync.

The very first thing I'd like to achieve is getting rid of the drives in all 
of my machines in favor of an FC HBA or PXE boot, then serving each machine's 
storage needs from central storage.

On SSI, again, this is where it is unclear to me, and perhaps I am using the 
wrong term. I understand SSI as meaning a single system image, but one which 
you can only take advantage of with special applications. In other words, a 
LAMP system would not take advantage of it.

Mike