[Linux-cluster] Cluster vs Distributed? & MySQL Cluster?

Michael Will mwill at penguincomputing.com
Wed Oct 25 20:02:12 UTC 2006


Are the actual data files shared in this setup between the active mysql 
daemons?

Last time I looked into this, it seemed that with the shared-nothing model
each mysql daemon would have to keep its own copy of the data, and updates
would be propagated from active to passive daemons (master-slave model)
or between active daemons (NDB in-RAM database model).
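
A minimal sketch of the master-to-slave propagation path under that model
(the db-master host, repl account, and binary log coordinates are all
hypothetical), run on the slave:

#!/bin/sh
# Point this slave's replication thread at the master's binary log;
# updates then flow from the active daemon to this passive copy.
mysql --user=root --password=secret <<'EOF'
CHANGE MASTER TO
    MASTER_HOST='db-master',
    MASTER_USER='repl',
    MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;
EOF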

Are the mysql daemons running on the GFS I/O nodes that have access to
shared storage via SAN or iSCSI and coordinate locking through the GFS
infrastructure? Or are the mysql daemons running on client nodes that use
GFS to remotely access storage provided by other GFS I/O nodes, which in
turn have access to shared storage via SAN or iSCSI?

Michael

David Brieck Jr. wrote:
> On 10/25/06, isplist at logicore.net <isplist at logicore.net> wrote:
>
>> PS: I saw someone asking about sharing data on MySQL; that's something
>> I'd love to do. In fact, I'd like to get rid of the big box IBM servers
>> in favor of smaller blade servers. Problem is, the blade servers don't
>> allow for much memory, from 512MB to 2GB; the IBMs allow for 5GB. But I
>> wonder if I could still get away with many low-memory MySQL servers
>> sharing GFS storage? I would guess that one or more would write but
>> that many could read.
>>
>> Mike
>>
>
> So far things seem to be working fairly well with multiple active
> MySQL servers. You can't use the query cache (for obvious reasons) and
> you can't use InnoDB tables, but for the most part it's working well.
> The one thing I ran into that I didn't anticipate is that after you
> add, edit, or remove a user or grants, you need to flush the privileges
> on all the servers manually. I haven't found a configuration option to
> tell MySQL not to cache those values, so I'll probably have to either
> modify my scripts to flush automatically after these actions or run a
> cron job on the nodes every 10 minutes or so to keep everything in
> sync.
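>
> A minimal sketch of that manual flush, assuming hypothetical node
> names (db1 db2 db3) and that the piranha account has the RELOAD
> privilege:
>
> #!/bin/sh
> # After any user or grant change, flush the in-memory grant tables
> # on every node so they all pick up the change.
> for NODE in db1 db2 db3; do
>     /usr/bin/mysqladmin --user=piranha --password=piranha \
>         --host="$NODE" flush-privileges
> done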
>
> I did manage to get LVS DR (direct routing) working after some initial
> trouble. One thing I should note: you probably want to enable
> persistence, otherwise you really seem to take a performance hit.
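>
> For reference, a sketch of enabling persistence directly with
> ipvsadm (the VIP, real-server addresses, and 300-second timeout are
> made-up values):
>
> # Direct-routing virtual service for MySQL with 300s client
> # persistence, so a given client sticks to one real server.
> ipvsadm -A -t 192.168.0.100:3306 -s wlc -p 300
> ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.11:3306 -g
> ipvsadm -a -t 192.168.0.100:3306 -r 192.168.0.12:3306 -g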
>
> Here's the script I use to check to see if a server is alive:
>
> #!/bin/sh
> # Health check: ping the mysqld on the host passed as $1 and count
> # the "mysqld is alive" line in the response.
> TEST=`/usr/bin/mysqladmin --user=piranha --password=piranha ping \
>     --host="$1" | grep -c "mysqld is alive"`
>
> # Plain sh compares strings with a single =, not ==.
> if [ "$TEST" = "1" ]; then
>        echo "OK"
> else
>        echo "FAIL"
> fi
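>
> If this feeds piranha's health checks, the hookup would look roughly
> like this in lvs.cf (the script path is assumed; %h expands to the
> real server's address):
>
> # excerpt from a virtual server block in /etc/sysconfig/ha/lvs.cf
> send_program = "/usr/local/bin/check_mysql.sh %h"
> expect = "OK"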
>
>
>
> One thing about servers with smaller amounts of RAM: it won't matter
> how many small servers you have. If you have queries that constantly
> have to load large tables into memory (mainly for sorts) and you don't
> have enough RAM, your server will probably crawl.
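>
> A quick way to check whether that's happening, using the same
> (assumed) credentials as the check script above:
>
> # A Created_tmp_disk_tables count that climbs quickly relative to
> # Created_tmp_tables means sorts are overflowing memory to disk.
> /usr/bin/mysqladmin --user=piranha --password=piranha extended-status \
>     | grep -E 'Created_tmp(_disk)?_tables'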
>
> I should note we're just running our DNS servers (MyDNS) and our
> SpamAssassin database on it, but so far no problems. It was even
> inadvertently tested one night and everything worked perfectly. We'll
> know more once some of our larger databases are moved over.
>



