[Linux-cluster] Backup GFS File system

Ben Yarwood ben.yarwood at juno.co.uk
Tue May 22 14:58:07 UTC 2007


We are using rsync to synchronise certain directories from the GFS file systems, as we don’t need a full clone of the disks.  In the past we have done this onto an ext3 file system, but that has the obvious limitation that it can’t be mounted by the original cluster if the primary storage fails.
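
For reference, a rough sketch of the sort of rsync job we run (the mount points and options below are illustrative rather than our exact script):

#!/bin/bash
# Sketch only: sync selected directories from the GFS mounts to the
# backup file system.  SRC_DIRS and DEST are made-up example paths.
SRC_DIRS="/gfs/audio /gfs/catalogue"
DEST="/backup"

for DIR in ${SRC_DIRS}; do
        mkdir -p "${DEST}${DIR}"
        rsync -aH --delete --numeric-ids "${DIR}/" "${DEST}${DIR}/"
done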

Yes, we are planning to do the backup from another server outside of the cluster.

Thanks for the info.

Ben


> -----Original Message-----
> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of
> rhurst at bidmc.harvard.edu
> Sent: 22 May 2007 15:36
> To: linux-cluster at redhat.com
> Subject: Re: [Linux-cluster] Backup GFS File system
> 
> Are you cloning the disk(s)?  If so, are you backing up the volumes on another server, outside of the
> cluster?  That is our implementation, and we came up with a process that allows for that.  You need to
> install the GFS/CS components on that server, but do not need to run them (ccs / cman / fence, etc.).  I
> marked the volumes with the lock_nolock protocol for mounting, although there is a way to use mount
> arguments to override what is on the volume (man gfs_mount).
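> 
> For example, something along these lines overrides the lock protocol at mount time instead of rewriting the superblock (illustrative only; see gfs_mount(8) for the exact options, device path and mount point taken from the script below):
> 
> sudo mount -t gfs -o lockproto=lock_nolock /dev/VGWATSON-CLONE/lvol0 /bcv/ccc/watson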
> 
> Here's a snippet of a script we execute on the backup media server, after the clone is completed:
> 
> 
> 
> CLONE_NAME="watson-clone"
> VG="/dev/VGWATSON"
> 
> # Wait for the cloned PowerPath device to appear, retrying a few times
> for RETRY in `seq 3 -1 0`; do
>         DEVICE="`sudo ${POWERMT} display dev=all | grep -B 3 ${CLONE_NAME} | grep name=emcpower | awk -F= '{print $2}'`"
>         [ -n "${DEVICE}" ] && break;
>         sleep 5
> done
> if [ -z "${DEVICE}" ]; then
>         echo "NO PowerPath device found for ${CLONE_NAME}"
>         exit -1
> fi
> # Flag the clone's partition for LVM, rename the cloned VG, and activate it
> sudo ${PARTED} /dev/${DEVICE} set 1 lvm on
> sudo ${VGRENAME} ${VG} ${VG}-CLONE 2> /dev/null
> sudo ${VGCHANGE} -a y --ignorelockingfailure ${VG}-CLONE 2> /dev/null
> # Switch each GFS superblock to lock_nolock (the here-doc answers 'y' to the prompt)
> sudo ${GFS_TOOL} sb ${VG}-CLONE/lvol0 proto lock_nolock <<-EOD
> y
> EOD
> sudo ${GFS_TOOL} sb ${VG}-CLONE/lvol1 proto lock_nolock <<-EOD
> y
> EOD
> # Mount the cloned file systems for the backup run
> sudo mount -t gfs ${VG}-CLONE/lvol0 /bcv/ccc/watson
> sudo mount -t gfs ${VG}-CLONE/lvol1 /bcv/ccc/watson/wav
> sudo mount -t ext3 ${VG}-CLONE/lvoldata /bcv/ccc/watson-data
> sudo mount -t ext3 ${VG}-CLONE/lvoldb1 /bcv/ccc/watson-data/sys/db1
> 
> 
> On Tue, 2007-05-22 at 14:56 +0100, Ben Yarwood wrote:
> 
> 
> 	I intend to create a backup of my GFS6.1 file systems (3 node cluster) on a single backup machine and wanted to check some facts.
> 
> 
> 	1.  To run a GFS filesystem with nolock as the lock protocol, do I need the rest of the cluster infrastructure?
> 
> 
> 	2.  If I do need the rest of the cluster infrastructure, can you have a one-node cluster?
> 
> 
> 	3.  In the event of the primary storage failing and the backup being used, can I convert the lock protocol to dlm using gfs_tool,
> 	e.g. gfs_tool sb /dev/sdx proto lock_dlm ?
> 
> 
> 	Thanks
> 	Ben
> 
> 
> 
> 
> 
> 
> 	--
> 	Linux-cluster mailing list
> 	Linux-cluster at redhat.com
> 	https://www.redhat.com/mailman/listinfo/linux-cluster
> 
> 
> Robert Hurst, Sr. Caché Administrator
> Beth Israel Deaconess Medical Center
> 1135 Tremont Street, REN-7
> Boston, Massachusetts   02120-2140
> 617-754-8754 ∙ Fax: 617-754-8730 ∙ Cell: 401-787-3154
> Any technology distinguishable from magic is insufficiently advanced.