[Linux-cluster] dump(8) for GFS2

Steven Whitehouse swhiteho at redhat.com
Mon Mar 29 08:36:30 UTC 2010


Hi,

On Sun, 2010-03-28 at 02:58 +0000, Jankowski, Chris wrote:
> Steve, 
> 
> dump(8) cetrainly does (not* work on RHEL 5.4 and GFS2. It complains about wrong information in the superblock.
> 
> It is very old code. I very much doubt that it uses FIEMAP, which is a relatively recent development.
> 
> Thanks and regards,
> 
> Chris
> 
Ah, so it seems that it might be trying to read the superblock directly.
That is generally a bad idea for GFS2, since there is unlikely to be any
useful information in the superblock, and reading any other disk block
on the fs requires locking.

Either patching dump to use FIEMAP to map sparse files, or writing a
replacement, sounds like the right way to go. I don't know enough about
the current dump code to comment on which would be easier.
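For what it's worth, a backup tool doesn't necessarily need the FIEMAP
ioctl itself to skip holes: the SEEK_DATA/SEEK_HOLE lseek interface is a
simpler relative that's usable from plain userspace code. Below is a
minimal sketch (not dump's code, just an illustration of the
hole-skipping principle) that enumerates only the data runs of a sparse
file:

```python
# Sketch only: enumerate the data extents of a sparse file using
# SEEK_DATA/SEEK_HOLE (a simpler relative of the FIEMAP ioctl).
# On filesystems without hole support, the kernel's generic fallback
# reports the whole file as a single data extent.
import os
import tempfile

def map_data_extents(path):
    """Return (offset, length) runs of real data, skipping holes."""
    extents = []
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        while offset < size:
            try:
                data = os.lseek(fd, offset, os.SEEK_DATA)
            except OSError:      # ENXIO: nothing but holes remain
                break
            hole = os.lseek(fd, data, os.SEEK_HOLE)
            extents.append((data, hole - data))
            offset = hole
    finally:
        os.close(fd)
    return extents

# Make a file with a 1 GiB apparent size but only a few bytes of data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"real data")
    f.flush()
    f.truncate(1024 ** 3)
    path = f.name

extents = map_data_extents(path)
print(extents)    # only the data runs, not the 1 GiB of nulls
os.unlink(path)
```

A dump replacement built this way would read and archive only the
mapped extents, so a 5TB file holding a few kB of data costs a few kB
of I/O rather than 5TB of nulls.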

Steve.

> -----Original Message-----
> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Steven Whitehouse
> Sent: Saturday, 27 March 2010 00:21
> To: linux clustering
> Subject: Re: [Linux-cluster] dump(8) for GFS2
> 
> Hi,
> 
> On Fri, 2010-03-26 at 02:48 +0000, Jankowski, Chris wrote:
> > Hi,
> > 
> > Question:
> > ---------
> > Are there any plans to develop a backup utility working on the same principles as dump(8) does for ext3? That is, backing up by walking the block structure contained in the inodes, instead of just reading the files the way tar(1), cpio(1) and others do.
> > 
> Does dump use FIEMAP? If so it should "just work" on recent GFS2,
> 
> Steve.
> 
> 
> > I need dump(8) to deal with a specific problem created by the customer's application. A library used by the application has a bug which manifests itself in the unpredictable creation of huge sparse files. For example, the application may create a sparse file of 5TB with only a few kB of data in it. There may be tens of those files created in the database. GNU tar handles sparse files correctly and will recreate them as sparse files too.  This is fine.  But it still needs to read all of those nulls, and that is done at a rate of about 1.5TB per hour on this system. With 100+ TB of apparent size across all of those sparse files, my backup would take about 3 days to complete.
> > 
> > By comparison, dump(8) would deal with this situation perfectly well. It knows the inodes of the files, will follow the few blocks that exist in a sparse file and back up only that data.  It does not have to read through the tons of nulls happily delivered by the OS, as tar(1) does.
> > 
> > Regards,
> > 
> > Chris
> > 
> > 
> > 
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> 
