From lists at nerdbynature.de Fri May 1 21:09:21 2009
From: lists at nerdbynature.de (Christian Kujau)
Date: Fri, 1 May 2009 14:09:21 -0700 (PDT)
Subject: Undeletion utility for ext3/4
In-Reply-To: <938911.2430.qm@web43504.mail.sp1.yahoo.com>
References: <938911.2430.qm@web43504.mail.sp1.yahoo.com>
Message-ID:

On Thu, 30 Apr 2009, Number9652 wrote:
> I have recently released a project on sourceforge
> ( http://extundelete.sourceforge.net ) that can undelete a file from an
> ext3 or ext4 partition. It uses code from ext3grep to parse
> command-line options, and uses libext2fs to read the partitions.
> Instead of reading the entire partition, as ext3grep does, it reads
> only the journal file and is able to restore a deleted file from the
> information there and in (possibly deleted) directory blocks. I hope it
> is of some use.

Thanks - I've added this to the collection of ext2/3 undeletion tools in
the wiki: http://ext4.wiki.kernel.org/index.php/Undeletion

Christian.
--
Bruce Schneier once broke AES using nothing but six feet of rusty barbed
wire, a toothpick, and the front axle from a 1962 Ford Falcon.

From d_baron at 012.net.il Sat May 2 20:39:56 2009
From: d_baron at 012.net.il (David Baron)
Date: Sat, 02 May 2009 23:39:56 +0300
Subject: Undeletion utility for ext3/4
In-Reply-To: <20090502160014.2816B61987E@hormel.redhat.com>
References: <20090502160014.2816B61987E@hormel.redhat.com>
Message-ID: <200905022339.59367.d_baron@012.net.il>

On Saturday 02 May 2009 19:00:14 ext3-users-request at redhat.com wrote:
> have recently released a project on sourceforge
>
> > ( http://extundelete.sourceforge.net ) that can undelete a file from an
> > ext3 or ext4 partition. It uses code from ext3grep to parse
> > command-line options, and uses libext2fs to read the partitions.
> > Instead of reading the entire partition, as ext3grep does, it reads
> > only the journal file and is able to restore a deleted file from the
> > information there and in (possibly deleted) directory blocks. I hope it
> > is of some use.
>
> Thanks - I've added this to the collection of ext2/3 undeletion tools in
> the wiki: http://ext4.wiki.kernel.org/index.php/Undeletion

Looks good. Never had much luck with ext3grep.

However, I cannot compile it. It has some huge hex constants that will
not fit the usual long types. Does this code need a 64-bit kernel, or
can I do something to the #defined I64 type to get this to compile?

From sandeen at redhat.com Sat May 2 21:15:12 2009
From: sandeen at redhat.com (Eric Sandeen)
Date: Sat, 02 May 2009 16:15:12 -0500
Subject: Undeletion utility for ext3/4
In-Reply-To: <200905022339.59367.d_baron@012.net.il>
References: <20090502160014.2816B61987E@hormel.redhat.com> <200905022339.59367.d_baron@012.net.il>
Message-ID: <49FCB7E0.2060404@redhat.com>

David Baron wrote:
> On Saturday 02 May 2009 19:00:14 ext3-users-request at redhat.com wrote:
>> have recently released a project on sourceforge
>>
>>> ( http://extundelete.sourceforge.net ) that can undelete a file from an
>>> ext3 or ext4 partition. It uses code from ext3grep to parse
>>> command-line options, and uses libext2fs to read the partitions.
>>> Instead of reading the entire partition, as ext3grep does, it reads
>>> only the journal file and is able to restore a deleted file from the
>>> information there and in (possibly deleted) directory blocks. I hope it
>>> is of some use.
>> Thanks - I've added this to the collection of ext2/3 undeletion tools in
>> the wiki: http://ext4.wiki.kernel.org/index.php/Undeletion
>
> Looks good. Never had much luck with ext3grep.
>
> However, I cannot compile it. It has some huge hex constants that will
> not fit the usual long types. Does this code need a 64-bit kernel, or
> can I do something to the #defined I64 type to get this to compile?

I haven't really looked over it in any detail, but try sticking a "ULL"
on the end of those constants.

-Eric
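A minimal illustration of the fix Eric suggests, with an invented
constant (extundelete's actual values differ): on a target where long is
32 bits, an older C++ compiler can reject a 64-bit hex literal that
carries no suffix ("integer constant is too large for 'long' type"),
while appending ULL makes it an unsigned long long everywhere:

    #include <stdio.h>

    /* Hypothetical constant, for illustration only -- not taken from
     * extundelete's sources.  Without the ULL suffix, an older 32-bit
     * C++ compiler may reject or truncate this literal. */
    #define BIG_MASK 0xFFFFFFFF00000000ULL

    int main(void)
    {
        unsigned long long v = BIG_MASK;
        printf("%llx\n", v);   /* prints ffffffff00000000 on any target */
        return 0;
    }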
From lakshmipathi.g at gmail.com Sun May 3 05:22:27 2009
From: lakshmipathi.g at gmail.com (lakshmi pathi)
Date: Sun, 3 May 2009 01:22:27 -0400
Subject: How to build a new root file system?
Message-ID:

Hi,
I'm trying to create a new root file system for my kernel. How do I do
that? Any documentation links? I have spent some time with Linux From
Scratch - but messed up somewhere. Should I proceed with LFS again?

I'm trying to find a solution for the question posted here:
http://www.linuxforums.org/forum/linux-kernel/144378-booting-custom-kernel-new-file-system-new-post.html

--
Cheers,
Lakshmipathi.G

From lists at nerdbynature.de Sun May 3 17:03:44 2009
From: lists at nerdbynature.de (Christian Kujau)
Date: Sun, 3 May 2009 10:03:44 -0700 (PDT)
Subject: [OT] Re: Undeletion utility for ext3/4
In-Reply-To: <938911.2430.qm@web43504.mail.sp1.yahoo.com>
References: <938911.2430.qm@web43504.mail.sp1.yahoo.com>
Message-ID:

On Thu, 30 Apr 2009, Number9652 wrote:
> I have recently released a project on sourceforge
> ( http://extundelete.sourceforge.net ) that can undelete a file from an
> ext3 or ext4 partition. It uses code from ext3grep to parse
> command-line options, and uses libext2fs to read the partitions.

Hm, compiling with g++ 4.4 gave me a few compile errors[0] - the patch
attached "fixes" them, but when extundelete is actually used, it crashes:

# ./extundelete /dev/md0
Running extundelete version 0.0.3
extundelete: extundelete.cc:894: void load_super_block(struct_ext2_filsys*):
Assertion `(super_block.s_feature_compat & 0x0004)' failed.
Aborted

...but maybe that has been caused by the patch. Hm.

Christian.

[0] http://nerdbynature.de/bits/extundelete/
--
All infinite sets are countable -- by Bruce Schneier.

From adilger at sun.com Sun May 3 17:35:51 2009
From: adilger at sun.com (Andreas Dilger)
Date: Sun, 03 May 2009 11:35:51 -0600
Subject: [OT] Re: Undeletion utility for ext3/4
In-Reply-To:
References: <938911.2430.qm@web43504.mail.sp1.yahoo.com>
Message-ID: <20090503173551.GO3209@webber.adilger.int>

On May 03, 2009 10:03 -0700, Christian Kujau wrote:
> On Thu, 30 Apr 2009, Number9652 wrote:
> > I have recently released a project on sourceforge
> > ( http://extundelete.sourceforge.net ) that can undelete a file from an
> > ext3 or ext4 partition. It uses code from ext3grep to parse
> > command-line options, and uses libext2fs to read the partitions.
>
> Hm, compiling with g++ 4.4 gave me a few compile errors[0] - the patch
> attached "fixes" them, but when extundelete is actually used, it crashes:
>
> # ./extundelete /dev/md0
> Running extundelete version 0.0.3
> extundelete: extundelete.cc:894: void load_super_block(struct_ext2_filsys*):
> Assertion `(super_block.s_feature_compat & 0x0004)' failed.
> Aborted
>
> ...but maybe that has been caused by the patch. Hm.

This is probably due to a new ext4 feature. Look at this line of the
code and see what feature it is checking for.

> Christian.
>
> [0] http://nerdbynature.de/bits/extundelete/
> --
> All infinite sets are countable -- by Bruce Schneier.
> diff -Nrup extundelete-0.0.3/src/insertionops.cc extundelete-0.0.3.edited/src/insertionops.cc
> --- extundelete-0.0.3/src/insertionops.cc  2009-04-28 20:17:32.000000000 +0200
> +++ extundelete-0.0.3.edited/src/insertionops.cc  2009-05-03 12:54:14.000000000 +0200
> @@ -8,6 +8,8 @@
>  #include
>  #include "kernel-jbd.h"
>  #include "undel.h"
> +#include
> +#include
>
>  // Below are a bunch of functions to allow us to print information
>  // about various types of data we encounter in this program.
> diff -Nrup extundelete-0.0.3/src/undel-priv.h extundelete-0.0.3.edited/src/undel-priv.h
> --- extundelete-0.0.3/src/undel-priv.h  2009-04-28 20:17:32.000000000 +0200
> +++ extundelete-0.0.3.edited/src/undel-priv.h  2009-05-03 12:50:39.000000000 +0200
> @@ -4,6 +4,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  // Global variables
>  #ifdef USE_SVN
>
> _______________________________________________
> Ext3-users mailing list
> Ext3-users at redhat.com
> https://www.redhat.com/mailman/listinfo/ext3-users

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

From lists at nerdbynature.de Sun May 3 22:08:10 2009
From: lists at nerdbynature.de (Christian Kujau)
Date: Sun, 3 May 2009 15:08:10 -0700 (PDT)
Subject: [OT] Undeletion utility for ext3/4
In-Reply-To: <20090503173551.GO3209@webber.adilger.int>
References: <938911.2430.qm@web43504.mail.sp1.yahoo.com> <20090503173551.GO3209@webber.adilger.int>
Message-ID:

On Sun, 3 May 2009, Andreas Dilger wrote:
> This is probably due to a new ext4 feature. Look at this line of the
> code and see what feature it is checking for.

Hm, at extundelete.cc:894 we have:

    // File system must have a journal.
    assert((super_block.s_feature_compat & EXT3_FEATURE_COMPAT_HAS_JOURNAL));
    if ((super_block.s_feature_compat & EXT2_FEATURE_COMPAT_DIR_PREALLOC))
      std::cout << "WARNING: I don't know what EXT2_FEATURE_COMPAT_DIR_PREALLOC is.\n";

EXT3_FEATURE_COMPAT_HAS_JOURNAL is a standard ext3/4 feature; I'm not
sure about the EXT2_FEATURE_COMPAT_DIR_PREALLOC thing.

Thanks for replying, Andreas - but I imagine this is better discussed on
the extundelete lists - sorry for the noise.

Christian.
--
Bruce Schneier has found SHA-512 preimages of all these facts.
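The flag behind that assertion can also be inspected directly with
libext2fs, the same library extundelete links against. A minimal sketch,
assuming a standard e2fsprogs installation (built with something like:
gcc check_journal.c -lext2fs -o check_journal); error handling is kept
to the bare minimum:

    #include <stdio.h>
    #include <ext2fs/ext2fs.h>

    int main(int argc, char *argv[])
    {
        ext2_filsys fs;
        errcode_t err;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <device>\n", argv[0]);
            return 1;
        }
        /* Open read-only; superblock/blocksize of 0 mean "use defaults". */
        err = ext2fs_open(argv[1], 0, 0, 0, unix_io_manager, &fs);
        if (err) {
            fprintf(stderr, "ext2fs_open failed: %ld\n", (long) err);
            return 1;
        }
        printf("s_feature_compat = 0x%x\n", fs->super->s_feature_compat);
        printf("has journal (0x0004): %s\n",
               (fs->super->s_feature_compat & EXT3_FEATURE_COMPAT_HAS_JOURNAL)
                   ? "yes" : "no");
        ext2fs_close(fs);
        return 0;
    }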
From sandeen at redhat.com Tue May 5 15:23:22 2009
From: sandeen at redhat.com (Eric Sandeen)
Date: Tue, 05 May 2009 10:23:22 -0500
Subject: File System Selection
In-Reply-To: <1240983206.446412847@192.168.1.35>
References: <1240983206.446412847@192.168.1.35>
Message-ID: <4A0059EA.4090904@redhat.com>

Ramesh wrote:
> Hi All,
>
> I am developing an SD Block Driver.
>
> As per the old specification (SD Spec 2.0), the maximum size of an SD
> memory card is 32 GB - we used the ext2 file system.
>
> Per the new specification (SD Spec 3.0), SD memory card size reaches up
> to and including 2 TB (terabytes) - block size strictly limited to 512
> only (as per the specification).
>
> My questions:
>
> 1. For a 2 TB disk with block size 512, which file system is preferred
> (ext3/ext4)?

do you mean sector size of the block device, or block size of the
filesystem?

I guess it doesn't matter much either way, 2^32*512 is 2T. Either ext3
or ext4 can handle this size, you'll probably need to make your decision
based on other factors.

> 2. On a 32-bit machine, if I install Fedora 10 (having ext4), am I able
> to use it as effectively (for the maximum disk/file size usage)? To
> utilize a 2 TB or larger hard disk, is it allowable to use a 32-bit
> machine with the ext4 fs?

On a 32 bit machine you will be limited to 16T, this is actually a page
cache limitation. But 2T should be fine.

-Eric

> Thanks in advance.
>
> Regards, Ramesh

From ross at biostat.ucsf.edu Tue May 5 18:40:03 2009
From: ross at biostat.ucsf.edu (Ross Boylan)
Date: Tue, 05 May 2009 11:40:03 -0700
Subject: Some inode questions
Message-ID: <1241548803.11137.12.camel@iron.psg.net>

When I first created /var I took all the defaults. I have since decided
that, since it will hold a cyrus mail spool (each message is a file), I
should use something with more inodes. I created a new (var2) partition
and formatted it with
# mkfs.ext3 -T news /dev/mapper/turtle-var2_crypt
# news has inode_ratio = 4096

Then I mounted and rsync'd from my existing /var.

Afterwards, I get a report that seems to indicate I've used almost no
inodes. It also shows more inodes than blocks; is there any way one
could need more than one inode/block?

# dumpe2fs -h /dev/mapper/turtle-var2_crypt
dumpe2fs 1.41.3 (12-Oct-2008)
Filesystem volume name:   
Last mounted on:          
Filesystem UUID:          823219cf-30dc-42f9-ac96-1112bc7fe070
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index
                          filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              6291456
Block count:              6291199
Reserved block count:     314559
Free blocks:              5853517
Free inodes:              6291445
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1022
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         32768
Inode blocks per group:   2048
Filesystem created:       Tue May  5 11:10:32 2009
Last mount time:          Tue May  5 11:15:34 2009
Last write time:          Tue May  5 11:15:34 2009
Mount count:              1
Maximum mount count:      23
Last checked:             Tue May  5 11:10:32 2009
Check interval:           15552000 (6 months)
Next check after:         Sun Nov  1 10:10:32 2009
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      95194688-7a78-4087-b013-1aec3e7a8436
Journal backup:           inode blocks
Journal size:             128M

As I read this, 6291445 of 6291456 inodes are free, so 11 are in use.
The comparable calculation on the original file system shows about 8,500
inodes in use.

I'm on a 2.6.29 kernel, building file systems on top of encrypted
partitions, which in turn are on LVM logical volumes. e2fsprogs 1.41.3-1
on Debian Lenny, amd64 architecture, Xeon chips.

Can anyone explain what's going on? Is -h not giving me the complete
inode story?

Thanks.
--
Ross Boylan                                      wk:  (415) 514-8146
185 Berry St #5700                               ross at biostat.ucsf.edu
Dept of Epidemiology and Biostatistics           fax: (415) 514-8150
University of California, San Francisco
San Francisco, CA 94107-1739                     hm:  (415) 550-1062
From adilger at sun.com Tue May 5 21:59:45 2009
From: adilger at sun.com (Andreas Dilger)
Date: Tue, 05 May 2009 15:59:45 -0600
Subject: Some inode questions
In-Reply-To: <1241548803.11137.12.camel@iron.psg.net>
References: <1241548803.11137.12.camel@iron.psg.net>
Message-ID: <20090505215945.GQ3209@webber.adilger.int>

On May 05, 2009 11:40 -0700, Ross Boylan wrote:
> When I first created /var I took all the defaults. I have since decided
> that, since it will hold a cyrus mail spool (each message is a file), I
> should use something with more inodes. I created a new (var2) partition
> and formatted it with
> # mkfs.ext3 -T news /dev/mapper/turtle-var2_crypt
> # news has inode_ratio = 4096
>
> Then I mounted and rsync'd from my existing /var.
> Afterwards, I get a report that seems to indicate I've used almost no
> inodes. It also shows more inodes than blocks; is there any way one
> could need more than one inode/block?

Hard links, or empty files...

> As I read this, 6291445 of 6291456 inodes are free, so 11 are in use.
> The comparable calculation on the original file system shows about 8,500
> inodes in use.

Indeed, it seems your new filesystem is empty. That said, the superblock
contents are not updated on disk while the filesystem is mounted. I have
argued that since we are already computing the superblock totals and
storing them into the superblock it wouldn't be harmful to write the
superblock to disk occasionally in ext[34]_statfs() by calling at the end:

	ext[34]_commit_super(sb, es, 0);

I don't think there is currently anything in ext[34] that is writing
the superblock to disk at all, except mount and unmount.

Using "df -i" should give you accurate numbers for a mounted filesystem.

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
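As a footnote to the "df -i" suggestion: df gets those numbers from the
statfs path that ext[34]_statfs serves, so a program can read the same
live counters through statvfs(3). A minimal sketch:

    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char *argv[])
    {
        struct statvfs sv;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
            return 1;
        }
        if (statvfs(argv[1], &sv) != 0) {
            perror("statvfs");
            return 1;
        }
        /* f_files/f_ffree are the same totals "df -i" reports. */
        printf("inodes total: %llu\n", (unsigned long long) sv.f_files);
        printf("inodes free:  %llu\n", (unsigned long long) sv.f_ffree);
        printf("inodes used:  %llu\n",
               (unsigned long long) (sv.f_files - sv.f_ffree));
        return 0;
    }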
From claudiu.perta at gmail.com Wed May 6 07:33:06 2009
From: claudiu.perta at gmail.com (Claudiu Perta)
Date: Wed, 06 May 2009 09:33:06 +0200
Subject: How to add undelete capabilities to an ext3 file system
Message-ID: <4A013D32.6040407@gmail.com>

Hi,

We are two students in computer science and we are working on adding
undelete capabilities to the ext3 file system. In our current solution,
we modified the ext3 kernel module and used the EXT2_UNDEL_DIR_INO
reserved inode. Basically, whenever we remove a file/directory, we first
save a copy of the inode along with the complete path of the
file/directory to remove; then we delete the inode but not the data
blocks of the file. This way it is possible to restore a previously
deleted file to its original position in the file system.

We handle deleted files with a FIFO-based policy. The size of the FIFO
queue is defined by the user when the file system is created. To avoid
having too many temporary files on the FIFO queue, a user-defined filter
is applied before saving the inode and the path of a deleted file. The
filter consists of pairs (directory name, file extensions) which can be
added/deleted online by the user (we are using ioctl() to communicate
between kernel and user space). All this information is kept in a file
accessed via the EXT2_UNDEL_DIR_INO inode; this file is not linked to
the root directory, so it is not visible to the user.

We would like to know what you think about this solution and if there is
a better approach to address this problem.

Thanks,
Antonio Davoli & Claudiu Perta

From ramesh at arasan.com Wed May 6 11:37:44 2009
From: ramesh at arasan.com (Ramesh)
Date: Wed, 6 May 2009 17:07:44 +0530 (IST)
Subject: File System Selection
Message-ID: <1241609864.806615859@192.168.1.201>

Hi Eric,

Thanks for your prompt and informative reply.

>>> do you mean sector size of the block device, or block size of the
>>> filesystem?

For our device the sector size is 4096 bytes, but the maximum allowed
data chunk to read/write is 512 (a.k.a. block size), restricted by the
specification.

By referring to the wiki page on ext3
(http://en.wikipedia.org/wiki/Ext3), I saw the table below.

Block size   Max file size   Max filesystem size
1 KiB        16 GiB          <2 TiB
2 KiB        256 GiB         <4 TiB
4 KiB        2 TiB           <8 TiB
8 KiB        2 TiB           <16 TiB

Taking the values from the table, for a 512-byte block size the maximum
file system supported would be 1 TB only. Please correct me if I assumed
wrongly.

>>> I guess it doesn't matter much either way, 2^32*512 is 2T.

In that 32 bits, isn't the MSB used as a sign bit, so that only a
maximum of 31 bits can be used? Is this correct?

>>> On a 32 bit machine you will be limited to 16T, this is actually a
>>> page cache limitation. But 2T should be fine.

Please clarify: ext4 uses 48-bit addressing. Is it necessary to go for
64-bit machines to utilize ext4 and manage a file system of up to and
including 2 TB?

Thanks in advance.

Regards, Ramesh

-----Original Message-----
From: "Eric Sandeen"
Sent: Tuesday, 5 May, 2009 8:53pm
To: "Ramesh"
Cc: ext3-users at redhat.com, linux-ext4 at vger.kernel.org
Subject: Re: File System Selection

Ramesh wrote:
> Hi All,
>
> I am developing an SD Block Driver.
>
> As per the old specification (SD Spec 2.0), the maximum size of an SD
> memory card is 32 GB - we used the ext2 file system.
>
> Per the new specification (SD Spec 3.0), SD memory card size reaches up
> to and including 2 TB (terabytes) - block size strictly limited to 512
> only (as per the specification).
>
> My questions:
>
> 1. For a 2 TB disk with block size 512, which file system is preferred
> (ext3/ext4)?

do you mean sector size of the block device, or block size of the
filesystem?

I guess it doesn't matter much either way, 2^32*512 is 2T. Either ext3
or ext4 can handle this size, you'll probably need to make your decision
based on other factors.

> 2. On a 32-bit machine, if I install Fedora 10 (having ext4), am I able
> to use it as effectively (for the maximum disk/file size usage)? To
> utilize a 2 TB or larger hard disk, is it allowable to use a 32-bit
> machine with the ext4 fs?

On a 32 bit machine you will be limited to 16T, this is actually a page
cache limitation. But 2T should be fine.

-Eric

> Thanks in advance.
>
> Regards, Ramesh

ATTENTION: The information contained in this message may be legally
privileged and confidential. It is intended to be read only by the
individual or entity to whom it is addressed or by their designee. If
the reader of this message is not the intended recipient, you are on
notice that any distribution of this message, in any form, is strictly
prohibited by law. If you have received this message in error, please
immediately notify the sender and/or Arasan Chip Systems, Inc. by
telephone at (408) 282-1600 and delete or destroy any copy of this
message.
From sandeen at redhat.com Wed May 6 15:04:19 2009
From: sandeen at redhat.com (Eric Sandeen)
Date: Wed, 06 May 2009 10:04:19 -0500
Subject: File System Selection
In-Reply-To: <1241609864.806615859@192.168.1.201>
References: <1241609864.806615859@192.168.1.201>
Message-ID: <4A01A6F3.9010306@redhat.com>

Ramesh wrote:
> Hi Eric,
>
> Thanks for your prompt and informative reply.
>
>>>> do you mean sector size of the block device, or block size of
>>>> the filesystem?
>
> For our device the sector size is 4096 bytes, but the maximum allowed
> data chunk to read/write is 512 (a.k.a. block size), restricted by the
> specification.
>
> By referring to the wiki page on ext3
> (http://en.wikipedia.org/wiki/Ext3), I saw the table below.
>
> Block size   Max file size   Max filesystem size
> 1 KiB        16 GiB          <2 TiB
> 2 KiB        256 GiB         <4 TiB
> 4 KiB        2 TiB           <8 TiB
> 8 KiB        2 TiB           <16 TiB

Above, block size means the filesystem block size. For ext3, all 32 bits
should be safe on recent kernels and userspace, so I think the max
filesystem sizes listed above are too small by half. IOW, 4k filesystem
blocks -> 16T max filesystem size.

> Taking the values from the table, for a 512-byte block size the maximum
> file system supported would be 1 TB only. Please correct me if I assumed
> wrongly.

you cannot have a 512 byte block size in ext3, 1k is the minimum.

>>>> I guess it doesn't matter much either way, 2^32*512 is 2T.
>
> In that 32 bits, isn't the MSB used as a sign bit, so that only a
> maximum of 31 bits can be used? Is this correct?

all 32 bits should be safe now.

>>>> On a 32 bit machine you will be limited to 16T, this is
>>>> actually a page cache limitation. But 2T should be fine.
>
> Please clarify: ext4 uses 48-bit addressing. Is it necessary to go for
> 64-bit machines to utilize ext4 and manage a file system of up to and
> including 2 TB?

The ext4 ondisk format does use 48 bits for physical addressing, but
userspace is still 32 bits only even for ext4.

-Eric

> Thanks in advance.
>
> Regards, Ramesh

From ross at biostat.ucsf.edu Wed May 6 15:55:52 2009
From: ross at biostat.ucsf.edu (Ross Boylan)
Date: Wed, 06 May 2009 08:55:52 -0700
Subject: Some inode questions
In-Reply-To: <20090505215945.GQ3209@webber.adilger.int>
References: <1241548803.11137.12.camel@iron.psg.net> <20090505215945.GQ3209@webber.adilger.int>
Message-ID: <1241625352.5366.2.camel@corn.betterworld.us>

On Tue, 2009-05-05 at 15:59 -0600, Andreas Dilger wrote:
> On May 05, 2009 11:40 -0700, Ross Boylan wrote:
> > When I first created /var I took all the defaults. I have since decided
> > that, since it will hold a cyrus mail spool (each message is a file), I
> > should use something with more inodes. I created a new (var2) partition
> > and formatted it with
> > # mkfs.ext3 -T news /dev/mapper/turtle-var2_crypt
> > # news has inode_ratio = 4096
> >
> > Then I mounted and rsync'd from my existing /var.
> > Afterwards, I get a report that seems to indicate I've used almost no
> > inodes. It also shows more inodes than blocks; is there any way one
> > could need more than one inode/block?
>
> Hard links, or empty files...
>
> > As I read this, 6291445 of 6291456 inodes are free, so 11 are in use.
> > The comparable calculation on the original file system shows about 8,500
> > inodes in use.
>
> Indeed, it seems your new filesystem is empty. That said, the superblock
> contents are not updated on disk while the filesystem is mounted. I have
> argued that since we are already computing the superblock totals and
> storing them into the superblock it wouldn't be harmful to write the
> superblock to disk occasionally in ext[34]_statfs() by calling at the end:
>
> 	ext[34]_commit_super(sb, es, 0);
>
> I don't think there is currently anything in ext[34] that is writing
> the superblock to disk at all, except mount and unmount.
>
> Using "df -i" should give you accurate numbers for a mounted filesystem.
>
> Cheers, Andreas

Thank you. df -i does look sane, and after umount, dumpe2fs does also.

There were actually slightly more inodes in use on the original (var)
than the copy (var2), even right after an rsync. I'm guessing that might
be files that are open but deleted.

Ross
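Ross's open-but-deleted guess is easy to demonstrate: an unlinked file
keeps its inode allocated until the last open descriptor goes away, so
live counts can stay higher than the visible file count suggests. A
minimal sketch (assumes a writable current directory and an otherwise
idle filesystem, so the counters aren't moved by other processes):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/statvfs.h>

    static unsigned long long free_inodes(const char *path)
    {
        struct statvfs sv;
        if (statvfs(path, &sv) != 0) {
            perror("statvfs");
            exit(1);
        }
        return (unsigned long long) sv.f_ffree;
    }

    int main(void)
    {
        int fd = open("scratch.tmp", O_CREAT | O_WRONLY, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        unsigned long long created = free_inodes(".");
        unlink("scratch.tmp");      /* deleted, but still open */
        unsigned long long unlinked = free_inodes(".");
        close(fd);                  /* the inode is only released here */
        unsigned long long closed = free_inodes(".");
        printf("free inodes: created=%llu unlinked=%llu closed=%llu\n",
               created, unlinked, closed);
        return 0;
    }

The first two numbers match and the third goes back up by one: the
unlinked-but-open file still owns its inode, just like the open-but-
deleted files Ross suspects on his busy /var.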
From s.cislaghi at gmail.com Thu May 7 13:13:21 2009
From: s.cislaghi at gmail.com (Stefano Cislaghi)
Date: Thu, 7 May 2009 15:13:21 +0200
Subject: Ext3 corruption using cluster
Message-ID:

Hello all,

I have a cluster with an Oracle database. The shared filesystem is
provided from a SAN and there's LVM and an ext3 fs. I've experienced
some problems: during a normal switch of my cluster, remounting the FS
on the second node gave me problems - the FS is corrupted.

During a normal switch, the operations done are:
- oracle shutdown abort
- oracle listener shutdown
- umount fs (using umount -l)

Also, if the system crashes (suppose a power loss), I'm really not sure
that the FS can be remounted. In fact the FS is mounted and becomes
read-only.

Is there a way to prevent this problem? Maximize the journal size?
Remove the remount-read-only due to a supposed FS problem?

Thanks
Stefano

From lists at nerdbynature.de Sat May 9 07:59:56 2009
From: lists at nerdbynature.de (Christian Kujau)
Date: Sat, 9 May 2009 00:59:56 -0700 (PDT)
Subject: Ext3 corruption using cluster
In-Reply-To:
References:
Message-ID:

On Thu, 7 May 2009, Stefano Cislaghi wrote:
> During a normal switch, the operations done are:
> - oracle shutdown abort
> - oracle listener shutdown
> - umount fs (using umount -l)

I'm not all too Oracle cluster savvy, but this lazy umount looks kinda
suspicious. From the manpage:

> Detach the filesystem from the filesystem hierarchy now, and cleanup
> all references to the filesystem as soon as it is not busy anymore

My wild guess: node1 has been shut down, did a lazy umount, so that
node2 could mount it but node1 was still writing to the fs (i.e. it was
still in use)?

Christian.
--
Bruce Schneier's first program was encrypt world.

From s.cislaghi at gmail.com Sat May 9 08:25:57 2009
From: s.cislaghi at gmail.com (Stefano Cislaghi)
Date: Sat, 9 May 2009 10:25:57 +0200
Subject: Ext3 corruption using cluster
In-Reply-To:
References:
Message-ID:

Maybe... looking around, some solutions could be:
- maximize the journal size
- journal all data and metadata (mount -o data=journal)

Ste

2009/5/9 Christian Kujau
> On Thu, 7 May 2009, Stefano Cislaghi wrote:
> > During a normal switch, the operations done are:
> > - oracle shutdown abort
> > - oracle listener shutdown
> > - umount fs (using umount -l)
>
> I'm not all too Oracle cluster savvy, but this lazy umount looks
> kinda suspicious. From the manpage:
>
> > Detach the filesystem from the filesystem hierarchy now, and cleanup
> > all references to the filesystem as soon as it is not busy anymore
>
> My wild guess: node1 has been shut down, did a lazy umount, so that
> node2 could mount it but node1 was still writing to the fs (i.e. it was
> still in use)?
>
> Christian.
> --
> Bruce Schneier's first program was encrypt world.

From adilger at sun.com Sat May 9 08:58:28 2009
From: adilger at sun.com (Andreas Dilger)
Date: Sat, 09 May 2009 02:58:28 -0600
Subject: Ext3 corruption using cluster
In-Reply-To:
References:
Message-ID: <20090509085828.GK3209@webber.adilger.int>

On May 09, 2009 10:25 +0200, Stefano Cislaghi wrote:
> Maybe... looking around, some solutions could be:
> - maximize the journal size
> - journal all data and metadata (mount -o data=journal)

No, these have nothing to do with your problem. If you are running in a
failover environment you need to STONITH the failing server BEFORE the
backup server tries to take over.
> Ste
>
> 2009/5/9 Christian Kujau
> > On Thu, 7 May 2009, Stefano Cislaghi wrote:
> > > During a normal switch, the operations done are:
> > > - oracle shutdown abort
> > > - oracle listener shutdown
> > > - umount fs (using umount -l)

Using "umount -l" is just a way to NOT unmount the filesystem, because
some process is keeping it busy. All this does is hide the mountpoint
until the busy process goes away. Definitely a bad sign that you need
this for doing any failover.

Try "lsof" to see which process is keeping the mountpoint busy. At
minimum these need to be stopped/killed and then do a proper unmount.

> > I'm not all too Oracle cluster savvy, but this lazy umount looks
> > kinda suspicious. From the manpage:
> >
> > > Detach the filesystem from the filesystem hierarchy now, and cleanup
> > > all references to the filesystem as soon as it is not busy anymore
> >
> > My wild guess: node1 has been shut down, did a lazy umount, so that
> > node2 could mount it but node1 was still writing to the fs (i.e. it
> > was still in use)?
> >
> > Christian.
> > --
> > Bruce Schneier's first program was encrypt world.
>
> _______________________________________________
> Ext3-users mailing list
> Ext3-users at redhat.com
> https://www.redhat.com/mailman/listinfo/ext3-users

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.
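Andreas' lsof advice can be approximated even on a stripped-down
failover node by walking /proc directly, which is roughly what fuser
does: for every process, resolve each open file descriptor and see
whether it lives under the mount point. A rough sketch (prefix matching
only; lsof additionally checks cwd, root, and mapped files):

    #include <stdio.h>
    #include <string.h>
    #include <ctype.h>
    #include <dirent.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        const char *mnt = (argc == 2) ? argv[1] : "/mnt";
        char fddir[64], lnk[320], target[4096];
        DIR *procd, *fdd;
        struct dirent *p, *f;
        ssize_t n;

        if (!(procd = opendir("/proc")))
            return 1;
        while ((p = readdir(procd)) != NULL) {
            if (!isdigit((unsigned char) p->d_name[0]))
                continue;           /* not a PID directory */
            snprintf(fddir, sizeof(fddir), "/proc/%s/fd", p->d_name);
            if (!(fdd = opendir(fddir)))
                continue;           /* gone, or permission denied */
            while ((f = readdir(fdd)) != NULL) {
                if (f->d_name[0] == '.')
                    continue;
                snprintf(lnk, sizeof(lnk), "%s/%s", fddir, f->d_name);
                n = readlink(lnk, target, sizeof(target) - 1);
                if (n <= 0)
                    continue;
                target[n] = '\0';
                if (strncmp(target, mnt, strlen(mnt)) == 0)
                    printf("pid %s holds %s\n", p->d_name, target);
            }
            closedir(fdd);
        }
        closedir(procd);
        return 0;
    }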
From jarmstrong at postpath.com Tue May 19 16:01:47 2009
From: jarmstrong at postpath.com (Joe Armstrong)
Date: Tue, 19 May 2009 09:01:47 -0700 (PDT)
Subject: ext3 efficiency, larger vs smaller file system, lots of inodes...
Message-ID: <68ACA1A82B5FDD11955600188B42D42B30B4C5@postpath-lx1.mv1.postpath.com>

(... to Nabble Ext3:Users - reposted by me after I joined the ext3-users
mailing list - sorry for the dup...)

A bit of a rambling subject there, but I am trying to figure out whether
it is more efficient at runtime to have a few very large file systems
(8 TB) or a larger number of smaller file systems. The file systems will
hold many small files.

My preference is to have a larger number of smaller file systems for
faster recovery and less impact if a problem does occur, but I was
wondering if anybody had information from a runtime performance
perspective - is there a difference between a few large and many small
file systems? Is memory consumption higher for the inode tables if there
are more small ones vs one really large one?

Also, does anybody have a reasonable formula for calculating the memory
requirements of a given file system?

Thanks. Joe

From sandeen at redhat.com Tue May 19 16:21:25 2009
From: sandeen at redhat.com (Eric Sandeen)
Date: Tue, 19 May 2009 11:21:25 -0500
Subject: ext3 efficiency, larger vs smaller file system, lots of inodes...
In-Reply-To: <68ACA1A82B5FDD11955600188B42D42B30B4C5@postpath-lx1.mv1.postpath.com>
References: <68ACA1A82B5FDD11955600188B42D42B30B4C5@postpath-lx1.mv1.postpath.com>
Message-ID: <4A12DC85.4060601@redhat.com>

Joe Armstrong wrote:
> (... to Nabble Ext3:Users - reposted by me after I joined the ext3-users
> mailing list - sorry for the dup...)
>
> A bit of a rambling subject there, but I am trying to figure out whether
> it is more efficient at runtime to have a few very large file systems
> (8 TB) or a larger number of smaller file systems. The file systems will
> hold many small files.
>
> My preference is to have a larger number of smaller file systems for
> faster recovery and less impact if a problem does occur, but I was
> wondering if anybody had information from a runtime performance
> perspective - is there a difference between a few large and many small
> file systems? Is memory consumption higher for the inode tables if there
> are more small ones vs one really large one?

It's the vfs that caches dentries & inodes; whether they come from
multiple filesystems or one should not change matters significantly.

The other downside to multiple smaller filesystems is space management;
when you wind up with half of them full and half of them empty, it may
be hard to rearrange.

But the extra granularity for better availability and fsck/recovery time
may be well worth it. It probably depends on what your application is
doing and how it can manage the space. You might want to test filling
an 8T filesystem and see for yourself how long fsck will take... it'll
be a while. Perhaps a very long while. :)

> Also, does anybody have a reasonable formula for calculating the memory
> requirements of a given file system?
Probably the largest memory footprint will be the cached dentries &
inodes, though this is a "soft" requirement since it's mostly just
cached.

Each journal probably has a bit of memory requirement overhead, but I
doubt it'll be a significant factor in your decision unless every byte
is at a premium...

-Eric

> Thanks. Joe

From jarmstrong at postpath.com Tue May 19 16:28:50 2009
From: jarmstrong at postpath.com (Joe Armstrong)
Date: Tue, 19 May 2009 09:28:50 -0700 (PDT)
Subject: ext3 efficiency, larger vs smaller file system, lots of inodes...
Message-ID: <68ACA1A82B5FDD11955600188B42D42B30B5D2@postpath-lx1.mv1.postpath.com>

-----Original Message-----
From: Eric Sandeen [mailto:sandeen at redhat.com]
Sent: Tuesday, May 19, 2009 9:21 AM
To: Joe Armstrong
Cc: ext3-users at redhat.com
Subject: Re: ext3 efficiency, larger vs smaller file system, lots of inodes...

Joe Armstrong wrote:
> (... to Nabble Ext3:Users - reposted by me after I joined the ext3-users
> mailing list - sorry for the dup...)
>
> A bit of a rambling subject there, but I am trying to figure out whether
> it is more efficient at runtime to have a few very large file systems
> (8 TB) or a larger number of smaller file systems. The file systems will
> hold many small files.
>
> My preference is to have a larger number of smaller file systems for
> faster recovery and less impact if a problem does occur, but I was
> wondering if anybody had information from a runtime performance
> perspective - is there a difference between a few large and many small
> file systems? Is memory consumption higher for the inode tables if there
> are more small ones vs one really large one?

It's the vfs that caches dentries & inodes; whether they come from
multiple filesystems or one should not change matters significantly.

The other downside to multiple smaller filesystems is space management;
when you wind up with half of them full and half of them empty, it may
be hard to rearrange.

But the extra granularity for better availability and fsck/recovery time
may be well worth it. It probably depends on what your application is
doing and how it can manage the space. You might want to test filling
an 8T filesystem and see for yourself how long fsck will take... it'll
be a while. Perhaps a very long while. :)

> Also, does anybody have a reasonable formula for calculating the memory
> requirements of a given file system?

Probably the largest memory footprint will be the cached dentries &
inodes, though this is a "soft" requirement since it's mostly just
cached.

Each journal probably has a bit of memory requirement overhead, but I
doubt it'll be a significant factor in your decision unless every byte
is at a premium...

-Eric

> Thanks. Joe

OK, it sounds like it is mostly a space management issue rather than a
performance issue. FWIW, we were planning on managing the space via LVM:
allocating some medium-size volumes to start with, leaving lots of spare
extents unallocated, and then just growing the volume/fs as needed.

Thanks. Joe

From rwheeler at redhat.com Tue May 19 16:54:26 2009
From: rwheeler at redhat.com (Ric Wheeler)
Date: Tue, 19 May 2009 12:54:26 -0400
Subject: ext3 efficiency, larger vs smaller file system, lots of inodes...
In-Reply-To: <68ACA1A82B5FDD11955600188B42D42B30B5D2@postpath-lx1.mv1.postpath.com>
References: <68ACA1A82B5FDD11955600188B42D42B30B5D2@postpath-lx1.mv1.postpath.com>
Message-ID: <4A12E442.2090003@redhat.com>

On 05/19/2009 12:28 PM, Joe Armstrong wrote:
>
> -----Original Message-----
> From: Eric Sandeen [mailto:sandeen at redhat.com]
> Sent: Tuesday, May 19, 2009 9:21 AM
> To: Joe Armstrong
> Cc: ext3-users at redhat.com
> Subject: Re: ext3 efficiency, larger vs smaller file system, lots of inodes...
>
> Joe Armstrong wrote:
>> (... to Nabble Ext3:Users - reposted by me after I joined the ext3-users
>> mailing list - sorry for the dup...)
>>
>> A bit of a rambling subject there, but I am trying to figure out whether
>> it is more efficient at runtime to have a few very large file systems
>> (8 TB) or a larger number of smaller file systems. The file systems will
>> hold many small files.
>>
>> My preference is to have a larger number of smaller file systems for
>> faster recovery and less impact if a problem does occur, but I was
>> wondering if anybody had information from a runtime performance
>> perspective - is there a difference between a few large and many small
>> file systems? Is memory consumption higher for the inode tables if there
>> are more small ones vs one really large one?
>
> It's the vfs that caches dentries & inodes; whether they come from
> multiple filesystems or one should not change matters significantly.
>
> The other downside to multiple smaller filesystems is space management;
> when you wind up with half of them full and half of them empty, it may
> be hard to rearrange.
>
> But the extra granularity for better availability and fsck/recovery time
> may be well worth it. It probably depends on what your application is
> doing and how it can manage the space. You might want to test filling
> an 8T filesystem and see for yourself how long fsck will take... it'll
> be a while. Perhaps a very long while. :)
>
>> Also, does anybody have a reasonable formula for calculating the memory
>> requirements of a given file system?
>
> Probably the largest memory footprint will be the cached dentries &
> inodes, though this is a "soft" requirement since it's mostly just
> cached.
>
> Each journal probably has a bit of memory requirement overhead, but I
> doubt it'll be a significant factor in your decision unless every byte
> is at a premium...
>
> -Eric
>
> OK, it sounds like it is mostly a space management issue rather than a
> performance issue. FWIW, we were planning on managing the space via LVM:
> allocating some medium-size volumes to start with, leaving lots of spare
> extents unallocated, and then just growing the volume/fs as needed.

How you do this also depends on the type of storage you use. If you have
multiple file systems on one physical disk (say 2 1TB partitions on a
2TB S-ATA disk), you need to be careful not to bash on both file systems
at once, since you will thrash the disk heads.
In general, it is less of an issue with arrays, but it can still have a
performance impact.

Ric

From jarmstrong at postpath.com Tue May 19 17:08:37 2009
From: jarmstrong at postpath.com (Joe Armstrong)
Date: Tue, 19 May 2009 10:08:37 -0700 (PDT)
Subject: ext3 efficiency, larger vs smaller file system, lots of inodes...
Message-ID: <68ACA1A82B5FDD11955600188B42D42B30B706@postpath-lx1.mv1.postpath.com>

> -----Original Message-----
> From: Ric Wheeler [mailto:rwheeler at redhat.com]
> Sent: Tuesday, May 19, 2009 9:54 AM
> To: Joe Armstrong
> Cc: ext3-users at redhat.com
> Subject: Re: ext3 efficiency, larger vs smaller file system, lots of
> inodes...
>
> On 05/19/2009 12:28 PM, Joe Armstrong wrote:
> >
> > -----Original Message-----
> > From: Eric Sandeen [mailto:sandeen at redhat.com]
> > Sent: Tuesday, May 19, 2009 9:21 AM
> > To: Joe Armstrong
> > Cc: ext3-users at redhat.com
> > Subject: Re: ext3 efficiency, larger vs smaller file system, lots of
> > inodes...
> >
> > Joe Armstrong wrote:
> >> (... to Nabble Ext3:Users - reposted by me after I joined the
> >> ext3-users mailing list - sorry for the dup...)
> >>
> >> A bit of a rambling subject there, but I am trying to figure out
> >> whether it is more efficient at runtime to have a few very large
> >> file systems (8 TB) or a larger number of smaller file systems. The
> >> file systems will hold many small files.
> >>
> >> My preference is to have a larger number of smaller file systems for
> >> faster recovery and less impact if a problem does occur, but I was
> >> wondering if anybody had information from a runtime performance
> >> perspective - is there a difference between a few large and many
> >> small file systems? Is memory consumption higher for the inode
> >> tables if there are more small ones vs one really large one?
> >
> > It's the vfs that caches dentries & inodes; whether they come from
> > multiple filesystems or one should not change matters significantly.
> >
> > The other downside to multiple smaller filesystems is space
> > management; when you wind up with half of them full and half of them
> > empty, it may be hard to rearrange.
> >
> > But the extra granularity for better availability and fsck/recovery
> > time may be well worth it. It probably depends on what your
> > application is doing and how it can manage the space. You might want
> > to test filling an 8T filesystem and see for yourself how long fsck
> > will take... it'll be a while. Perhaps a very long while. :)
> >
> >> Also, does anybody have a reasonable formula for calculating the
> >> memory requirements of a given file system?
> >
> > Probably the largest memory footprint will be the cached dentries &
> > inodes, though this is a "soft" requirement since it's mostly just
> > cached.
> >
> > Each journal probably has a bit of memory requirement overhead, but I
> > doubt it'll be a significant factor in your decision unless every
> > byte is at a premium...
> >
> > -Eric
> >
> > OK, it sounds like it is mostly a space management issue rather than
> > a performance issue. FWIW, we were planning on managing the space via
> > LVM: allocating some medium-size volumes to start with, leaving lots
> > of spare extents unallocated, and then just growing the volume/fs as
> > needed.
>
> How you do this also depends on the type of storage you use. If you
> have multiple file systems on one physical disk (say 2 1TB partitions
> on a 2TB S-ATA disk), you need to be careful not to bash on both file
> systems at once, since you will thrash the disk heads.
>
> In general, it is less of an issue with arrays, but it can still have a
> performance impact.
>
> Ric

Just for completeness, we will be using striped LUNs (RAID-6
underneath), so I hope that the striping will distribute the IOs while
the RAID-6 device provides the HA/recovery capabilities.

Joe
From tytso at mit.edu Tue May 19 17:47:44 2009
From: tytso at mit.edu (Theodore Tso)
Date: Tue, 19 May 2009 13:47:44 -0400
Subject: ext3 efficiency, larger vs smaller file system, lots of inodes...
In-Reply-To: <68ACA1A82B5FDD11955600188B42D42B30B4C5@postpath-lx1.mv1.postpath.com>
References: <68ACA1A82B5FDD11955600188B42D42B30B4C5@postpath-lx1.mv1.postpath.com>
Message-ID: <20090519174744.GA9053@mit.edu>

On Tue, May 19, 2009 at 09:01:47AM -0700, Joe Armstrong wrote:
>
> A bit of a rambling subject there, but I am trying to figure out
> whether it is more efficient at runtime to have a few very large file
> systems (8 TB) or a larger number of smaller file systems. The file
> systems will hold many small files.

No, it's not really more efficient to have large filesystems ---
efficiency at least in terms of performance, that is. In fact,
depending on your workload, it can sometimes be more efficient to have
smaller filesystems, since the journal is a single choke-point if you
have an fsync-heavy workload. Another advantage of smaller filesystems
is that it's faster to fsck a particular filesystem.

The disadvantages of breaking up a large filesystem are the obvious
ones; you have less flexibility about space allocation, and you can't
hard link across different filesystems, which can be a big deal for
some folks.

> Is memory consumption higher for the inode tables if
> there are more small ones vs one really large one?

No, because we don't keep an entire filesystem inode table in memory;
pieces of it are brought in as needed, and when they aren't needed they
are released from memory. About the only thing which is permanently
pinned into memory are the block group descriptors, which take up 32
bytes per block group descriptor, where a block group descriptor
represents 32 megabytes of storage on disk. So 1 GB of filesystem will
require 1k of space, and a 1TB filesystem will require 1 megabyte of
memory in terms of block group descriptors.

There are some other overheads, but most of them are fixed overheads,
and normally not a problem. The struct superblock data structure is a
kilobyte or so, for example. The buffer heads for the block group
descriptors are 56 bytes per 4k of block group descriptors, so 1
megabyte of block group descriptors also requires 14k of buffer heads.

Unless you're creating some kind of embedded NAS system, I doubt memory
consumption will be a major problem for you.

- Ted
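Ted's estimate is easy to parameterize. A sketch that follows the same
reasoning - one permanently pinned 32-byte descriptor per block group -
but takes the group size from the mke2fs defaults visible in the
dumpe2fs output earlier in this digest (blocks per group = 8 x block
size, i.e. 32768 blocks of 4 KiB, so 128 MiB per group); the
per-terabyte figure therefore shifts with the block size:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative values: a 1 TiB filesystem with 4 KiB blocks. */
        unsigned long long fs_bytes = 1ULL << 40;
        unsigned long long block_size = 4096;

        unsigned long long blocks = fs_bytes / block_size;
        unsigned long long blocks_per_group = 8 * block_size;
        unsigned long long groups =
            (blocks + blocks_per_group - 1) / blocks_per_group;

        printf("%llu blocks, %llu groups, %llu bytes of descriptors\n",
               blocks, groups, groups * 32);
        return 0;
    }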
From giuseppe at eppesuigoccas.homedns.org Wed May 20 12:02:43 2009
From: giuseppe at eppesuigoccas.homedns.org (Giuseppe Sacco)
Date: Wed, 20 May 2009 14:02:43 +0200
Subject: cannot mount ext3 boot partition as r/w since 2.6.30
Message-ID: <1242820963.13252.16.camel@scarafaggio>

Hi all,
I am testing a new kernel on a MIPS machine (64-bit kernel, 32-bit
userland) and I found a problem when mounting the root file system. It
is an ext3 file system that is correctly mounted read-only. While
booting, the system remounts the file system read/write and keeps
starting all daemons.

Moving from the 2.6.26 to the 2.6.30 kernel, I get this error while
remounting the file system read/write:
EXT3-fs: cannot change data mode on remount

My /etc/fstab contains this line:
/dev/sda1 / ext3 rw,errors=continue,data=ordered,relatime 0 1

According to the mount manual page, "data=ordered" should be the default
value and should be harmless. Removing the "data=ordered" option fixes
the boot process.

So, this is my question: is this standard behaviour? Is "data=ordered"
no longer a default value? What should people do when upgrading their
systems to newer kernels?

Thanks,
Giuseppe

From tytso at mit.edu Wed May 20 14:23:25 2009
From: tytso at mit.edu (Theodore Tso)
Date: Wed, 20 May 2009 10:23:25 -0400
Subject: cannot mount ext3 boot partition as r/w since 2.6.30
In-Reply-To: <1242820963.13252.16.camel@scarafaggio>
References: <1242820963.13252.16.camel@scarafaggio>
Message-ID: <20090520142325.GE24836@mit.edu>

On Wed, May 20, 2009 at 02:02:43PM +0200, Giuseppe Sacco wrote:
> I am testing a new kernel on a MIPS machine (64-bit kernel, 32-bit
> userland) and I found a problem when mounting the root file system. It
> is an ext3 file system that is correctly mounted read-only. While
> booting, the system remounts the file system read/write and keeps
> starting all daemons.
>
> Moving from the 2.6.26 to the 2.6.30 kernel, I get this error while
> remounting the file system read/write:
> EXT3-fs: cannot change data mode on remount
>
> My /etc/fstab contains this line:
> /dev/sda1 / ext3 rw,errors=continue,data=ordered,relatime 0 1
>
> According to the mount manual page, "data=ordered" should be the default
> value and should be harmless. Removing the "data=ordered" option fixes
> the boot process.
>
> So, this is my question: is this standard behaviour? Is "data=ordered"
> no longer a default value? What should people do when upgrading their
> systems to newer kernels?

The default mount option is now data=writeback (although this is
configurable via a compile-time CONFIG option). In general you don't
have to specify the data= mount option when you are remounting the
filesystem read/write. If you really want to use data=ordered, you
can either toggle the CONFIG option at compile time, or use the
rootflags= boot command-line option to set the mount options to be
used when originally mounting the root filesystem.

Regards,

- Ted
From niko at petole.demisel.net Thu May 21 07:54:19 2009
From: niko at petole.demisel.net (Nicolas KOWALSKI)
Date: Thu, 21 May 2009 09:54:19 +0200
Subject: cannot mount ext3 boot partition as r/w since 2.6.30
In-Reply-To: <20090520142325.GE24836@mit.edu>
References: <1242820963.13252.16.camel@scarafaggio> <20090520142325.GE24836@mit.edu>
Message-ID: <87iqju51sk.fsf@petole.demisel.net>

Theodore Tso writes:

> The default mount option is now data=writeback (although this is
> configurable via a compile-time CONFIG option). In general you don't
> have to specify the data= mount option when you are remounting the
> filesystem read/write. If you really want to use data=ordered, you
> can either toggle the CONFIG option at compile time, or use the
> rootflags= boot command-line option to set the mount options to be
> used when originally mounting the root filesystem.

Is the tune2fs -o journal_data_ordered option still available for
specifying the journalling mode?

BTW, is there a link/page somewhere explaining why the ordered mode is
now being deprecated?

Thanks,
--
Nicolas

From lists at nerdbynature.de Thu May 21 10:21:41 2009
From: lists at nerdbynature.de (Christian Kujau)
Date: Thu, 21 May 2009 03:21:41 -0700 (PDT)
Subject: cannot mount ext3 boot partition as r/w since 2.6.30
In-Reply-To: <87iqju51sk.fsf@petole.demisel.net>
References: <1242820963.13252.16.camel@scarafaggio> <20090520142325.GE24836@mit.edu> <87iqju51sk.fsf@petole.demisel.net>
Message-ID:

On Thu, 21 May 2009, Nicolas KOWALSKI wrote:
> Is the tune2fs -o journal_data_ordered option still available for
> specifying the journalling mode?

This option is still present in e2fsprogs and can be used for ext3 and
ext4 alike. As I understand it, the data ordering mode only defaults to
"data=writeback" but can be changed via boot options or tune2fs.

> BTW, is there a link/page somewhere explaining why the ordered mode is
> now being deprecated?

There were quite a few discussions on the ext4 list and on lkml too, and
eventually it was decided to default to data=writeback:

http://git.kernel.org/?p=linux/kernel/git/tytso/ext4.git;a=commit;h=bbae8bcc49bc4d002221dab52c79a50a82e7cd1f

C.
--
When Bruce Schneier reads from his entropy pool the universe contracts

From niko at petole.demisel.net Thu May 21 17:22:27 2009
From: niko at petole.demisel.net (Nicolas KOWALSKI)
Date: Thu, 21 May 2009 19:22:27 +0200
Subject: cannot mount ext3 boot partition as r/w since 2.6.30
In-Reply-To:
References: <1242820963.13252.16.camel@scarafaggio> <20090520142325.GE24836@mit.edu> <87iqju51sk.fsf@petole.demisel.net>
Message-ID: <87bppm4bho.fsf@petole.demisel.net>

Christian Kujau writes:

> On Thu, 21 May 2009, Nicolas KOWALSKI wrote:
>> Is the tune2fs -o journal_data_ordered option still available for
>> specifying the journalling mode?
>
> This option is still present in e2fsprogs and can be used for ext3 and
> ext4 alike. As I understand it, the data ordering mode only defaults to
> "data=writeback" but can be changed via boot options or tune2fs.

Ok.

>> BTW, is there a link/page somewhere explaining why the ordered mode is
>> now being deprecated?
>
> There were quite a few discussions on the ext4 list and on lkml too, and
> eventually it was decided to default to data=writeback:
>
> http://git.kernel.org/?p=linux/kernel/git/tytso/ext4.git;a=commit;h=bbae8bcc49bc4d002221dab52c79a50a82e7cd1f

Thanks for the link and your reply,
--
Nicolas

From lakshmipathi.g at gmail.com Sat May 30 07:11:38 2009
From: lakshmipathi.g at gmail.com (lakshmi pathi)
Date: Sat, 30 May 2009 12:41:38 +0530
Subject: a question on mount count and maximum mount count
Message-ID:

Hi,
If I need to know how many times the system has been rebooted, shall I
use the mount count value (tune2fs -l)?

From the text below, it says a warning message will be displayed when it
equals the maximum mount count. What happens after that? Is the mount
count value reset back to 0? Is there any command available to check how
many times a system has been rebooted?

"Mount Count and Maximum Mount Count
Together these allow the system to determine if the file system should
be fully checked. The mount count is incremented each time the file
system is mounted and when it equals the maximum mount count the warning
message ``maximal mount count reached, running e2fsck is recommended''
is displayed"

--
Cheers,
Lakshmipathi.G

From lists at nerdbynature.de Sat May 30 17:15:01 2009
From: lists at nerdbynature.de (Christian Kujau)
Date: Sat, 30 May 2009 10:15:01 -0700 (PDT)
Subject: a question on mount count and maximum mount count
In-Reply-To:
References:
Message-ID:

On Sat, 30 May 2009, lakshmi pathi wrote:
> If I need to know how many times the system has been rebooted, shall I
> use the mount count value (tune2fs -l)?

Well, you *could* use this command, at least for the root filesystem, as
this usually only gets mounted during boot, but:

> What happens after that? Is the mount count value reset back to 0?

Yes, the counter is reset, so a better way to find out how many times a
system has been rebooted would be the last(1) command. Use "last reboot"
to find out about the system's reboots.

Christian.
--
BOFH excuse #245: The Borg tried to assimilate your system. Resistance
is futile.
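For completeness, what "last reboot" reads under the hood: wtmp is a
flat array of utmp records, and reboots are the entries whose ut_type is
BOOT_TIME, so the count is easy to reproduce - with the caveat, raised
in the follow-ups below, that the file can be rotated or modified. A
minimal sketch using the glibc utmp accessors:

    #include <stdio.h>
    #include <utmp.h>

    /* Count BOOT_TIME records in wtmp -- the same records that
     * "last reboot" prints.  Only sees the current, unrotated file. */
    int main(void)
    {
        struct utmp *ut;
        int boots = 0;

        utmpname("/var/log/wtmp");
        setutent();
        while ((ut = getutent()) != NULL)
            if (ut->ut_type == BOOT_TIME)
                boots++;
        endutent();
        printf("reboots recorded in wtmp: %d\n", boots);
        return 0;
    }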
From bruno at wolff.to Sat May 30 17:04:09 2009
From: bruno at wolff.to (Bruno Wolff III)
Date: Sat, 30 May 2009 12:04:09 -0500
Subject: a question on mount count and maximum mount count
In-Reply-To:
References:
Message-ID: <20090530170409.GF1898@wolff.to>

On Sat, May 30, 2009 at 12:41:38 +0530, lakshmi pathi wrote:
> Hi,
> If I need to know how many times the system has been rebooted, shall

This is a case where knowing what you are really trying to do would be
useful. Is this supposed to be some sort of canary to detect intrusions,
used to display a vanity number of reboots somewhere, or what?

From lakshmipathi.g at gmail.com Sat May 30 18:26:58 2009
From: lakshmipathi.g at gmail.com (lakshmi pathi)
Date: Sat, 30 May 2009 23:56:58 +0530
Subject: a question on mount count and maximum mount count
In-Reply-To: <20090530170409.GF1898@wolff.to>
References: <20090530170409.GF1898@wolff.to>
Message-ID:

> This is a case where knowing what you are really trying to do would be
> useful.

Sorry :) I came across this interesting question in a Linux forum. I
thought digging through /var/log/messages and grepping for "Freeing
initrd memory" or some other unique message might give the required
answer. I posted the question here because I wanted a professional
answer :)

Thanks guys, today I learned about the last command. But how reliable is
"last reboot"? I quickly checked the man page; it said /var/log/wtmp is
used by the last command - what will happen if the log file is
modified/erased or archived?

--
Cheers,
Lakshmipathi.G

On Sat, May 30, 2009 at 10:34 PM, Bruno Wolff III wrote:
> On Sat, May 30, 2009 at 12:41:38 +0530,
> lakshmi pathi wrote:
>> Hi,
>> If I need to know how many times the system has been rebooted, shall
>
> This is a case where knowing what you are really trying to do would be
> useful. Is this supposed to be some sort of canary to detect
> intrusions, used to display a vanity number of reboots somewhere, or
> what?

From jelledejong at powercraft.nl Sun May 31 08:31:09 2009
From: jelledejong at powercraft.nl (Jelle de Jong)
Date: Sun, 31 May 2009 10:31:09 +0200
Subject: a question on mount count and maximum mount count
In-Reply-To:
References: <20090530170409.GF1898@wolff.to>
Message-ID: <4A22404D.9080600@powercraft.nl>

lakshmi pathi wrote:
>> This is a case where knowing what you are really trying to do would be
>> useful.
>
> Sorry :) I came across this interesting question in a Linux forum. I
> thought digging through /var/log/messages and grepping for "Freeing
> initrd memory" or some other unique message might give the required
> answer. I posted the question here because I wanted a professional
> answer :)
>
> Thanks guys, today I learned about the last command. But how reliable is
> "last reboot"? I quickly checked the man page; it said /var/log/wtmp is
> used by the last command - what will happen if the log file is
> modified/erased or archived?

You can also use the S.M.A.R.T. statistics of the hard drive. They will
not show you the reboots, but they will show the spin-ups of the hard
drive (power off/power on); you can use this together with the last
reboot command and get a good indication.

Best regards,

Jelle de Jong
From lists at nerdbynature.de Sun May 31 11:11:30 2009
From: lists at nerdbynature.de (Christian Kujau)
Date: Sun, 31 May 2009 04:11:30 -0700 (PDT)
Subject: a question on mount count and maximum mount count [OT]
In-Reply-To:
References: <20090530170409.GF1898@wolff.to>
Message-ID:

On Sat, 30 May 2009, lakshmi pathi wrote:
> Thanks guys, today I learned about the last command. But how reliable is
> "last reboot"? I quickly checked the man page; it said /var/log/wtmp is
> used by the last command - what will happen if the log file is
> modified/erased or archived?

You'd need some tamper-proof setup to record all reboots (and better
yet, all other system activity). This usually means some kind of
hardware, like the black box used in airplanes. A less paranoid version
of this could be a remote syslog server, recording reboots even if a
local logfile has been modified/erased.

C.
--
BOFH excuse #413: Cow-tippers tipped a cow onto the server.

From d_baron at 012.net.il Sun May 31 18:07:05 2009
From: d_baron at 012.net.il (David Baron)
Date: Sun, 31 May 2009 21:07:05 +0300
Subject: -o extents (ext4 capabilities for newly-created files)
Message-ID: <200905312107.06449.d_baron@012.net.il>

Is it desirable to use this to make current ext3 filesystems hybrid ext4
systems? Is there any advantage to extents for numerous, non-huge, more
normal-sized files? Stability?

From darkonc at gmail.com Sun May 31 23:36:44 2009
From: darkonc at gmail.com (Stephen Samuel (gmail))
Date: Sun, 31 May 2009 16:36:44 -0700
Subject: a question on mount count and maximum mount count
In-Reply-To:
References:
Message-ID: <6cd50f9f0905311636x75bfd332o93a07e004b02037b@mail.gmail.com>

The mount count can give you a good idea of how many times the system
has been rebooted. It's probably a better way of figuring that out than
looking at the output of 'last reboot'. The thing is, in either case,
the count can get reset, so you need a way of determining when that has
happened.

For the mount count of /, it gets reset whenever you do an fsck (usually
at boot time). When that happens, you know that the system has been
rebooted 'at least once' since the last time you looked (the current
mount count would be the probable count of the number of times the
system has been rebooted since then).

Note that if someone does, for example, a CDROM boot and mounts the
normal root filesystem, there would be no real way to distinguish that
from a boot. Similarly, if someone does multiple such mounts and then
does an fsck, you would see that as only one 'boot'.

wtmp (used for 'last') is good as far as it goes, but the file is cycled
from time to time, so you need to keep track of the most recent boot
time the last time you checked, and only count more recent boots.

If someone gains root access, they can mess with the file, but if an
attacker gets root access they can change pretty much anything that
you're dependent on anyway (i.e. you're hooped at that point if you've
got a malicious root process).

On Sat, May 30, 2009 at 12:11 AM, lakshmi pathi wrote:
> Hi,
> If I need to know how many times the system has been rebooted, shall I
> use the mount count value (tune2fs -l)?
>
> From the text below, it says a warning message will be displayed when
> it equals the maximum mount count. What happens after that? Is the
> mount count value reset back to 0?

--
Stephen Samuel            http://www.bcgreen.com
Software, like love, grows when you give it away
778-861-7641
From Sean.D.McCauliff at nasa.gov Thu May 21 17:30:46 2009
From: Sean.D.McCauliff at nasa.gov (Sean McCauliff)
Date: Thu, 21 May 2009 17:30:46 -0000
Subject: ext4-users mailing list?
Message-ID: <4A158FA6.5050702@nasa.gov>

I'm going to be making the transition to ext4 in the next few months.
Is there an ext4-users mailing list?

Thanks!
Sean