From mammar at gmail.com Tue Oct 5 12:12:45 2010
From: mammar at gmail.com (Muhammad Ammar)
Date: Tue, 5 Oct 2010 17:12:45 +0500
Subject: EXT3 Reserve Space
Message-ID: 

Hi All,

Whenever an EXT3 partition is created, some space is reserved for the
super-user. I used mkfs.ext3 with the -m option set to 0, but it has no
effect; the space is still reserved. How can I set the reserved space to 0,
or calculate the reserved space in advance?

Any suggestion/idea?

Regards,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lakshmipathi.g at gmail.com Tue Oct 5 13:00:14 2010
From: lakshmipathi.g at gmail.com (Lakshmipathi.G)
Date: Tue, 5 Oct 2010 18:30:14 +0530
Subject: EXT3 Reserve Space
In-Reply-To: 
References: 
Message-ID: 

Hi,
Are you sure the -m option is not working with mkfs.ext3? Can you verify it
using tune2fs? If 5% is reserved already, you can use "tune2fs -m 0 device"
to modify it, and check the reserved blocks count with the "tune2fs -l
device" command.

HTH

--
----
Cheers,
Lakshmipathi.G
FOSS Programmer.
www.giis.co.in

On Tue, Oct 5, 2010 at 5:42 PM, Muhammad Ammar wrote:

> Hi All,
>
> Whenever an EXT3 partition is created, some space is reserved for the
> super-user. I used mkfs.ext3 with the -m option set to 0, but it has no
> effect; the space is still reserved. How can I set the reserved space to 0,
> or calculate the reserved space in advance?
>
> Any suggestion/idea?
>
> Regards,
>
> _______________________________________________
> Ext3-users mailing list
> Ext3-users at redhat.com
> https://www.redhat.com/mailman/listinfo/ext3-users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sandeen at redhat.com Tue Oct 5 13:43:11 2010
From: sandeen at redhat.com (Eric Sandeen)
Date: Tue, 05 Oct 2010 08:43:11 -0500
Subject: EXT3 Reserve Space
In-Reply-To: 
References: 
Message-ID: <4CAB2B6F.3090302@redhat.com>

Muhammad Ammar wrote:
> Hi All,
>
> Whenever an EXT3 partition is created, some space is reserved for the
> super-user. I used mkfs.ext3 with the -m option set to 0, but it has no
> effect; the space is still reserved. How can I set the reserved space to 0,
> or calculate the reserved space in advance?
>
> Any suggestion/idea?
>

Please let us know the version of e2fsprogs you are using, and then
show the commands you used which exhibited this problem.

Thanks,
-Eric

From mammar at gmail.com Tue Oct 5 15:08:00 2010
From: mammar at gmail.com (Muhammad Ammar)
Date: Tue, 5 Oct 2010 20:08:00 +0500
Subject: EXT3 Reserve Space
In-Reply-To: 
References: 
Message-ID: 

Hi,

Yes, I checked it with both (mkfs.ext3 and tune2fs) on multiple systems,
but there was no effect. I also tried -m with multiple values (0, 1), again
with no effect. I am using Fedora 13.
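As for the "calculate the reserved space in advance" part of the original question: the reserved-blocks count is simply a percentage (the -m value) of the filesystem's total block count, so it can be computed ahead of time. A minimal sketch, assuming plain truncating rounding (verify against the "Reserved block count" line of `tune2fs -l` on a real filesystem):

```python
def reserved_blocks(total_blocks: int, percent: float = 5.0) -> int:
    # mke2fs reserves `percent` of all blocks for the super-user;
    # -m 0 therefore yields 0 reserved blocks.
    return int(total_blocks * percent / 100)

# Example: a 100 GiB filesystem with 4 KiB blocks.
total = (100 * 1024**3) // 4096          # 26214400 blocks
print(reserved_blocks(total))            # default 5% -> 1310720 blocks
print(reserved_blocks(total, 0))         # mkfs.ext3 -m 0 -> 0
```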
Regards,

On Tue, Oct 5, 2010 at 6:00 PM, Lakshmipathi.G wrote:

> Hi,
> Are you sure the -m option is not working with mkfs.ext3? Can you verify it
> using tune2fs? If 5% is reserved already, you can use "tune2fs -m 0 device"
> to modify it, and check the reserved blocks count with the "tune2fs -l
> device" command.
>
> HTH
>
> --
> ----
> Cheers,
> Lakshmipathi.G
> FOSS Programmer.
> www.giis.co.in
>
> On Tue, Oct 5, 2010 at 5:42 PM, Muhammad Ammar wrote:
>
>> Hi All,
>>
>> Whenever an EXT3 partition is created, some space is reserved for the
>> super-user. I used mkfs.ext3 with the -m option set to 0, but it has no
>> effect; the space is still reserved. How can I set the reserved space to 0,
>> or calculate the reserved space in advance?
>>
>> Any suggestion/idea?
>>
>> Regards,
>>
>> _______________________________________________
>> Ext3-users mailing list
>> Ext3-users at redhat.com
>> https://www.redhat.com/mailman/listinfo/ext3-users
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mammar at gmail.com Tue Oct 5 15:30:42 2010
From: mammar at gmail.com (Muhammad Ammar)
Date: Tue, 5 Oct 2010 20:30:42 +0500
Subject: EXT3 Reserve Space
In-Reply-To: <4CAB2B6F.3090302@redhat.com>
References: <4CAB2B6F.3090302@redhat.com>
Message-ID: 

Hi,

I think the problem is solved. I was checking the sizes in gparted, and it
shows a wrong value in the 'USED' field. When I check the sizes using df,
they are correct.

The version of e2fsprogs is: e2fsprogs-1.41.10-6.fc13.i686

I used the following command to create the ext3 file system:

mkfs.ext3 -m 0 /dev/sda2

Thanks to all of you for your time, and sorry for the confusion.

Regards,

On Tue, Oct 5, 2010 at 6:43 PM, Eric Sandeen wrote:

> Muhammad Ammar wrote:
> > Hi All,
> >
> > Whenever an EXT3 partition is created, some space is reserved for the
> > super-user. I used mkfs.ext3 with the -m option set to 0, but it has no
> > effect; the space is still reserved. How can I set the reserved space to 0,
> > or calculate the reserved space in advance?
> >
> > Any suggestion/idea?
> >
>
> Please let us know the version of e2fsprogs you are using, and then
> show the commands you used which exhibited this problem.
>
> Thanks,
> -Eric
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Ralf-Lists at ralfgross.de Thu Oct 7 12:44:34 2010
From: Ralf-Lists at ralfgross.de (Ralf Gross)
Date: Thu, 7 Oct 2010 14:44:34 +0200
Subject: file open -> disk full -> save -> file 0 byte
Message-ID: <20101007124434.GP23326@pirx.askja.de>

Hi,

a user had a file open when the disk ran full. He then saved the file,
and now its size is 0 bytes (ext3). I don't know much more about this,
but he asked me if there is any chance to get the data of this file back?

Ralf

From sandeen at redhat.com Thu Oct 7 13:41:21 2010
From: sandeen at redhat.com (Eric Sandeen)
Date: Thu, 07 Oct 2010 08:41:21 -0500
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: <20101007124434.GP23326@pirx.askja.de>
References: <20101007124434.GP23326@pirx.askja.de>
Message-ID: <4CADCE01.809@redhat.com>

Ralf Gross wrote:
> Hi,
>
> a user had a file open when the disk ran full. He then saved the file,
> and now its size is 0 bytes (ext3). I don't know much more about this,
> but he asked me if there is any chance to get the data of this file back?

I'm not sure how that happens; writes to the file should have hit ENOSPC,
and ext3 doesn't even have delalloc to worry about. Did the application
check the write return value? (or maybe it was mmap writes, and since
ext3 has no page_mkwrite, it'd just get lost, unfortunately...)
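Eric's point about checking the return value of write(2) can be demonstrated without filling a real disk: on Linux, /dev/full behaves like a device that is always out of space, so every write to it fails with ENOSPC. A small sketch (the "document being saved" scenario is hypothetical):

```python
import errno
import os

# /dev/full reports ENOSPC on every write, exactly what a full
# ext3 filesystem reports to the application.
fd = os.open("/dev/full", os.O_WRONLY)
try:
    os.write(fd, b"the document being saved")
except OSError as e:
    # An editor that ignores this error, after having already
    # truncated the old file, is how you end up with a 0-byte file.
    print("write failed:", errno.errorcode[e.errno])
finally:
    os.close(fd)
```

Run on Linux, this prints `write failed: ENOSPC`; an application that checks for this can keep the old file contents instead of losing them.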
-Eric

> Ralf

From Ralf-Lists at ralfgross.de Thu Oct 7 13:52:09 2010
From: Ralf-Lists at ralfgross.de (Ralf Gross)
Date: Thu, 7 Oct 2010 15:52:09 +0200
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: <20101007124434.GP23326@pirx.askja.de>
References: <20101007124434.GP23326@pirx.askja.de>
Message-ID: <20101007135209.GQ23326@pirx.askja.de>

Ralf Gross wrote:
> Hi,
>
> a user had a file open when the disk ran full. He then saved the file,
> and now its size is 0 bytes (ext3). I don't know much more about this,
> but he asked me if there is any chance to get the data of this file back?

ext3grep /dev/sda6 --restore-file path/to/file

restored only the 0-byte version, but I found something with ext3grep.
The user remembered that the string "static void Associate_cluster" is
part of the file.

~ # ext3grep /dev/sda6 --search "static void Associate_cluster"
Running ext3grep version 0.10.1
Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291
Blocks containing "static void Associate_cluster": 325515 (allocated) 904535 915577 1428545

I can get some further output with 'ext3grep /dev/sda6 --block 325515'

~ # ext3grep /dev/sda6 --block 325515
Running ext3grep version 0.10.1
No --ls used; implying --print.
Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291
Hex dump of block 325515:
0000 | 61 6e 65 4f 66 66 73 65 74 3b 0a 20 20 20 20 73 | aneOffset;.    s
0010 | 70 75 72 5f 70 6f 6c 79 5f 6d 65 73 73 2e 63 30 | pur_poly_mess.c0
[....]
0fd0 | 5f 48 6f 73 74 49 66 5f 74 20 2a 68 6f 73 74 49 | _HostIf_t *hostI
0fe0 | 66 2c 20 64 6f 75 62 6c 65 20 2a 56 61 6c 75 65 | f, double *Value
0ff0 | 4c 69 73 74 2c 20 69 6e 74 20 2a 56 61 6c 75 65 | List, int *Value

~ # ext3grep /dev/sda6 --search-inode 325515
Running ext3grep version 0.10.1
Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291
Inodes refering to block 325515: 145601

~ # ext3grep /dev/sda6 --inode 145601
Running ext3grep version 0.10.1
No --ls used; implying --print.
Number of groups: 53
Minimum / maximum journal block: 932 / 34660
Loading journal descriptors... sorting... done
The oldest inode block that is still in the journal, appears to be from 1286405586 = Thu Oct 7 00:53:06 2010
Number of descriptors in journal: 24920; min / max sequence numbers: 63706 / 72291
Hex dump of inode 145601:
0000 | ed 81 e8 03 2b ae 02 00 61 69 9b 4c ee c2 ad 4c | ....+...ai.L...L
0010 | 0e a8 7e 49 00 00 00 00 e8 03 01 00 60 01 00 00 | ..~I........`...
0020 | 00 00 00 00 00 00 00 00 77 f7 04 00 78 f7 04 00 | ........w...x...
0030 | 79 f7 04 00 7a f7 04 00 7b f7 04 00 7c f7 04 00 | y...z...{...|...
0040 | 7d f7 04 00 7e f7 04 00 7f f7 04 00 80 f7 04 00 | }...~...........
0050 | 81 f7 04 00 82 f7 04 00 83 f7 04 00 00 00 00 00 | ................
0060 | 00 00 00 00 f2 97 92 a7 00 00 00 00 00 00 00 00 | ................
0070 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................

Inode is Allocated
Group: 9
Generation Id: 2811402226
uid / gid: 1000 / 1000
mode: rrwxr-xr-x
size: 175659
num of links: 1
sectors: 352 (--> 1 indirect block).
Inode Times:
Accessed:       1285253473 = Thu Sep 23 16:51:13 2010
File Modified:  1286456046 = Thu Oct  7 14:54:06 2010
Inode Modified: 1233037326 = Tue Jan 27 07:22:06 2009
Deletion time: 0
Direct Blocks: 325495 325496 325497 325498 325499 325500 325501 325502 325503 325504 325505 325506
Indirect Block: 325507

So I know that there is something left of the file, but I don't know how
to get it back.

Ralf

From bothie at gmx.de Fri Oct 8 13:10:41 2010
From: bothie at gmx.de (Bodo Thiesen)
Date: Fri, 8 Oct 2010 15:10:41 +0200
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: <20101007135209.GQ23326@pirx.askja.de>
References: <20101007124434.GP23326@pirx.askja.de> <20101007135209.GQ23326@pirx.askja.de>
Message-ID: <20101008151041.6a5c7b04@gmx.de>

* Ralf Gross wrote:

> ~ # ext3grep /dev/sda6 --inode 145601
> size: 175659
> sectors: 352 (--> 1 indirect block).
> Direct Blocks: 325495 325496 325497 325498 325499 325500 325501 325502 325503 325504 325505 325506
> Indirect Block: 325507
>
> So I know that there is something left of the file, but I don't know how
> to get it back.

*** WARNING *** The following code snippet is meant to explain what you
could do. Please don't stop using your brain. ;)

*** BEGIN SNIPPET ***

#! /bin/sh

DEV=/dev/sda6
BS=4096
# This may be 2048 or 1024 - whatever cluster size your ext2
# file system uses

# Recover the first 12 clusters (the direct clusters)
dd if=$DEV bs=$BS of=/ramfs/restored.data skip=325495 count=12

# Get the indirect cluster
dd if=$DEV bs=$BS of=/ramfs/restored.ind skip=325507 count=1

# And dump its content decimally ...
hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind
# you should get an output like
# 325508 325509 325510 325511
# 325512 [...]
# Check that the numbers are one bigger than the previous ones.

# Recover the following parts of the file (assuming that the first
# number is 325508 and that there are 5 contiguous numbers).
# seek=12 is used because 12 clusters have already been recovered above.
dd if=$DEV bs=$BS of=/ramfs/restored.data skip=325508 seek=12 count=5

# If there is a jump in the numbers printed by hexdump, continue with
# the next cluster chain (17 = 12 + 5 - it's just the sum of clusters
# already written to the file):
dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$whatever_number_comes_now seek=17 count=$length_of_chain

# Repeat the last step until you are done.

*** END SNIPPET ***

After you are done, check the file and then copy it over to the file
system so your user can continue to work on it again. And tell that user
that he should stop using the application he was using altogether.
Overwriting a file with updated content has not been state of the art for
at least two decades. The old file content has to be saved in a backup
file first, or the old file could just be renamed. Every piece of software
I use does it one way or the other. That way your user wouldn't have had
this problem in the first place (just take the backup file and throw away
the last 20 minutes of work - recovery takes longer anyway ...).
Alternatively: think about a proper daily (or even hourly) backup plan.

Regards, Bodo

From samuel at bcgreen.com Fri Oct 8 20:34:19 2010
From: samuel at bcgreen.com (Stephen Samuel)
Date: Fri, 8 Oct 2010 13:34:19 -0700
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: <20101008151041.6a5c7b04@gmx.de>
References: <20101007124434.GP23326@pirx.askja.de> <20101007135209.GQ23326@pirx.askja.de> <20101008151041.6a5c7b04@gmx.de>
Message-ID: 

a slightly easier way of going through the indirect block...
recovered=12
for i in `hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind` ; do
    if [[ "$i" -ne 0 ]] ; then
        dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$i seek=$((recovered++)) count=1
    fi
done

However, if the inode in question still exists, then I'd be inclined to
suggest that you mount the filesystem (readonly preferably), and then hunt
for the inode.... let the filesystem do the heavy lifting for you.

find /mount/recovered -inum 145601 -print

or, even better yet:

cp `find /mount/recovered -inum 145601 -print` recovered-file

On Fri, Oct 8, 2010 at 6:10 AM, Bodo Thiesen wrote:

> * Ralf Gross wrote:
>
> > ~ # ext3grep /dev/sda6 --inode 145601
> > size: 175659
> > sectors: 352 (--> 1 indirect block).
> > Direct Blocks: 325495 325496 325497 325498 325499 325500 325501 325502 325503 325504 325505 325506
> > Indirect Block: 325507
> >
> > So I know that there is something left of the file, but I don't know how
> > to get it back.
>
> *** WARNING *** The following code snippet is meant to explain what you
> could do. Please don't stop using your brain. ;)
>
> *** BEGIN SNIPPET ***
>
> #! /bin/sh
>
> DEV=/dev/sda6
> BS=4096
> # This may be 2048 or 1024 - whatever cluster size your ext2
> # file system uses
>
> # Recover the first 12 clusters (the direct clusters)
> dd if=$DEV bs=$BS of=/ramfs/restored.data skip=325495 count=12
>
> # Get the indirect cluster
> dd if=$DEV bs=$BS of=/ramfs/restored.ind skip=325507 count=1
>
> # And dump its content decimally ...
> hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind
> # you should get an output like
> # 325508 325509 325510 325511
> # 325512 [...]
> # Check that the numbers are one bigger than the previous ones.
>
> # Recover the following parts of the file (assuming that the first
> # number is 325508 and that there are 5 contiguous numbers).
> # seek=12 is used because 12 clusters have already been recovered above.
> dd if=$DEV bs=$BS of=/ramfs/restored.data skip=325508 seek=12 count=5
>
> # If there is a jump in the numbers printed by hexdump, continue with
> # the next cluster chain (17 = 12 + 5 - it's just the sum of clusters
> # already written to the file):
> dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$whatever_number_comes_now seek=17 count=$length_of_chain
>
> # Repeat the last step until you are done.
>
> *** END SNIPPET ***
>
> After you are done, check the file and then copy it over to the file
> system so your user can continue to work on it again. And tell that user
> that he should stop using the application he was using altogether.
> Overwriting a file with updated content has not been state of the art for
> at least two decades. The old file content has to be saved in a backup
> file first, or the old file could just be renamed. Every piece of software
> I use does it one way or the other. That way your user wouldn't have had
> this problem in the first place (just take the backup file and throw away
> the last 20 minutes of work - recovery takes longer anyway ...).
> Alternatively: think about a proper daily (or even hourly) backup plan.
>
> Regards, Bodo
>
> _______________________________________________
> Ext3-users mailing list
> Ext3-users at redhat.com
> https://www.redhat.com/mailman/listinfo/ext3-users
>

-- 
Stephen Samuel  http://www.bcgreen.com     Software, like love,
778-861-7641                               grows when you give it away

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bothie at gmx.de Fri Oct 8 22:28:27 2010
From: bothie at gmx.de (Bodo Thiesen)
Date: Sat, 9 Oct 2010 00:28:27 +0200
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: 
References: <20101007124434.GP23326@pirx.askja.de> <20101007135209.GQ23326@pirx.askja.de> <20101008151041.6a5c7b04@gmx.de>
Message-ID: <20101009002827.0488c58b@gmx.de>

* Stephen Samuel wrote:

> a slightly easier way of going through the indirect block...
> recovered=12
> for i in `hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind` ; do
>     if [[ "$i" -ne 0 ]] ; then
>         dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$i seek=$((recovered++)) count=1
>     fi
> done

;)

> However, if the inode in question still exists,

No it doesn't. Ralf used a tool called ext3grep, which greps through the
journal to find old versions of the data in question.

> then I'd be inclined to suggest that you mount the filesystem
> (readonly preferably),

As to my knowledge, it is still impossible to mount an ext2 file system
with the needs_recovery flag read only with the ext3 driver and because
that flag is wrongly made "incompatible", it's even impossible to mount
it with the ext2 driver.

From Ralf-Lists at ralfgross.de Mon Oct 18 09:22:07 2010
From: Ralf-Lists at ralfgross.de (Ralf Gross)
Date: Mon, 18 Oct 2010 11:22:07 +0200
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: <20101009002827.0488c58b@gmx.de>
References: <20101007124434.GP23326@pirx.askja.de> <20101007135209.GQ23326@pirx.askja.de> <20101008151041.6a5c7b04@gmx.de> <20101009002827.0488c58b@gmx.de>
Message-ID: <20101018092206.GA7054@pirx.askja.de>

Bodo Thiesen wrote:
> * Stephen Samuel wrote:
>
> > a slightly easier way of going through the indirect block...
> > recovered=12
> > for i in `hexdump -e '4/4 "%10i " "\n"' /ramfs/restored.ind` ; do
> >     if [[ "$i" -ne 0 ]] ; then
> >         dd if=$DEV bs=$BS of=/ramfs/restored.data skip=$i seek=$((recovered++)) count=1
> >     fi
> > done
>
> ;)
>
> > However, if the inode in question still exists,
>
> No it doesn't. Ralf used a tool called ext3grep, which greps through the
> journal to find old versions of the data in question.
>
> > then I'd be inclined to suggest that you mount the filesystem
> > (readonly preferably),
>
> As to my knowledge, it is still impossible to mount an ext2 file system
> with the needs_recovery flag read only with the ext3 driver and because
> that flag is wrongly made "incompatible", it's even impossible to mount
> it with the ext2 driver.
> Please do NEVER AGAIN suggest to anyone to mount -o ro an ext2
> filesystem having a journal if he has troubles with that file system.

Thank you both for your suggestions. The disk with the filesystem is not
within reach anymore, so I can't try that. But I now know what to do next
time :)

Ralf

From adilger.kernel at dilger.ca Tue Oct 19 05:28:55 2010
From: adilger.kernel at dilger.ca (Andreas Dilger)
Date: Mon, 18 Oct 2010 23:28:55 -0600
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: <20101018092206.GA7054@pirx.askja.de>
References: <20101007124434.GP23326@pirx.askja.de> <20101007135209.GQ23326@pirx.askja.de> <20101008151041.6a5c7b04@gmx.de> <20101009002827.0488c58b@gmx.de> <20101018092206.GA7054@pirx.askja.de>
Message-ID: <5B9890C7-8DD1-4E08-BEA5-978758C7CAC1@dilger.ca>

On 2010-10-18, at 03:22, Ralf Gross wrote:
> Bodo Thiesen wrote:
>> As to my knowledge, it is still impossible to mount an ext2 file system
>> with the needs_recovery flag read only with the ext3 driver and because
>> that flag is wrongly made "incompatible", it's even impossible to mount
>> it with the ext2 driver.

Note that the needs_recovery flag was INTENTIONALLY made incompatible, not
"wrongly" so. That is because, with metadata being written into the
journal, there is no guarantee that the filesystem is even consistent when
mounted without journal replay. Metadata blocks can be reallocated as
data blocks and overwritten by data, based only on changes committed to
the journal, and this could result in errors.

Cheers, Andreas

From bothie at gmx.de Wed Oct 20 02:01:43 2010
From: bothie at gmx.de (Bodo Thiesen)
Date: Wed, 20 Oct 2010 04:01:43 +0200
Subject: file open -> disk full -> save -> file 0 byte
In-Reply-To: <5B9890C7-8DD1-4E08-BEA5-978758C7CAC1@dilger.ca>
References: <20101007124434.GP23326@pirx.askja.de> <20101007135209.GQ23326@pirx.askja.de> <20101008151041.6a5c7b04@gmx.de> <20101009002827.0488c58b@gmx.de> <20101018092206.GA7054@pirx.askja.de> <5B9890C7-8DD1-4E08-BEA5-978758C7CAC1@dilger.ca>
Message-ID: <20101020040143.167caecb@gmx.de>

* Andreas Dilger wrote:
>> Bodo Thiesen wrote:
>>> As to my knowledge, it is still impossible to mount an ext2 file system
>>> with the needs_recovery flag read only with the ext3 driver and because
>>> that flag is wrongly made "incompatible", it's even impossible to mount
>>> it with the ext2 driver.
> Note that the needs_recovery flag was INTENTIONALLY made incompatible, not
> "wrongly" so. That is because, with metadata being written into the
> journal, there is no guarantee that the filesystem is even consistent when
> mounted without journal replay. Metadata blocks can be reallocated as
> data blocks and overwritten by data, based only on changes committed to
> the journal, and this could result in errors.

Right ... except ... what is the difference between an ext2 filesystem
without a journal which was not cleanly unmounted, and one with a journal
which was not cleanly unmounted (except for the fact that the latter can
be made consistent by replaying the journal in a few seconds)? Especially:
why would it make a difference when mounting it -o ro -t ext2? Making
errors intentionally is not really an excuse for doing so.
Regards, Bodo

From alex at alex.org.uk Sun Oct 31 10:12:41 2010
From: alex at alex.org.uk (Alex Bligh)
Date: Sun, 31 Oct 2010 11:12:41 +0100
Subject: How to generate a large file allocating space
Message-ID: <9A62FED22DF5F54862C68579@nimrod.local>

I want to generate or extend a large file in an ext4 filesystem, allocating
space (i.e. not creating a sparse file) but not actually writing any data.
I realise that this will result in the file containing the contents of
whatever was there on the disk before, which is a possible security problem
in some circumstances, but it isn't a problem here.

Ideally what I'd like is a "make unsparse" bit of code. I'm happy for this
to use the libraries, and to work on an unmounted fs (indeed that is
probably better).

Supplementary question: can I assume that if a non-sparse file is on disk
and never opened, and never unlinked, then the sectors used to store that
file's data will never change, irrespective of other operations on the ext4
filesystem? I.e. nothing is shuffling where ext4 files are stored.

--
Alex Bligh

From bruno at wolff.to Sun Oct 31 15:23:51 2010
From: bruno at wolff.to (Bruno Wolff III)
Date: Sun, 31 Oct 2010 10:23:51 -0500
Subject: How to generate a large file allocating space
In-Reply-To: <9A62FED22DF5F54862C68579@nimrod.local>
References: <9A62FED22DF5F54862C68579@nimrod.local>
Message-ID: <20101031152351.GA20833@wolff.to>

On Sun, Oct 31, 2010 at 11:12:41 +0100, Alex Bligh wrote:
> I want to generate or extend a large file in an ext4 filesystem, allocating
> space (i.e. not creating a sparse file) but not actually writing any data.
> I realise that this will result in the file containing the contents of
> whatever was there on the disk before, which is a possible security problem
> in some circumstances, but it isn't a problem here.

There isn't going to be a way to do that through the file system, because,
as you note, it is a security problem.

What is the high-level thing you are trying to accomplish here? Modifying
the filesystem offline seems risky, and maybe there is a safer way to
accomplish your goals.

> Supplementary question: can I assume that if a non-sparse file is on disk
> and never opened, and never unlinked, then the sectors used to store that
> file's data will never change, irrespective of other operations on the
> ext4 filesystem? I.e. nothing is shuffling where ext4 files are stored.

I think SSDs will move stuff around at a very low level. They would look
like they are at the same place to anything accessing the device as a disk,
but physically the data would be stored in a different hardware location.

With normal disks, you'd only see this if the device got a read error, but
was able to successfully read a marginal sector and remap it to a spare
sector. But again, software talking to the disk will see it at the same
address.

From alex at alex.org.uk Sun Oct 31 15:05:49 2010
From: alex at alex.org.uk (Alex Bligh)
Date: Sun, 31 Oct 2010 16:05:49 +0100
Subject: How to generate a large file allocating space
In-Reply-To: <20101031152351.GA20833@wolff.to>
References: <9A62FED22DF5F54862C68579@nimrod.local> <20101031152351.GA20833@wolff.to>
Message-ID: <2A382F5D94CB78493D1760C9@Ximines.local>

--On 31 October 2010 10:23:51 -0500 Bruno Wolff III wrote:
> On Sun, Oct 31, 2010 at 11:12:41 +0100, Alex Bligh wrote:
>> I want to generate or extend a large file in an ext4 filesystem,
>> allocating space (i.e. not creating a sparse file) but not actually
>> writing any data. I realise that this will result in the file containing
>> the contents of whatever was there on the disk before, which is a
>> possible security problem in some circumstances, but it isn't a problem
>> here.
>
> There isn't going to be a way to do that through the file system, because,
> as you note, it is a security problem.
>
> What is the high-level thing you are trying to accomplish here? Modifying
> the filesystem offline seems risky, and maybe there is a safer way to
> accomplish your goals.
I am trying to allocate huge files on ext4. I will then read the extents
within the file and write to the disk at a block level rather than using
ext4 (the FS will not be mounted at this point). This will allow me to have
several iSCSI clients hitting the same LUN r/w safely. And at some point,
when I know the relevant iSCSI stuff has stopped and been flushed to disk,
I may unlink the file.

As I have total control of what's on the disk, I don't really care if
previous content is exposed. If I write many gigabytes of zeroes, that's
going to take a long time and be totally unnecessary, since I already have
my own internal map of the data I will write into these huge files. Yes, I
know this is deep scary voodoo, but that's ok.

I can get the extent list the same way as "filefrag -v" gets it. What I
can't currently work out (using either the library, or doing it with the
volume mounted) is how to extend a file AND allocate the extents (as
opposed to doing it sparse).

>> Supplementary question: can I assume that if a non-sparse file is on disk
>> and never opened, and never unlinked, then the sectors used to store that
>> file's data will never change, irrespective of other operations on the
>> ext4 filesystem? I.e. nothing is shuffling where ext4 files are stored.
>
> I think SSDs will move stuff around at a very low level. They would look
> like they are at the same place to anything accessing the device as a
> disk, but physically the data would be stored in a different hardware
> location.
>
> With normal disks, you'd only see this if the device got a read error, but
> was able to successfully read a marginal sector and remap it to a spare
> sector. But again, software talking to the disk will see it at the same
> address.

Sure, that's no problem, because the offset into the block device stays the
same even if physically the file is in a different place. So the extent
list will stay the same for the file.

--
Alex Bligh

From mnalis-ml at voyager.hr Sun Oct 31 16:19:49 2010
From: mnalis-ml at voyager.hr (Matija Nalis)
Date: Sun, 31 Oct 2010 17:19:49 +0100
Subject: How to generate a large file allocating space
In-Reply-To: <9A62FED22DF5F54862C68579@nimrod.local>
References: <9A62FED22DF5F54862C68579@nimrod.local>
Message-ID: <20101031161949.GA3651@eagle102.home.lan>

On Sun, Oct 31, 2010 at 11:12:41AM +0100, Alex Bligh wrote:
> I want to generate or extend a large file in an ext4 filesystem, allocating
> space (i.e. not creating a sparse file) but not actually writing any data.

Well, some metadata will have to be written, but no data. Shouldn't
posix_fallocate(3) and/or fallocate(2) do that?

I haven't got ext4 around ATM, but IIRC it should work on it too. On XFS it
seems to work:

# time fallocate -l 3000000000 /stuff/tmp/bla
fallocate -l 3000000000 /stuff/tmp/bla  0,00s user 0,00s system 0% cpu 0,402 total
# du -h /stuff/tmp/bla
2,8G    /stuff/tmp/bla
# du -bh /stuff/tmp/bla
2,8G    /stuff/tmp/bla
# rm -f /stuff/tmp/bla

fallocate(1) is from util-linux on my Debian Squeeze.

Compare that to the dramatically slower dd(1), which fills the file with
zeros explicitly:

# time dd if=/dev/zero of=/stuff/tmp/bla count=30000 bs=100000
30000+0 records in
30000+0 records out
3000000000 bytes (3,0 GB) copied, 31,2581 s, 96,0 MB/s
dd if=/dev/zero of=/stuff/tmp/bla count=30000 bs=100000  0,00s user 3,41s system 10% cpu 31,341 total
# du -h /stuff/tmp/bla
2,8G    /stuff/tmp/bla

--
Opinions above are GNU-copylefted.
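The same allocation Matija demonstrates with fallocate(1) is reachable programmatically. A minimal sketch using os.posix_fallocate, Python's wrapper for posix_fallocate(3) (note the glibc wrapper falls back to writing zeros on filesystems without native fallocate support, so the speed benefit is filesystem-dependent):

```python
import os
import tempfile

# Allocate 64 MiB without writing the data ourselves.  On ext4/XFS this
# reserves extents via the fallocate path instead of writing zeros.
size = 64 * 1024 * 1024
with tempfile.NamedTemporaryFile() as f:
    os.posix_fallocate(f.fileno(), 0, size)
    st = os.fstat(f.fileno())
    print("size:", st.st_size)                 # the full 64 MiB
    print("blocks allocated:", st.st_blocks)   # nonzero, unlike a sparse file
```

A file merely extended with truncate()/ftruncate() would show st_blocks near zero; after posix_fallocate, the blocks are actually reserved on disk.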
From alex at alex.org.uk Sun Oct 31 15:34:44 2010
From: alex at alex.org.uk (Alex Bligh)
Date: Sun, 31 Oct 2010 16:34:44 +0100
Subject: How to generate a large file allocating space
In-Reply-To: <20101031161949.GA3651@eagle102.home.lan>
References: <9A62FED22DF5F54862C68579@nimrod.local> <20101031161949.GA3651@eagle102.home.lan>
Message-ID: <97460430932965B63FE5AC45@Ximines.local>

Matija,

--On 31 October 2010 17:19:49 +0100 Matija Nalis wrote:
> Well, some metadata will have to be written, but not the data.
> Shouldn't posix_fallocate(3) and/or fallocate(2) do that?
>
> I haven't got ext4 around ATM, but IIRC it should work there too.
> On XFS it seems to work:

That's /almost/ perfect:

$ fallocate -l 1073741824 testfile
$ filefrag -v testfile
Filesystem type is: ef53
File size of testfile is 1073741824 (262144 blocks, blocksize 4096)
 ext logical  physical  expected  length flags
   0       0  14819328             30720 unwritten
   1   30720  14850048             30720 unwritten
   2   61440  14880768             30720 unwritten
   3   92160  14911488             30720 unwritten
   4  122880  14942208              2048 unwritten
   5  124928  14946304  14944255   30720 unwritten
   6  155648  14977024             30720 unwritten
   7  186368  15007744             30720 unwritten
   8  217088  15038464             30720 unwritten
   9  247808  15069184             14336 unwritten,eof
testfile: 2 extents found

I think all I need do is clear the unwritten flag in each of the extents.
Otherwise, I think that if I read the file through ext4 later (i.e. after
I've written directly to the sectors concerned) it will appear to be empty.

Any idea how I do that?
-- Alex Bligh

From mnalis-ml at voyager.hr Sun Oct 31 18:46:09 2010
From: mnalis-ml at voyager.hr (Matija Nalis)
Date: Sun, 31 Oct 2010 19:46:09 +0100
Subject: How to generate a large file allocating space
In-Reply-To: <97460430932965B63FE5AC45@Ximines.local>
References: <9A62FED22DF5F54862C68579@nimrod.local> <20101031161949.GA3651@eagle102.home.lan> <97460430932965B63FE5AC45@Ximines.local>
Message-ID: <20101031184609.GA5712@eagle102.home.lan>

On Sun, Oct 31, 2010 at 04:34:44PM +0100, Alex Bligh wrote:
> That's /almost/ perfect:
>
>    9  247808  15069184             14336 unwritten,eof
> testfile: 2 extents found
>
> I think all I need do is clear the unwritten flag in each of the extents.
> Otherwise, I think that if I read the file through ext4 later (i.e. after
> I've written directly to the sectors concerned) it will appear to be empty.

Yes, it would appear empty. That is due to the security concerns others
have mentioned too.

> Any idea how I do that?

Sorry, I don't. debugfs(8) only appears to have read-only support for
reading extents, not for (re-)writing them, so I guess you'll have to
find a suitable function in libext2fs if one exists (or write your own
if it doesn't) to use on an unmounted fs.

-- Opinions above are GNU-copylefted.

From alex at alex.org.uk Sun Oct 31 18:09:26 2010
From: alex at alex.org.uk (Alex Bligh)
Date: Sun, 31 Oct 2010 19:09:26 +0100
Subject: How to generate a large file allocating space
In-Reply-To: <20101031184609.GA5712@eagle102.home.lan>
References: <9A62FED22DF5F54862C68579@nimrod.local> <20101031161949.GA3651@eagle102.home.lan> <97460430932965B63FE5AC45@Ximines.local> <20101031184609.GA5712@eagle102.home.lan>
Message-ID:

--On 31 October 2010 19:46:09 +0100 Matija Nalis wrote:
> Sorry, I don't. debugfs(8) only appears to have read-only support for
> reading extents, not for (re-)writing them, so I guess you'll have to
> find a suitable function in libext2fs if one exists (or write your own
> if it doesn't) to use on an unmounted fs.

Yes. I need to iterate through the extents.
debugfs does that, but I don't know how to change the flag or (more
relevantly) whether it is safe to do so.

-- Alex Bligh