Software Raid0 problem with 5TB
Max Kipness
max at kipness.com
Wed Mar 10 18:13:34 UTC 2004
Hello,
I've got a Fedora Core 1 box with a 2.6.3 kernel and three 3ware 8-port cards, each with 8 250 GB drives in a hardware RAID 5 container. My goal was to use software RAID 0 to create one volume out of the three containers. So far the volume is not showing the correct total. Here are the steps I've taken.
Oh, my boot drives are two 80 GB drives using software RAID 1, which is working fine and which I don't think has anything to do with the issues I'm having.
1) Created partitions with fdisk and changed the partition ID to fd on all three containers. fdisk -l gives the following results, which appear to be correct:
Disk /dev/sda: 1756.9 GB, 1756994011136 bytes
255 heads, 63 sectors/track, 213609 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      213609  1715814261   fd  Linux raid autodetect

Disk /dev/sdb: 1756.9 GB, 1756994011136 bytes
255 heads, 63 sectors/track, 213609 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      213609  1715814261   fd  Linux raid autodetect

Disk /dev/sdc: 1756.9 GB, 1756994011136 bytes
255 heads, 63 sectors/track, 213609 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
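As a sanity check, the block count fdisk reports follows from the geometry it prints (a sketch; it assumes the partition starts at sector 63, the DOS default, and ends on the last cylinder):

```python
# Geometry as printed by fdisk for each 3ware container.
heads, sectors_per_track, cylinders = 255, 63, 213609
sector_size = 512  # bytes

sectors_per_cylinder = heads * sectors_per_track   # 16065, as fdisk prints
total_sectors = sectors_per_cylinder * cylinders
# A DOS-style first partition starts at sector 63 (skipping track 0).
partition_sectors = total_sectors - 63
blocks = partition_sectors * sector_size // 1024   # fdisk "Blocks" are 1 KiB

print(blocks)  # 1715814261, matching the fdisk -l output
```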
2) Configured /etc/raidtab as follows and then ran mkraid /dev/md1, which completed successfully.
raiddev /dev/md1
        raid-level              0
        nr-raid-disks           3
        chunk-size              32
        persistent-superblock   1
        nr-spare-disks          0
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
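For reference, the capacity this raidtab describes can be worked out directly (a sketch; it ignores the small amount md reserves at the end of each component for the persistent superblock):

```python
# Per-component size in 1 KiB blocks, from the fdisk -l output above.
component_kib = 1715814261
chunk_kib = 32      # chunk-size from /etc/raidtab
nr_disks = 3

# RAID 0 rounds each component down to a whole number of chunks.
usable_per_disk = (component_kib // chunk_kib) * chunk_kib
total_kib = usable_per_disk * nr_disks

print(total_kib)                    # 5147442720 KiB
print(round(total_kib / 2**30, 2))  # ~4.79 TiB, about 5.27 TB decimal
```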
3) Made an ext3 filesystem on md1 and mounted it successfully.
But when doing a df -h, I get the following bizarre results:
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               75G  1.4G   69G   2% /
none                 1008M     0 1008M   0% /dev/shm
/dev/md1              801G   33M  760G   1% /data
I have no idea where it is getting 801 GB instead of something like 5.1
TB.
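One bit of arithmetic that might be relevant (purely a guess: it assumes some size counter in the md layer holds the array size as a 32-bit number of KiB, which I have not verified): the expected three-disk total, taken modulo 2^32 KiB, lands close to what df reports.

```python
# Expected RAID 0 total from the three 1.7 TB components (1 KiB blocks).
expected_kib = 3 * 1715814261          # 5147442783 KiB, ~4.79 TiB

# Hypothetical: a 32-bit counter of KiB values would wrap at 4 TiB.
wrapped_kib = expected_kib % 2**32     # 852475487 KiB

print(round(wrapped_kib / 2**20))      # ~813 GiB
```

813 GiB is close to the 801G that df shows; the remaining gap is roughly what ext3's metadata overhead would account for.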
To experiment, I used only two of the 1.7 TB partitions, and this seemed to work fine. Here are the results of df -h:
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0               75G  1.9G   69G   3% /
none                 1013M     0 1013M   0% /dev/shm
/dev/md1              3.2T   33M  3.0T   1% /data
This tells me that large block device support is working. But why won't it accept the third 1.7 TB partition? Am I hitting a limit in Fedora? In raidtools?
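The two-partition case checks out against the fdisk numbers (a sketch; the 2^32 KiB = 4 TiB comparison only matters if some counter is 32 bits wide, which is just a guess on my part):

```python
# Two 1.7 TB components instead of three (1 KiB blocks, from fdisk -l).
two_disk_kib = 2 * 1715814261           # 3431628522 KiB

print(round(two_disk_kib / 2**30, 2))   # ~3.2 TiB, matching the 3.2T df shows
print(two_disk_kib < 2**32)             # True: below 4 TiB, so no 32-bit wrap
```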
Any help would be much appreciated.
Max