Fw: Re: Fw: medley raid problem

Richard Powell rspowell at gmail.com
Thu Nov 3 22:31:47 UTC 2005


Hi again everyone,

Could someone let me know exactly how I go about getting the metadata off
my drive?

I finally managed to create my own customized amd64 dmraid (rc9) LiveCD to
allow me to extract data from my RAID; now I just need to know what commands
I need to issue to get the metadata so I can send it to Heinz.
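
For what it's worth, here is a minimal sketch of pulling the raw metadata with dd, on the assumption (as dmraid's sil.c suggests) that the Silicon Image metadata sits in the last sector of the disk; the sector count is an example taken from the dmesg output later in this thread, and /dev/hde is just a placeholder device:

```shell
# Example sector count as reported by the kernel for the drive
SECTORS=6835952
# Byte offset of the final 512-byte sector, where sil metadata is expected
OFFSET=$(( (SECTORS - 1) * 512 ))
echo "metadata offset: $OFFSET bytes"
# Read-only dump of that sector into a file to mail in (run as root):
# dd if=/dev/hde of=hde.dat bs=512 skip=$((SECTORS - 1)) count=1
```

If dmraid's discovery already recognises the set, `dmraid -rD` may dump the native metadata into .dat files for you directly.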

By the way, I'm using a Syba fakeraid card with the SiI 0680 chipset onboard
and two Seagate 120 GB, 8 MB cache, 7200 rpm drives. For this reason I tend to
agree with Molle's statement that the sil detection fails on some newer
drives as well.

Glad to see work is being done on sil RAID support; I was really starting to
worry that I would never get it to work!

Kind Regards,
--
Richard Powell (richard at povs.us)

On 11/3/05, Heinz Mauelshagen <mauelshagen at redhat.com> wrote:
>
>
> James,
>
> it works, but it breaks my design goal of avoiding hardware ties in the
> dmraid code :(
>
> Can you send me your metadata dd'ed into files hd[efg].dat, so that I can
> try to figure out another approach that avoids PCI* dependencies?
>
> Thanks,
> Heinz
>
> On Sun, Oct 30, 2005 at 01:15:58AM -0500, James Olson wrote:
> > Hi Molle,
> > Thanks for your help. Anyway, the HPA message turned out to be a red
> > herring. The problem was actually in the detection of the magic number in
> > the metadata. In sil.c the magic number is defined as:
> > #define SIL_MAGIC 0x2F000000
> > and on my 3 drives I read:
> > magic: hdf 0x0F000000 disk_number: 2
> > magic: hdg 0x2F000000 disk_number: 1
> > magic: hde 0x0B000000 disk_number: 0
> >
> > so it is finding the metadata but the routine:
> > return sil->magic == SIL_MAGIC && sil->disk_number < 8;
> > fails for hdf and hde.
> >
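A quick standalone check of the failure, using the magic values quoted above and the SIL_MAGIC constant from sil.c:

```shell
# SIL_MAGIC as defined in sil.c
SIL_MAGIC=0x2F000000
PASSES=0
# Magic values read from hdf, hdg and hde respectively (quoted above)
for m in 0x0F000000 0x2F000000 0x0B000000; do
    if [ "$((m))" -eq "$((SIL_MAGIC))" ]; then
        echo "$m matches SIL_MAGIC"
        PASSES=$((PASSES + 1))
    else
        echo "$m does NOT match SIL_MAGIC"
    fi
done
echo "$PASSES of 3 drives pass"
```

Only hdg's 0x2F000000 passes, which matches the reported behaviour.
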
> > I wasn't sure how to fix the magic number detection, so I wrote a patch
> > to use the PCI vendor and product IDs instead, as the working ataraid and
> > medley 2.4 kernel modules do, and it works:
> >
> > --- dmraid.orig/dmraid/1.0.0.rc9/lib/format/ataraid/sil.c 2005-09-22 06:09:02.000000000 -0700
> > +++ dmraid/1.0.0.rc9/lib/format/ataraid/sil.c 2005-10-29 22:50:32.000000000 -0700
> > @@ -128,7 +128,11 @@
> >
> > static int is_sil(struct sil *sil)
> > {
> > - return sil->magic == SIL_MAGIC && sil->disk_number < 8;
> > +#define SIL_680_VENDOR_ID 0x1095
> > +#define SIL_680_PRODUCT_ID 0x0680
> > + if ((sil->vendor_id == SIL_680_VENDOR_ID) && (sil->product_id == SIL_680_PRODUCT_ID))
> > + return 1;
> > + else return 0;
> > }
> >
> > static int sil_valid(struct lib_context *lc, struct dev_info *di,
> >
> > ----- Original Message -----
> > From: "Molle Bestefich" <molle.bestefich at gmail.com>
> > To: "ATARAID (eg, Promise Fasttrak, Highpoint 370) related discussions" <ataraid-list at redhat.com>
> > Subject: Re: Fw: Re: Fw: medley raid problem
> > Date: Sat, 29 Oct 2005 22:54:28 +0200
> >
> > >
> > > James Olson wrote:
> > > > 2.4 kernel dmesg log:
> > > > hde: 6835952 sectors (3500 MB), CHS=6781/16/63, UDMA(33)
> > > > hdf: 6346368 sectors (3249 MB) w/96KiB Cache, CHS=6296/16/63, DMA
> > > > hdg: host protected area => 1
> > > > hdg: 6346368 sectors (3249 MB) w/256KiB Cache, CHS=6296/16/63, UDMA(33)
> > >
> > > Too bad it doesn't tell you how large the HPA is, otherwise you could
> > > subtract that from the total size and look for metadata at the
> > > resulting sector...
> > >
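That subtraction is simple enough to sketch; HPA_SECTORS below is a made-up value purely for illustration, and the total is hdg's size from the dmesg above:

```shell
# Total size the kernel reports for hdg (sectors)
TOTAL_SECTORS=6346368
# Hypothetical HPA size -- made up for illustration
HPA_SECTORS=1024
# Metadata would be expected at the end of the region below the HPA
META_SECTOR=$(( TOTAL_SECTORS - HPA_SECTORS - 1 ))
echo "look for metadata around sector $META_SECTOR"
```
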
> > > > Here is the 2.6 log:
> > > > hde: FUJITSU MPA3035ATU, ATA DISK drive
> > > > hdf: IBM-DAQA-33240, ATA DISK drive
> > > > hdg: WDC AC23200L, ATA DISK drive
> > > > hde: 6835952 sectors (3500 MB), CHS=6781/16/63, UDMA(33)
> > > > hdf: 6346368 sectors (3249 MB) w/96KiB Cache, CHS=6296/16/63, DMA
> > > > hdg: 6346368 sectors (3249 MB) w/256KiB Cache, CHS=6296/16/63, UDMA(33)
> > >
> > > I've seen 2.6 versions that do report HPA areas, so I find it suspicious
> > > that yours doesn't.
> > > Perhaps it's a bug (but it really does disable the HPA), perhaps there
> > > really is no HPA, or perhaps the information has been intentionally
> > > removed from newer 2.6 kernels.
> > >
> > > Could you compare the number of sectors that Linux tells you above to
> > > the number reported by your BIOS?
> > >
> > > Just to get you up to speed on HPA:
> > > There was some discussion on linux-kernel, and the powers that be
> > > (Alan Cox) do not want to change the kernel's current faulty
> > > behaviour of automatically disabling the HPA, since that would break
> > > things for the few people who have partitions spanning the HPA if
> > > they upgrade the kernel and do not manually enable the "HPA disable"
> > > option. Most of those people supposedly have BIOSes where they can
> > > disable the HPA the proper way (the ThinkPad users do, at least), but
> > > that argument didn't seem to stick.
> > >
> > > The only way forward right now that would pass the Cox barrier is a
> > > patch that changes all the partition detection code to:
> > > * Know about HPA/non-HPA size of physical devices.
> > > * Detect partitions that are out-of-bounds of the non-HPA size.
> > > * Call IDE code to disable HPA when such partitions are found.
> > >
> > > Or patches that:
> > > * Change all user code to be aware of HPA versus non-HPA IDE disks.
> > >
> > > Both seem overly complex compared to:
> > > * Fixing the kernel to behave itself and not disable HPA.
> > > * Providing a kernel command-line flag or configuration option for
> > > those who need to disable the HPA manually because they have a horrible
> > > BIOS.
> > >
> > > But that's how things are right now.
> > >
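For completeness, one userspace way to probe for an HPA, assuming a reasonably recent hdparm: `hdparm -N /dev/hdg` prints the visible and native max sector counts, and a difference between them indicates an HPA. A sketch of the arithmetic with made-up numbers:

```shell
# Visible and native max sector counts, e.g. as printed by hdparm -N
# (both values here are made up for illustration)
VISIBLE=6346368
NATIVE=6347000
HPA_SIZE=$(( NATIVE - VISIBLE ))
echo "HPA size: $HPA_SIZE sectors"
```
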
> > > > Also there is some question of the drive geometry differing between
> > > > the two kernels. The hdparm -g outputs differ.
> > >
> > > Maybe 2.4 does not disable the HPA?
> > > 2.6 does (my best guess is that it does so because some IDE developer
> > > had a laptop that didn't allow him to disable the HPA).
> > >
> > > Seems odd though, since the two logs you've given indicate that both
> > > kernels see the same number of sectors. What sector count do you get
> > > if you run fdisk on the two kernels?
> > >
> > > Blah blah. Sorry for the lengthy mail.
> > >
> > > _______________________________________________
> > > Ataraid-list mailing list
> > > Ataraid-list at redhat.com
> > > https://www.redhat.com/mailman/listinfo/ataraid-list
> >
> >
>
>
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>
> Heinz Mauelshagen Red Hat GmbH
> Consulting Development Engineer Am Sonnenhang 11
> Cluster and Storage Development 56242 Marienrachdorf
> Germany
> Mauelshagen at RedHat.com +49 2626 141200
> FAX 924446
>
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
>
>

