> I was able to put a FireGL 2 (IBM RC1000 based card) in my linux box last week,
> download FireGL's binary driver (which came with an open source kernel module, but
> with the intellectual property in a single closed .a file). After a bit of
> frustration, we were able to get the driver installed and fired up our app
> (www.cgl.ucsf.edu/chimera). Wow.
> It blew away my GeForce2, it blew away anything I've ever seen on a PC (but not our
> 3-year-old, $100,000 Infinite Reality). This is a $1000 card. What really sold me
> on it is the support for 3D textures. Games right now don't
> support 3D textures, but we use them for scientific rendering, and it's a very
> useful feature. Even existing GeForces only support 3D textures in software,
> although newer models will support them in hardware.
> Software 3D textures are far too slow to be usable.
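To make that cost concrete, here's a sketch (not from the post; the function and texture layout are purely illustrative) of what a software 3D-texture lookup has to do for every single fragment: trilinear filtering means eight texel fetches and seven interpolations on the host CPU, which is exactly the work a hardware 3D-texture unit hides.

```c
#include <stddef.h>

/* Illustrative only: a software trilinear fetch from a w*h*d single-channel
 * 3D texture. Hardware does this per fragment in dedicated units; in
 * software the CPU pays 8 memory reads + 7 lerps per sample. */
static float lerp(float a, float b, float t) { return a + (b - a) * t; }

float sample3d(const float *tex, int w, int h, int d,
               float x, float y, float z)
{
    int x0 = (int)x, y0 = (int)y, z0 = (int)z;   /* integer texel coords */
    int x1 = x0 + 1 < w ? x0 + 1 : x0;           /* clamp at the edges   */
    int y1 = y0 + 1 < h ? y0 + 1 : y0;
    int z1 = z0 + 1 < d ? z0 + 1 : z0;
    float fx = x - x0, fy = y - y0, fz = z - z0; /* fractional weights   */

#define T(i, j, k) tex[(size_t)(k) * w * h + (size_t)(j) * w + (i)]
    /* 8 texel fetches, then lerp along x, then y, then z */
    float c00 = lerp(T(x0, y0, z0), T(x1, y0, z0), fx);
    float c10 = lerp(T(x0, y1, z0), T(x1, y1, z0), fx);
    float c01 = lerp(T(x0, y0, z1), T(x1, y0, z1), fx);
    float c11 = lerp(T(x0, y1, z1), T(x1, y1, z1), fx);
#undef T
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz);
}
```

Multiply that by every pixel a volume-rendered dataset covers, per frame, and it's clear why software 3D texturing is too slow for interactive use.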
Went and had a look at 3DLabs' site... wow, that Wildcat II 5110 probably
has more processing power than the PC you put it in. Looks like it'll be
expensive, and Win2K support only. Oh well.
Well, the Wildcat 5110 is a different card altogether from the FireGL 2. It is
OEM-only, and SGI will be including one in a PC machine. I believe they support both
Linux and Win2k; it will be interesting to see if they ship a Linux machine that supports
the Wildcat. SGI, of course, ships their machines with Red Hat, but "tuned" to take care of the
> Let's be perfectly frank here. Neither ATI nor Matrox has released their crown
> jewels. In the case of Matrox, we on the utah-glx mailing list really wanted
> Matrox to release the specifications for a very useful part of their card- the
> microcode for their setup engine. They refused. They gave us a binary microcode
> and enough information to load the ucode onto the card, but never the spec.
> There's no way we could ever come up with a driver that was as fast as the windows
> driver, because the windows driver developers had access to the microcode specs and
> could re-write the microcode to get a faster setup engine.
> So, I guess that eliminates Matrox.
Hmmm... but Mike said he got all the specs and documentation?
I doubt he got "all" the specs. I'm guessing he got enough specs to write a 2D driver and a
3D driver, but I'm sure he didn't get the actual microcode programming language specs. Last I
heard, the microcode specs were not released to the public. You have to recognize that there
are "levels" of specs, from the software layer down to the hardware layer. Typically a company
would never release *all* their specs, such as the low-level hardware specs, because that
would effectively be handing cloners the blueprints.

At the very least, 2D driver writers need enough information to use the card as a standard
VGA, or a dumb framebuffer. Better yet is 2D acceleration. Basic 3D on top of 2D acceleration
is what we've had for some time-- the PC CPU is still doing much of the 3D pipeline. The holy
grail is knowing enough to take OpenGL calls directly and push them into the hardware with a
minimum of PC CPU manipulation ("OpenGL on a chip"). There is still some setup to be done
between the OpenGL layer and the 3D chip hardware, but very little actual work has to be
done- mostly formatting command/data buffers for DMA and then returning as quickly as
possible. There is typically a one-to-one mapping between an OpenGL function and a hardware
command.
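A sketch of what that "very little actual work" looks like - note that everything here (the opcode values, the buffer layout, the function names) is invented for illustration, not taken from any real Matrox or 3DLabs programming spec. The driver's whole job reduces to packing a GL call's arguments into a command word plus payload and bumping a write pointer; the card's DMA engine consumes the buffer on its own.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical command opcodes - real hardware defines its own encoding. */
enum { CMD_VERTEX3F = 0x01, CMD_COLOR3F = 0x02 };

typedef struct {
    uint32_t buf[1024];   /* DMA-able command/data buffer        */
    size_t   wptr;        /* write pointer, in 32-bit words      */
} cmdbuf_t;

/* One-to-one mapping: a glVertex3f() call becomes one command word
 * followed by three raw float payload words. */
static void emit_vertex3f(cmdbuf_t *cb, float x, float y, float z)
{
    float v[3] = { x, y, z };
    cb->buf[cb->wptr++] = CMD_VERTEX3F;
    memcpy(&cb->buf[cb->wptr], v, sizeof v);  /* floats as raw words */
    cb->wptr += 3;
    /* When the buffer fills (or at flush time), the driver hands the
     * buffer's physical address to the card's DMA engine and returns
     * immediately - the CPU never touches the 3D pipeline itself. */
}
```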
In the case of the Matrox board, there is an "engine" on board that you can download
microcode to. This lets you change the format of the command/data buffers, so that the setup
load on the PC CPU is lessened depending on the capabilities of the PC. How to write your own
microcode, if I am completely up to date, is still under wraps at Matrox. They may have
changed this recently, but I don't believe so.
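To illustrate the distinction: with only the binary release, a driver can stream the vendor's ucode blob into the engine and start it, but it can't author new programs. The register model below is entirely hypothetical (names, layout, and sizes are invented) - it just shows how narrow the "load the binary" interface is compared to having the instruction-set spec.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical register model of a setup engine's microcode store.
 * Real cards expose something similar through memory-mapped registers;
 * the names and layout here are invented for illustration. */
typedef struct {
    uint32_t ucode_addr;      /* autoincrementing load address */
    uint32_t ucode_ram[256];  /* engine instruction RAM        */
    uint32_t running;         /* 1 = engine executing          */
} setup_engine_t;

/* This is all a binary-only release lets you do: copy the vendor's
 * blob into the engine and kick it. Writing *new* microcode to get a
 * faster setup path would require the instruction-set spec, which
 * (as far as I know) was never published. */
static void load_ucode(setup_engine_t *hw, const uint32_t *blob, size_t n)
{
    hw->running = 0;          /* halt the engine before reloading */
    hw->ucode_addr = 0;
    for (size_t i = 0; i < n; i++)
        hw->ucode_ram[hw->ucode_addr++] = blob[i];
    hw->running = 1;          /* start executing the new program */
}
```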