The URL you posted suggests this is a RHEL 6.1-specific problem.
Have you tried a more recent RHEL 6 version? We boot just fine off 10 GbE-capable Ethernet cards (X520s or X540s), and we image to RHEL 6.3 and 6.5.
The *only* quirkiness I've seen with this ixgbe / Intel 10 GbE NIC combo is when the switch port is configured for 1 GbE. In that case the card seems to take a while to auto-negotiate down to 1 GbE, so we have to put a link delay in the ifcfg-eth* file. After that, all is good. I believe the syntax is LINKDELAY=10 or some such.
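For reference, a minimal sketch of what that ifcfg file looks like with the link delay added (assuming eth0 and a static-free DHCP setup; your device name, HWADDR, and addressing will differ):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (sketch, adjust to your setup)
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# Wait up to 10 seconds for link before bringing the interface up,
# giving the NIC time to auto-negotiate down to 1 GbE.
LINKDELAY=10
```

LINKDELAY is read by the RHEL 6 network initscripts when the interface is brought up; 10 seconds has been enough in our experience, but the right value depends on how slowly your switch/NIC pair negotiates.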
We never experienced this problem when kickstarting, but even if we had, I believe there's equivalent syntax in the ks.cfg to do this.
If the switch port is set to 10 GbE, I've seen no quirkiness with the RHEL 6.3 or 6.5 ixgbe driver. And we've imaged hundreds of servers at this point to RHEL 6.3 or 6.5, all with Intel NICs.
Long ago I had a similar problem on SLES 10: I had to crack open the boot media and drop in a more recent RAID controller driver. That was back when vmlinuz was in the old format; I wouldn't know how to do it now, and even back then it was quite painful.
That's why bumping up your RHEL 6.x version seems like the easier fix.