rpms/kernel/F-12 linux-2.6-intel-iommu-updates.patch, 1.1, 1.2 kernel.spec, 1.1842, 1.1843

David Woodhouse dwmw2 at fedoraproject.org
Wed Sep 30 19:12:05 UTC 2009


Author: dwmw2

Update of /cvs/pkgs/rpms/kernel/F-12
In directory cvs1.fedora.phx.redhat.com:/tmp/cvs-serv14162

Modified Files:
	linux-2.6-intel-iommu-updates.patch kernel.spec 
Log Message:
Work around more BIOS braindamage killing iommu

linux-2.6-intel-iommu-updates.patch:
 Documentation/Intel-IOMMU.txt  |    6 
 arch/ia64/kernel/pci-swiotlb.c |    2 
 arch/x86/Kconfig               |    2 
 arch/x86/kernel/pci-swiotlb.c  |    5 
 drivers/pci/dmar.c             |   50 ++++-
 drivers/pci/intel-iommu.c      |  399 ++++++++++++++++++++++++-----------------
 drivers/pci/intr_remapping.c   |    8 
 drivers/pci/iova.c             |   16 -
 include/linux/intel-iommu.h    |    2 
 include/linux/iova.h           |    1 
 10 files changed, 295 insertions(+), 196 deletions(-)

Index: linux-2.6-intel-iommu-updates.patch
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/F-12/linux-2.6-intel-iommu-updates.patch,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -p -r1.1 -r1.2
--- linux-2.6-intel-iommu-updates.patch	10 Aug 2009 14:21:09 -0000	1.1
+++ linux-2.6-intel-iommu-updates.patch	30 Sep 2009 19:12:03 -0000	1.2
@@ -1,3 +1,310 @@
+commit e0fc7e0b4b5e69616f10a894ab9afff3c64be74e
+Author: David Woodhouse <David.Woodhouse at intel.com>
+Date:   Wed Sep 30 09:12:17 2009 -0700
+
+    intel-iommu: Yet another BIOS workaround: Isoch DMAR unit with no TLB space
+    
+    Asus decided to ship a BIOS which configures sound DMA to go via the
+    dedicated IOMMU unit, but assigns precisely zero TLB entries to that
+    unit. Which causes the whole thing to deadlock, including the DMA
+    traffic on the _other_ IOMMU units. Nice one.
+    
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 17b6097753e926ca546189463070a7e94e7ea9fa
+Author: Roland Dreier <rdreier at cisco.com>
+Date:   Thu Sep 24 12:14:00 2009 -0700
+
+    intel-iommu: Decode (and ignore) RHSA entries
+    
+    I recently got a system where the DMAR table included a couple of RHSA
+    (remapping hardware static affinity) entries.  Rather than printing a
+    message about an "Unknown DMAR structure," it would probably be more
+    useful to dump the RHSA structure (as other DMAR structures are dumped).
+    
+    Signed-off-by: Roland Dreier <rolandd at cisco.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 4de75cf9391b538bbfe7dc0a9782f1ebe8e242ad
+Author: Roland Dreier <rdreier at cisco.com>
+Date:   Thu Sep 24 01:01:29 2009 +0100
+
+    intel-iommu: Make "Unknown DMAR structure" message more informative
+    
+    We might as well print the type of the DMAR structure we don't know how
+    to handle when skipping it.  Then someone getting this message has a
+    chance of telling whether the structure is just bogus, or if there
+    really is something valid that the kernel doesn't know how to handle.
+    
+    Signed-off-by: Roland Dreier <rolandd at cisco.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit b09a75fc5e77b7c58d097236f89b1ff72dcdb562
+Merge: cf63ff5 b94996c
+Author: Linus Torvalds <torvalds at linux-foundation.org>
+Date:   Wed Sep 23 10:06:10 2009 -0700
+
+    Merge git://git.infradead.org/iommu-2.6
+    
+    * git://git.infradead.org/iommu-2.6: (23 commits)
+      intel-iommu: Disable PMRs after we enable translation, not before
+      intel-iommu: Kill DMAR_BROKEN_GFX_WA option.
+      intel-iommu: Fix integer wrap on 32 bit kernels
+      intel-iommu: Fix integer overflow in dma_pte_{clear_range,free_pagetable}()
+      intel-iommu: Limit DOMAIN_MAX_PFN to fit in an 'unsigned long'
+      intel-iommu: Fix kernel hang if interrupt remapping disabled in BIOS
+      intel-iommu: Disallow interrupt remapping if not all ioapics covered
+      intel-iommu: include linux/dmi.h to use dmi_ routines
+      pci/dmar: correct off-by-one error in dmar_fault()
+      intel-iommu: Cope with yet another BIOS screwup causing crashes
+      intel-iommu: iommu init error path bug fixes
+      intel-iommu: Mark functions with __init
+      USB: Work around BIOS bugs by quiescing USB controllers earlier
+      ia64: IOMMU passthrough mode shouldn't trigger swiotlb init
+      intel-iommu: make domain_add_dev_info() call domain_context_mapping()
+      intel-iommu: Unify hardware and software passthrough support
+      intel-iommu: Cope with broken HP DC7900 BIOS
+      iommu=pt is a valid early param
+      intel-iommu: double kfree()
+      intel-iommu: Kill pointless intel_unmap_single() function
+      ...
+    
+    Fixed up trivial include lines conflict in drivers/pci/intel-iommu.c
+
+commit b94996c99c8befed9cbbb8804a4625e203913318
+Author: David Woodhouse <David.Woodhouse at intel.com>
+Date:   Sat Sep 19 15:28:12 2009 -0700
+
+    intel-iommu: Disable PMRs after we enable translation, not before
+    
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 0c02a20ff7695f9c54cc7c013dda326270ccdac8
+Author: David Woodhouse <David.Woodhouse at intel.com>
+Date:   Sat Sep 19 09:37:23 2009 -0700
+
+    intel-iommu: Kill DMAR_BROKEN_GFX_WA option.
+    
+    Just make it depend on BROKEN for now, in case people scream really loud
+    about it (and because we might want to keep some of this logic for an
+    upcoming BIOS workaround, so I don't just want to rip it out entirely
+    just yet). But for graphics devices, it really ought to be unnecessary.
+    
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 64de5af000e99f32dd49ff5dd9a0fd7db1f60305
+Author: Benjamin LaHaise <ben.lahaise at neterion.com>
+Date:   Wed Sep 16 21:05:55 2009 -0400
+
+    intel-iommu: Fix integer wrap on 32 bit kernels
+    
+    The following 64 bit promotions are necessary to handle memory above the
+    4GiB boundary correctly.
+    
+    [dwmw2: Fix the second part not to need 64-bit arithmetic at all]
+    
+    Signed-off-by: Benjamin LaHaise <ben.lahaise at neterion.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 59c36286b74ae6a8adebf6e133a83d7f2e3e6704
+Author: David Woodhouse <David.Woodhouse at intel.com>
+Date:   Sat Sep 19 07:36:28 2009 -0700
+
+    intel-iommu: Fix integer overflow in dma_pte_{clear_range,free_pagetable}()
+    
+    If end_pfn is equal to (unsigned long)-1, then the loop will never end.
+    
+    Seen on 32-bit kernel, but could have happened on 64-bit too once we get
+    hardware that supports 64-bit guest addresses.
+    
+    Change both functions to a 'do {} while' loop with the test at the end,
+    and check for the PFN having wrapped round to zero.
+    
+    Reported-by: Benjamin LaHaise <ben.lahaise at neterion.com>
+    Tested-by: Benjamin LaHaise <ben.lahaise at neterion.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 2ebe31513fcbe7a781f27002f065b50ae195022f
+Author: David Woodhouse <David.Woodhouse at intel.com>
+Date:   Sat Sep 19 07:34:04 2009 -0700
+
+    intel-iommu: Limit DOMAIN_MAX_PFN to fit in an 'unsigned long'
+    
+    This means we're limited to 44-bit addresses on 32-bit kernels, and
+    makes it sane for us to use 'unsigned long' for PFNs throughout.
+    
+    Which is just as well, really, since we already do that.
+    
+    Reported-by: Benjamin LaHaise <ben.lahaise at neterion.com>
+    Tested-by: Benjamin LaHaise <ben.lahaise at neterion.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 074835f0143b83845af5044af2739c52c9f53808
+Author: Youquan Song <youquan.song at intel.com>
+Date:   Wed Sep 9 12:05:39 2009 -0400
+
+    intel-iommu: Fix kernel hang if interrupt remapping disabled in BIOS
+    
+    The BIOS clears the DMAR table's INTR_REMAP flag to disable interrupt
+    remapping. The current kernel checks only the interrupt remapping (IR)
+    flag in each DRHD's extended capability register to decide whether IR
+    is supported, but that flag does not change when the BIOS toggles it.
+    
+    The user may disable interrupt remapping in the BIOS, and immature
+    BIOSes often disable it by default. Even so, whenever a VT-d2 chipset
+    is present, the intr_remapping_supported() function still reports to
+    the OS that interrupt remapping is supported. The kernel then goes on
+    to enable interrupt remapping, which results in a kernel panic. This
+    bug exists on almost all platforms with interrupt remapping support.
+    
+    This patch adds a check of the DMAR table's INTR_REMAP flag before
+    enabling interrupt remapping.
+    
+    Signed-off-by: Youquan Song <youquan.song at intel.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit e936d0773df172ec8600777fdd72bbc1f75f22ad
+Author: Youquan Song <youquan.song at intel.com>
+Date:   Mon Sep 7 10:58:07 2009 -0400
+
+    intel-iommu: Disallow interrupt remapping if not all ioapics covered
+    
+    The current kernel enables interrupt remapping only when all the VT-d
+    units support it. It is therefore reasonable to also disallow enabling
+    interrupt remapping if any IO-APICs are not listed under the VT-d
+    units; otherwise we can run into issues.
+    
+    Acked-by: Suresh Siddha <suresh.b.siddha at intel.com>
+    Signed-off-by: Youquan Song <youquan.song at intel.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit adb2fe0277607d50f4e9ef06e1d180051a609c25
+Author: Stephen Rothwell <sfr at canb.auug.org.au>
+Date:   Mon Aug 31 15:24:23 2009 +1000
+
+    intel-iommu: include linux/dmi.h to use dmi_ routines
+    
+    This file needs to include linux/dmi.h directly rather than relying on
+    it being pulled in from elsewhere.
+    
+    Signed-off-by: Stephen Rothwell <sfr at canb.auug.org.au>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 8211a7b5857914058c52ae977c96463e419b37ab
+Author: Troy Heber <troy.heber at hp.com>
+Date:   Wed Aug 19 15:26:11 2009 -0600
+
+    pci/dmar: correct off-by-one error in dmar_fault()
+    
+    DMAR faults are recorded into a ring of "fault recording registers".
+    fault_index is a 0-based index into the ring. The code allows the
+    0-based fault_index to be equal to the total number of fault registers
+    available from the cap_num_fault_regs() macro, which causes access
+    beyond the last available register.
+    
+    Signed-off-by: Troy Heber <troy.heber at hp.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 2ff729f5445cc47d1910386c36e53fc6b1c5e47a
+Author: David Woodhouse <David.Woodhouse at intel.com>
+Date:   Wed Aug 26 14:25:41 2009 +0100
+
+    intel-iommu: Cope with yet another BIOS screwup causing crashes
+    
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 94a91b5051a77d8a71d4f11a3240f0d9c51b6cf2
+Author: Donald Dutile <ddutile at redhat.com>
+Date:   Thu Aug 20 16:51:34 2009 -0400
+
+    intel-iommu: iommu init error path bug fixes
+    
+    The kcalloc() failure path in iommu_init_domains() calls
+    free_dmar_iommu(), which assumes that ->domains, ->domain_ids,
+    and ->lock have been properly initialized.
+    
+    Add checks in free_[dmar]_iommu to not use ->domains,->domain_ids
+    if not alloced. Move the lock init to prior to the kcalloc()'s,
+    so it is valid in free_context_table() when free_dmar_iommu() invokes
+    it at the end.
+    
+    Patch based on iommu-2.6,
+    commit 132032274a594ee9ffb6b9c9e2e9698149a09ea9
+    
+    Signed-off-by: Donald Dutile <ddutile at redhat.com>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 071e13746f9ebb259987c71ea77f11e7656769a2
+Author: Matt Kraai <kraai at ftbfs.org>
+Date:   Sun Aug 23 22:30:22 2009 -0700
+
+    intel-iommu: Mark functions with __init
+    
+    Mark si_domain_init and iommu_prepare_static_identity_mapping with
+    __init, to eliminate the following warnings:
+    
+    WARNING: drivers/pci/built-in.o(.text+0xf1f4): Section mismatch in reference from the function si_domain_init() to the function .init.text:si_domain_work_fn()
+    The function si_domain_init() references
+    the function __init si_domain_work_fn().
+    This is often because si_domain_init lacks a __init
+    annotation or the annotation of si_domain_work_fn is wrong.
+    
+    WARNING: drivers/pci/built-in.o(.text+0xe340): Section mismatch in reference from the function iommu_prepare_static_identity_mapping() to the function .init.text:si_domain_init()
+    The function iommu_prepare_static_identity_mapping() references
+    the function __init si_domain_init().
+    This is often because iommu_prepare_static_identity_mapping lacks a __init
+    annotation or the annotation of si_domain_init is wrong.
+    
+    Signed-off-by: Matt Kraai <kraai at ftbfs.org>
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
+commit 132032274a594ee9ffb6b9c9e2e9698149a09ea9
+Author: David Woodhouse <dwmw2 at infradead.org>
+Date:   Mon Aug 3 12:40:27 2009 +0100
+
+    USB: Work around BIOS bugs by quiescing USB controllers earlier
+    
+    We are seeing a number of crashes in SMM, when VT-d is enabled while
+    'Legacy USB support' is enabled in various BIOSes.
+    
+    The BIOS is supposed to indicate which addresses it uses for DMA in a
+    special ACPI table ("RMRR"), so that we can punch a hole for it when we
+    set up the IOMMU.
+    
+    The problem is, as usual, that BIOS engineers are totally incompetent.
+    They write code which will crash if the DMA goes AWOL, and then they
+    either neglect to provide an RMRR table at all, or they put the wrong
+    addresses in it. And of course they don't do _any_ QA, since that would
+    take too much time away from their crack-smoking habit.
+    
+    The real fix, of course, is for consumers to refuse to buy motherboards
+    which only have closed-source firmware available. If we had _open_
+    firmware, bugs like this would be easy to fix.
+    
+    Since that's something I can only dream about, this patch implements an
+    alternative -- ensuring that the USB controllers are handed off from the
+    BIOS and quiesced _before_ the IOMMU is initialised. That would have
+    been a much better design than this RMRR nonsense in the first place, of
+    course. The bootloader has no business doing DMA after the OS has booted
+    anyway.
+    
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+    Signed-off-by: Greg Kroah-Hartman <gregkh at suse.de>
+
+commit ba6c548701ef7a93b9ea05d1506d2b62f1628333
+Author: David Woodhouse <David.Woodhouse at intel.com>
+Date:   Thu Aug 13 18:18:00 2009 +0100
+
+    ia64: IOMMU passthrough mode shouldn't trigger swiotlb init
+    
+    Since commit 19943b0e30b05d42e494ae6fef78156ebc8c637e ('intel-iommu:
+    Unify hardware and software passthrough support'), hardware passthrough
+    mode will do the same as software passthrough mode was doing -- it'll
+    still use the IOMMU normally for devices which can't address all of
+    memory. This means that we don't need to bother with swiotlb.
+    
+    Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
+
 commit 5fe60f4e5871b64e687229199fafd4ef13cd0886
 Author: David Woodhouse <David.Woodhouse at intel.com>
 Date:   Sun Aug 9 10:53:41 2009 +0100
@@ -124,22 +431,49 @@ Date:   Tue Jul 7 19:43:20 2009 +0100
     cycles to 4812 cycles on my Lenovo x200s test box -- a modest 20%.
     
     Signed-off-by: David Woodhouse <David.Woodhouse at intel.com>
-diff --git a/arch/x86/kernel/pci-dma.c b/arch/x86/kernel/pci-dma.c
-index 1a041bc..ae13e34 100644
---- a/arch/x86/kernel/pci-dma.c
-+++ b/arch/x86/kernel/pci-dma.c
-@@ -212,10 +212,8 @@ static __init int iommu_setup(char *p)
- 		if (!strncmp(p, "soft", 4))
- 			swiotlb = 1;
- #endif
--		if (!strncmp(p, "pt", 2)) {
-+		if (!strncmp(p, "pt", 2))
- 			iommu_pass_through = 1;
--			return 1;
--		}
- 
- 		gart_parse_options(p);
+diff --git a/Documentation/Intel-IOMMU.txt b/Documentation/Intel-IOMMU.txt
+index 21bc416..cf9431d 100644
+--- a/Documentation/Intel-IOMMU.txt
++++ b/Documentation/Intel-IOMMU.txt
+@@ -56,11 +56,7 @@ Graphics Problems?
+ ------------------
+ If you encounter issues with graphics devices, you can try adding
+ option intel_iommu=igfx_off to turn off the integrated graphics engine.
+-
+-If it happens to be a PCI device included in the INCLUDE_ALL Engine,
+-then try enabling CONFIG_DMAR_GFX_WA to setup a 1-1 map. We hear
+-graphics drivers may be in process of using DMA api's in the near
+-future and at that time this option can be yanked out.
++If this fixes anything, please ensure you file a bug reporting the problem.
+ 
+ Some exceptions to IOVA
+ -----------------------
+diff --git a/arch/ia64/kernel/pci-swiotlb.c b/arch/ia64/kernel/pci-swiotlb.c
+index 223abb1..285aae8 100644
+--- a/arch/ia64/kernel/pci-swiotlb.c
++++ b/arch/ia64/kernel/pci-swiotlb.c
+@@ -46,7 +46,7 @@ void __init swiotlb_dma_init(void)
  
+ void __init pci_swiotlb_init(void)
+ {
+-	if (!iommu_detected || iommu_pass_through) {
++	if (!iommu_detected) {
+ #ifdef CONFIG_IA64_GENERIC
+ 		swiotlb = 1;
+ 		printk(KERN_INFO "PCI-DMA: Re-initialize machine vector.\n");
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 13ffa5d..5499da1 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1916,7 +1916,7 @@ config DMAR_DEFAULT_ON
+ config DMAR_BROKEN_GFX_WA
+ 	def_bool n
+ 	prompt "Workaround broken graphics drivers (going away soon)"
+-	depends on DMAR
++	depends on DMAR && BROKEN
+ 	---help---
+ 	  Current Graphics drivers tend to use physical address
+ 	  for DMA and avoid using DMA APIs. Setting this config
 diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
 index 6af96ee..1e66b18 100644
 --- a/arch/x86/kernel/pci-swiotlb.c
@@ -157,10 +491,56 @@ index 6af96ee..1e66b18 100644
  	if (swiotlb_force)
  		swiotlb = 1;
 diff --git a/drivers/pci/dmar.c b/drivers/pci/dmar.c
-index 7b287cb..380b60e 100644
+index 7b287cb..708176d 100644
 --- a/drivers/pci/dmar.c
 +++ b/drivers/pci/dmar.c
-@@ -632,20 +632,31 @@ int alloc_iommu(struct dmar_drhd_unit *drhd)
+@@ -353,6 +353,7 @@ dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
+ 	struct acpi_dmar_hardware_unit *drhd;
+ 	struct acpi_dmar_reserved_memory *rmrr;
+ 	struct acpi_dmar_atsr *atsr;
++	struct acpi_dmar_rhsa *rhsa;
+ 
+ 	switch (header->type) {
+ 	case ACPI_DMAR_TYPE_HARDWARE_UNIT:
+@@ -374,6 +375,12 @@ dmar_table_print_dmar_entry(struct acpi_dmar_header *header)
+ 		atsr = container_of(header, struct acpi_dmar_atsr, header);
+ 		printk(KERN_INFO PREFIX "ATSR flags: %#x\n", atsr->flags);
+ 		break;
++	case ACPI_DMAR_HARDWARE_AFFINITY:
++		rhsa = container_of(header, struct acpi_dmar_rhsa, header);
++		printk(KERN_INFO PREFIX "RHSA base: %#016Lx proximity domain: %#x\n",
++		       (unsigned long long)rhsa->base_address,
++		       rhsa->proximity_domain);
++		break;
+ 	}
+ }
+ 
+@@ -452,9 +459,13 @@ parse_dmar_table(void)
+ 			ret = dmar_parse_one_atsr(entry_header);
+ #endif
+ 			break;
++		case ACPI_DMAR_HARDWARE_AFFINITY:
++			/* We don't do anything with RHSA (yet?) */
++			break;
+ 		default:
+ 			printk(KERN_WARNING PREFIX
+-				"Unknown DMAR structure type\n");
++				"Unknown DMAR structure type %d\n",
++				entry_header->type);
+ 			ret = 0; /* for forward compatibility */
+ 			break;
+ 		}
+@@ -570,9 +581,6 @@ int __init dmar_table_init(void)
+ 		printk(KERN_INFO PREFIX "No ATSR found\n");
+ #endif
+ 
+-#ifdef CONFIG_INTR_REMAP
+-	parse_ioapics_under_ir();
+-#endif
+ 	return 0;
+ }
+ 
+@@ -632,20 +640,31 @@ int alloc_iommu(struct dmar_drhd_unit *drhd)
  	iommu->cap = dmar_readq(iommu->reg + DMAR_CAP_REG);
  	iommu->ecap = dmar_readq(iommu->reg + DMAR_ECAP_REG);
  
@@ -194,7 +574,7 @@ index 7b287cb..380b60e 100644
  	}
  #endif
  	iommu->agaw = agaw;
-@@ -665,7 +676,7 @@ int alloc_iommu(struct dmar_drhd_unit *drhd)
+@@ -665,7 +684,7 @@ int alloc_iommu(struct dmar_drhd_unit *drhd)
  	}
  
  	ver = readl(iommu->reg + DMAR_VER_REG);
@@ -203,7 +583,7 @@ index 7b287cb..380b60e 100644
  		(unsigned long long)drhd->reg_base_addr,
  		DMAR_VER_MAJOR(ver), DMAR_VER_MINOR(ver),
  		(unsigned long long)iommu->cap,
-@@ -675,7 +686,10 @@ int alloc_iommu(struct dmar_drhd_unit *drhd)
+@@ -675,7 +694,10 @@ int alloc_iommu(struct dmar_drhd_unit *drhd)
  
  	drhd->iommu = iommu;
  	return 0;
@@ -215,11 +595,75 @@ index 7b287cb..380b60e 100644
  	kfree(iommu);
  	return -1;
  }
+@@ -1212,7 +1234,7 @@ irqreturn_t dmar_fault(int irq, void *dev_id)
+ 				source_id, guest_addr);
+ 
+ 		fault_index++;
+-		if (fault_index > cap_num_fault_regs(iommu->cap))
++		if (fault_index >= cap_num_fault_regs(iommu->cap))
+ 			fault_index = 0;
+ 		spin_lock_irqsave(&iommu->register_lock, flag);
+ 	}
+@@ -1305,3 +1327,13 @@ int dmar_reenable_qi(struct intel_iommu *iommu)
+ 
+ 	return 0;
+ }
++
++/*
++ * Check interrupt remapping support in DMAR table description.
++ */
++int dmar_ir_support(void)
++{
++	struct acpi_table_dmar *dmar;
++	dmar = (struct acpi_table_dmar *)dmar_tbl;
++	return dmar->flags & 0x1;
++}
 diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
-index 2314ad7..09606e9 100644
+index 2314ad7..f89ce3f 100644
 --- a/drivers/pci/intel-iommu.c
 +++ b/drivers/pci/intel-iommu.c
-@@ -251,7 +251,8 @@ static inline int first_pte_in_page(struct dma_pte *pte)
+@@ -37,6 +37,7 @@
+ #include <linux/iommu.h>
+ #include <linux/intel-iommu.h>
+ #include <linux/sysdev.h>
++#include <linux/dmi.h>
+ #include <asm/cacheflush.h>
+ #include <asm/iommu.h>
+ #include "pci.h"
+@@ -46,6 +47,7 @@
+ 
+ #define IS_GFX_DEVICE(pdev) ((pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY)
+ #define IS_ISA_DEVICE(pdev) ((pdev->class >> 8) == PCI_CLASS_BRIDGE_ISA)
++#define IS_AZALIA(pdev) ((pdev)->vendor == 0x8086 && (pdev)->device == 0x3a3e)
+ 
+ #define IOAPIC_RANGE_START	(0xfee00000)
+ #define IOAPIC_RANGE_END	(0xfeefffff)
+@@ -55,8 +57,14 @@
+ 
+ #define MAX_AGAW_WIDTH 64
+ 
+-#define DOMAIN_MAX_ADDR(gaw) ((((u64)1) << gaw) - 1)
+-#define DOMAIN_MAX_PFN(gaw)  ((((u64)1) << (gaw-VTD_PAGE_SHIFT)) - 1)
++#define __DOMAIN_MAX_PFN(gaw)  ((((uint64_t)1) << (gaw-VTD_PAGE_SHIFT)) - 1)
++#define __DOMAIN_MAX_ADDR(gaw) ((((uint64_t)1) << gaw) - 1)
++
++/* We limit DOMAIN_MAX_PFN to fit in an unsigned long, and DOMAIN_MAX_ADDR
++   to match. That way, we can use 'unsigned long' for PFNs with impunity. */
++#define DOMAIN_MAX_PFN(gaw)	((unsigned long) min_t(uint64_t, \
++				__DOMAIN_MAX_PFN(gaw), (unsigned long)-1))
++#define DOMAIN_MAX_ADDR(gaw)	(((uint64_t)__DOMAIN_MAX_PFN(gaw)) << VTD_PAGE_SHIFT)
+ 
+ #define IOVA_PFN(addr)		((addr) >> PAGE_SHIFT)
+ #define DMA_32BIT_PFN		IOVA_PFN(DMA_BIT_MASK(32))
+@@ -86,6 +94,7 @@ static inline unsigned long virt_to_dma_pfn(void *p)
+ /* global iommu list, set NULL for ignored DMAR units */
+ static struct intel_iommu **g_iommus;
+ 
++static void __init check_tylersburg_isoch(void);
+ static int rwbf_quirk;
+ 
+ /*
+@@ -251,7 +260,8 @@ static inline int first_pte_in_page(struct dma_pte *pte)
   * 	2. It maps to each iommu if successful.
   *	3. Each iommu mapps to this domain if successful.
   */
@@ -229,7 +673,125 @@ index 2314ad7..09606e9 100644
  
  /* devices under the same p2p bridge are owned in one domain */
  #define DOMAIN_FLAG_P2P_MULTIPLE_DEVICES (1 << 0)
-@@ -1309,7 +1310,6 @@ static void iommu_detach_domain(struct dmar_domain *domain,
+@@ -727,7 +737,7 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain *domain,
+ 				return NULL;
+ 
+ 			domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
+-			pteval = (virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
++			pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
+ 			if (cmpxchg64(&pte->val, 0ULL, pteval)) {
+ 				/* Someone else set it while we were thinking; use theirs. */
+ 				free_pgtable_page(tmp_page);
+@@ -777,9 +787,10 @@ static void dma_pte_clear_range(struct dmar_domain *domain,
+ 
+ 	BUG_ON(addr_width < BITS_PER_LONG && start_pfn >> addr_width);
+ 	BUG_ON(addr_width < BITS_PER_LONG && last_pfn >> addr_width);
++	BUG_ON(start_pfn > last_pfn);
+ 
+ 	/* we don't need lock here; nobody else touches the iova range */
+-	while (start_pfn <= last_pfn) {
++	do {
+ 		first_pte = pte = dma_pfn_level_pte(domain, start_pfn, 1);
+ 		if (!pte) {
+ 			start_pfn = align_to_level(start_pfn + 1, 2);
+@@ -793,7 +804,8 @@ static void dma_pte_clear_range(struct dmar_domain *domain,
+ 
+ 		domain_flush_cache(domain, first_pte,
+ 				   (void *)pte - (void *)first_pte);
+-	}
++
++	} while (start_pfn && start_pfn <= last_pfn);
+ }
+ 
+ /* free page table pages. last level pte should already be cleared */
+@@ -809,6 +821,7 @@ static void dma_pte_free_pagetable(struct dmar_domain *domain,
+ 
+ 	BUG_ON(addr_width < BITS_PER_LONG && start_pfn >> addr_width);
+ 	BUG_ON(addr_width < BITS_PER_LONG && last_pfn >> addr_width);
++	BUG_ON(start_pfn > last_pfn);
+ 
+ 	/* We don't need lock here; nobody else touches the iova range */
+ 	level = 2;
+@@ -819,7 +832,7 @@ static void dma_pte_free_pagetable(struct dmar_domain *domain,
+ 		if (tmp + level_size(level) - 1 > last_pfn)
+ 			return;
+ 
+-		while (tmp + level_size(level) - 1 <= last_pfn) {
++		do {
+ 			first_pte = pte = dma_pfn_level_pte(domain, tmp, level);
+ 			if (!pte) {
+ 				tmp = align_to_level(tmp + 1, level + 1);
+@@ -838,7 +851,7 @@ static void dma_pte_free_pagetable(struct dmar_domain *domain,
+ 			domain_flush_cache(domain, first_pte,
+ 					   (void *)pte - (void *)first_pte);
+ 			
+-		}
++		} while (tmp && tmp + level_size(level) - 1 <= last_pfn);
+ 		level++;
+ 	}
+ 	/* free pgd */
+@@ -1157,6 +1170,8 @@ static int iommu_init_domains(struct intel_iommu *iommu)
+ 	pr_debug("Number of Domains supportd <%ld>\n", ndomains);
+ 	nlongs = BITS_TO_LONGS(ndomains);
+ 
++	spin_lock_init(&iommu->lock);
++
+ 	/* TBD: there might be 64K domains,
+ 	 * consider other allocation for future chip
+ 	 */
+@@ -1169,12 +1184,9 @@ static int iommu_init_domains(struct intel_iommu *iommu)
+ 			GFP_KERNEL);
+ 	if (!iommu->domains) {
+ 		printk(KERN_ERR "Allocating domain array failed\n");
+-		kfree(iommu->domain_ids);
+ 		return -ENOMEM;
+ 	}
+ 
+-	spin_lock_init(&iommu->lock);
+-
+ 	/*
+ 	 * if Caching mode is set, then invalid translations are tagged
+ 	 * with domainid 0. Hence we need to pre-allocate it.
+@@ -1194,22 +1206,24 @@ void free_dmar_iommu(struct intel_iommu *iommu)
+ 	int i;
+ 	unsigned long flags;
+ 
+-	i = find_first_bit(iommu->domain_ids, cap_ndoms(iommu->cap));
+-	for (; i < cap_ndoms(iommu->cap); ) {
+-		domain = iommu->domains[i];
+-		clear_bit(i, iommu->domain_ids);
++	if ((iommu->domains) && (iommu->domain_ids)) {
++		i = find_first_bit(iommu->domain_ids, cap_ndoms(iommu->cap));
++		for (; i < cap_ndoms(iommu->cap); ) {
++			domain = iommu->domains[i];
++			clear_bit(i, iommu->domain_ids);
++
++			spin_lock_irqsave(&domain->iommu_lock, flags);
++			if (--domain->iommu_count == 0) {
++				if (domain->flags & DOMAIN_FLAG_VIRTUAL_MACHINE)
++					vm_domain_exit(domain);
++				else
++					domain_exit(domain);
++			}
++			spin_unlock_irqrestore(&domain->iommu_lock, flags);
+ 
+-		spin_lock_irqsave(&domain->iommu_lock, flags);
+-		if (--domain->iommu_count == 0) {
+-			if (domain->flags & DOMAIN_FLAG_VIRTUAL_MACHINE)
+-				vm_domain_exit(domain);
+-			else
+-				domain_exit(domain);
++			i = find_next_bit(iommu->domain_ids,
++				cap_ndoms(iommu->cap), i+1);
+ 		}
+-		spin_unlock_irqrestore(&domain->iommu_lock, flags);
+-
+-		i = find_next_bit(iommu->domain_ids,
+-			cap_ndoms(iommu->cap), i+1);
+ 	}
+ 
+ 	if (iommu->gcmd & DMA_GCMD_TE)
+@@ -1309,7 +1323,6 @@ static void iommu_detach_domain(struct dmar_domain *domain,
  }
  
  static struct iova_domain reserved_iova_list;
@@ -237,7 +799,7 @@ index 2314ad7..09606e9 100644
  static struct lock_class_key reserved_rbtree_key;
  
  static void dmar_init_reserved_ranges(void)
-@@ -1320,8 +1320,6 @@ static void dmar_init_reserved_ranges(void)
+@@ -1320,8 +1333,6 @@ static void dmar_init_reserved_ranges(void)
  
  	init_iova_domain(&reserved_iova_list, DMA_32BIT_PFN);
  
@@ -246,7 +808,17 @@ index 2314ad7..09606e9 100644
  	lockdep_set_class(&reserved_iova_list.iova_rbtree_lock,
  		&reserved_rbtree_key);
  
-@@ -1958,14 +1956,24 @@ static int iommu_prepare_identity_map(struct pci_dev *pdev,
+@@ -1924,6 +1935,9 @@ error:
+ }
+ 
+ static int iommu_identity_mapping;
++#define IDENTMAP_ALL		1
++#define IDENTMAP_GFX		2
++#define IDENTMAP_AZALIA		4
+ 
+ static int iommu_domain_identity_map(struct dmar_domain *domain,
+ 				     unsigned long long start,
+@@ -1958,14 +1972,35 @@ static int iommu_prepare_identity_map(struct pci_dev *pdev,
  	struct dmar_domain *domain;
  	int ret;
  
@@ -271,11 +843,22 @@ index 2314ad7..09606e9 100644
 +	printk(KERN_INFO
 +	       "IOMMU: Setting identity map for device %s [0x%Lx - 0x%Lx]\n",
 +	       pci_name(pdev), start, end);
++	
++	if (end >> agaw_to_width(domain->agaw)) {
++		WARN(1, "Your BIOS is broken; RMRR exceeds permitted address width (%d bits)\n"
++		     "BIOS vendor: %s; Ver: %s; Product Version: %s\n",
++		     agaw_to_width(domain->agaw),
++		     dmi_get_system_info(DMI_BIOS_VENDOR),
++		     dmi_get_system_info(DMI_BIOS_VERSION),
++		     dmi_get_system_info(DMI_PRODUCT_VERSION));
++		ret = -EIO;
++		goto error;
++	}
 +
  	ret = iommu_domain_identity_map(domain, start, end);
  	if (ret)
  		goto error;
-@@ -2016,23 +2024,6 @@ static inline void iommu_prepare_isa(void)
+@@ -2016,23 +2051,6 @@ static inline void iommu_prepare_isa(void)
  }
  #endif /* !CONFIG_DMAR_FLPY_WA */
  
@@ -299,16 +882,16 @@ index 2314ad7..09606e9 100644
  static int md_domain_init(struct dmar_domain *domain, int guest_width);
  
  static int __init si_domain_work_fn(unsigned long start_pfn,
-@@ -2047,7 +2038,7 @@ static int __init si_domain_work_fn(unsigned long start_pfn,
+@@ -2047,7 +2065,7 @@ static int __init si_domain_work_fn(unsigned long start_pfn,
  
  }
  
 -static int si_domain_init(void)
-+static int si_domain_init(int hw)
++static int __init si_domain_init(int hw)
  {
  	struct dmar_drhd_unit *drhd;
  	struct intel_iommu *iommu;
-@@ -2074,6 +2065,9 @@ static int si_domain_init(void)
+@@ -2074,6 +2092,9 @@ static int si_domain_init(void)
  
  	si_domain->flags = DOMAIN_FLAG_STATIC_IDENTITY;
  
@@ -318,7 +901,7 @@ index 2314ad7..09606e9 100644
  	for_each_online_node(nid) {
  		work_with_active_regions(nid, si_domain_work_fn, &ret);
  		if (ret)
-@@ -2100,15 +2094,23 @@ static int identity_mapping(struct pci_dev *pdev)
+@@ -2100,15 +2121,23 @@ static int identity_mapping(struct pci_dev *pdev)
  }
  
  static int domain_add_dev_info(struct dmar_domain *domain,
@@ -343,12 +926,29 @@ index 2314ad7..09606e9 100644
  	info->segment = pci_domain_nr(pdev->bus);
  	info->bus = pdev->bus->number;
  	info->devfn = pdev->devfn;
-@@ -2165,27 +2167,25 @@ static int iommu_should_identity_map(struct pci_dev *pdev, int startup)
+@@ -2126,8 +2155,14 @@ static int domain_add_dev_info(struct dmar_domain *domain,
+ 
+ static int iommu_should_identity_map(struct pci_dev *pdev, int startup)
+ {
+-	if (iommu_identity_mapping == 2)
+-		return IS_GFX_DEVICE(pdev);
++	if ((iommu_identity_mapping & IDENTMAP_AZALIA) && IS_AZALIA(pdev))
++		return 1;
++
++	if ((iommu_identity_mapping & IDENTMAP_GFX) && IS_GFX_DEVICE(pdev))
++		return 1;
++
++	if (!(iommu_identity_mapping & IDENTMAP_ALL))
++		return 0;
+ 
+ 	/*
+ 	 * We want to start off with all devices in the 1:1 domain, and
+@@ -2165,27 +2200,25 @@ static int iommu_should_identity_map(struct pci_dev *pdev, int startup)
  	return 1;
  }
  
 -static int iommu_prepare_static_identity_mapping(void)
-+static int iommu_prepare_static_identity_mapping(int hw)
++static int __init iommu_prepare_static_identity_mapping(int hw)
  {
  	struct pci_dev *pdev = NULL;
  	int ret;
@@ -377,7 +977,7 @@ index 2314ad7..09606e9 100644
  		}
  	}
  
-@@ -2199,14 +2199,6 @@ int __init init_dmars(void)
+@@ -2199,14 +2232,6 @@ int __init init_dmars(void)
  	struct pci_dev *pdev;
  	struct intel_iommu *iommu;
  	int i, ret;
@@ -392,7 +992,7 @@ index 2314ad7..09606e9 100644
  
  	/*
  	 * for each drhd
-@@ -2234,7 +2226,6 @@ int __init init_dmars(void)
+@@ -2234,7 +2259,6 @@ int __init init_dmars(void)
  	deferred_flush = kzalloc(g_num_of_iommus *
  		sizeof(struct deferred_flush_tables), GFP_KERNEL);
  	if (!deferred_flush) {
@@ -400,7 +1000,7 @@ index 2314ad7..09606e9 100644
  		ret = -ENOMEM;
  		goto error;
  	}
-@@ -2261,14 +2252,8 @@ int __init init_dmars(void)
+@@ -2261,14 +2285,8 @@ int __init init_dmars(void)
  			goto error;
  		}
  		if (!ecap_pass_through(iommu->ecap))
@@ -416,16 +1016,19 @@ index 2314ad7..09606e9 100644
  
  	/*
  	 * Start from the sane iommu hardware state.
-@@ -2323,64 +2308,57 @@ int __init init_dmars(void)
+@@ -2323,64 +2341,60 @@ int __init init_dmars(void)
  		}
  	}
  
 +	if (iommu_pass_through)
-+		iommu_identity_mapping = 1;
++		iommu_identity_mapping |= IDENTMAP_ALL;
++
 +#ifdef CONFIG_DMAR_BROKEN_GFX_WA
-+	else
-+		iommu_identity_mapping = 2;
++	iommu_identity_mapping |= IDENTMAP_GFX;
 +#endif
++
++	check_tylersburg_isoch();
++
  	/*
 -	 * If pass through is set and enabled, context entries of all pci
 -	 * devices are intialized by pass through translation type.
@@ -522,7 +1125,21 @@ index 2314ad7..09606e9 100644
  	/*
  	 * for each drhd
  	 *   enable fault log
-@@ -2454,8 +2432,7 @@ static struct iova *intel_alloc_iova(struct device *dev,
+@@ -2403,11 +2417,12 @@ int __init init_dmars(void)
+ 
+ 		iommu->flush.flush_context(iommu, 0, 0, 0, DMA_CCMD_GLOBAL_INVL);
+ 		iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
+-		iommu_disable_protect_mem_regions(iommu);
+ 
+ 		ret = iommu_enable_translation(iommu);
+ 		if (ret)
+ 			goto error;
++
++		iommu_disable_protect_mem_regions(iommu);
+ 	}
+ 
+ 	return 0;
+@@ -2454,8 +2469,7 @@ static struct iova *intel_alloc_iova(struct device *dev,
  	return iova;
  }
  
@@ -532,7 +1149,7 @@ index 2314ad7..09606e9 100644
  {
  	struct dmar_domain *domain;
  	int ret;
-@@ -2483,6 +2460,18 @@ get_valid_domain_for_dev(struct pci_dev *pdev)
+@@ -2483,6 +2497,18 @@ get_valid_domain_for_dev(struct pci_dev *pdev)
  	return domain;
  }
  
@@ -551,7 +1168,7 @@ index 2314ad7..09606e9 100644
  static int iommu_dummy(struct pci_dev *pdev)
  {
  	return pdev->dev.archdata.iommu == DUMMY_DEVICE_DOMAIN_INFO;
-@@ -2525,10 +2514,10 @@ static int iommu_no_mapping(struct device *dev)
+@@ -2525,10 +2551,10 @@ static int iommu_no_mapping(struct device *dev)
  		 */
  		if (iommu_should_identity_map(pdev, 0)) {
  			int ret;
@@ -566,7 +1183,20 @@ index 2314ad7..09606e9 100644
  			if (!ret) {
  				printk(KERN_INFO "64bit %s uses identity mapping\n",
  				       pci_name(pdev));
-@@ -2733,12 +2722,6 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
+@@ -2637,10 +2663,9 @@ static void flush_unmaps(void)
+ 			unsigned long mask;
+ 			struct iova *iova = deferred_flush[i].iova[j];
+ 
+-			mask = (iova->pfn_hi - iova->pfn_lo + 1) << PAGE_SHIFT;
+-			mask = ilog2(mask >> VTD_PAGE_SHIFT);
++			mask = ilog2(mm_to_dma_pfn(iova->pfn_hi - iova->pfn_lo + 1));
+ 			iommu_flush_dev_iotlb(deferred_flush[i].domain[j],
+-					iova->pfn_lo << PAGE_SHIFT, mask);
++					(uint64_t)iova->pfn_lo << PAGE_SHIFT, mask);
+ 			__free_iova(&deferred_flush[i].domain[j]->iovad, iova);
+ 		}
+ 		deferred_flush[i].next = 0;
+@@ -2733,12 +2758,6 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
  	}
  }
  
@@ -579,7 +1209,7 @@ index 2314ad7..09606e9 100644
  static void *intel_alloc_coherent(struct device *hwdev, size_t size,
  				  dma_addr_t *dma_handle, gfp_t flags)
  {
-@@ -2771,7 +2754,7 @@ static void intel_free_coherent(struct device *hwdev, size_t size, void *vaddr,
+@@ -2771,7 +2790,7 @@ static void intel_free_coherent(struct device *hwdev, size_t size, void *vaddr,
  	size = PAGE_ALIGN(size);
  	order = get_order(size);
  
@@ -588,7 +1218,7 @@ index 2314ad7..09606e9 100644
  	free_pages((unsigned long)vaddr, order);
  }
  
-@@ -2807,11 +2790,18 @@ static void intel_unmap_sg(struct device *hwdev, struct scatterlist *sglist,
+@@ -2807,11 +2826,18 @@ static void intel_unmap_sg(struct device *hwdev, struct scatterlist *sglist,
  	/* free page tables */
  	dma_pte_free_pagetable(domain, start_pfn, last_pfn);
  
@@ -612,7 +1242,17 @@ index 2314ad7..09606e9 100644
  }
  
  static int intel_nontranslate_map_sg(struct device *hddev,
-@@ -3194,7 +3184,7 @@ int __init intel_iommu_init(void)
+@@ -3055,8 +3081,8 @@ static int init_iommu_hw(void)
+ 					   DMA_CCMD_GLOBAL_INVL);
+ 		iommu->flush.flush_iotlb(iommu, 0, 0, 0,
+ 					 DMA_TLB_GLOBAL_FLUSH);
+-		iommu_disable_protect_mem_regions(iommu);
+ 		iommu_enable_translation(iommu);
++		iommu_disable_protect_mem_regions(iommu);
+ 	}
+ 
+ 	return 0;
+@@ -3194,7 +3220,7 @@ int __init intel_iommu_init(void)
  	 * Check the need for DMA-remapping initialization now.
  	 * Above initialization will also be used by Interrupt-remapping.
  	 */
@@ -621,7 +1261,7 @@ index 2314ad7..09606e9 100644
  		return -ENODEV;
  
  	iommu_init_mempool();
-@@ -3214,14 +3204,7 @@ int __init intel_iommu_init(void)
+@@ -3214,14 +3240,7 @@ int __init intel_iommu_init(void)
  
  	init_timer(&unmap_timer);
  	force_iommu = 1;
@@ -637,7 +1277,7 @@ index 2314ad7..09606e9 100644
  
  	init_iommu_sysfs();
  
-@@ -3504,7 +3487,6 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
+@@ -3504,7 +3523,6 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
  	struct intel_iommu *iommu;
  	int addr_width;
  	u64 end;
@@ -645,7 +1285,7 @@ index 2314ad7..09606e9 100644
  
  	/* normally pdev is not mapped */
  	if (unlikely(domain_context_mapped(pdev))) {
-@@ -3536,12 +3518,7 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
+@@ -3536,12 +3554,7 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
  		return -EFAULT;
  	}
  
@@ -659,6 +1299,94 @@ index 2314ad7..09606e9 100644
  }
  
  static void intel_iommu_detach_device(struct iommu_domain *domain,
+@@ -3658,3 +3671,61 @@ static void __devinit quirk_iommu_rwbf(struct pci_dev *dev)
+ }
+ 
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
++
++/* On Tylersburg chipsets, some BIOSes have been known to enable the
++   ISOCH DMAR unit for the Azalia sound device, but not give it any
++   TLB entries, which causes it to deadlock. Check for that.  We do
++   this in a function called from init_dmars(), instead of in a PCI
++   quirk, because we don't want to print the obnoxious "BIOS broken"
++   message if VT-d is actually disabled.
++*/
++static void __init check_tylersburg_isoch(void)
++{
++	struct pci_dev *pdev;
++	uint32_t vtisochctrl;
++
++	/* If there's no Azalia in the system anyway, forget it. */
++	pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x3a3e, NULL);
++	if (!pdev)
++		return;
++	pci_dev_put(pdev);
++
++	/* System Management Registers. Might be hidden, in which case
++	   we can't do the sanity check. But that's OK, because the
++	   known-broken BIOSes _don't_ actually hide it, so far. */
++	pdev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x342e, NULL);
++	if (!pdev)
++		return;
++
++	if (pci_read_config_dword(pdev, 0x188, &vtisochctrl)) {
++		pci_dev_put(pdev);
++		return;
++	}
++
++	pci_dev_put(pdev);
++
++	/* If Azalia DMA is routed to the non-isoch DMAR unit, fine. */
++	if (vtisochctrl & 1)
++		return;
++
++	/* Drop all bits other than the number of TLB entries */
++	vtisochctrl &= 0x1c;
++
++	/* If we have the recommended number of TLB entries (16), fine. */
++	if (vtisochctrl == 0x10)
++		return;
++
++	/* Zero TLB entries? You get to ride the short bus to school. */
++	if (!vtisochctrl) {
++		WARN(1, "Your BIOS is broken; DMA routed to ISOCH DMAR unit but no TLB space.\n"
++		     "BIOS vendor: %s; Ver: %s; Product Version: %s\n",
++		     dmi_get_system_info(DMI_BIOS_VENDOR),
++		     dmi_get_system_info(DMI_BIOS_VERSION),
++		     dmi_get_system_info(DMI_PRODUCT_VERSION));
++		iommu_identity_mapping |= IDENTMAP_AZALIA;
++		return;
++	}
++	
++	printk(KERN_WARNING "DMAR: Recommended TLB entries for ISOCH unit is 16; your BIOS set %d\n",
++	       vtisochctrl);
++}
+diff --git a/drivers/pci/intr_remapping.c b/drivers/pci/intr_remapping.c
+index 4f5b871..ac06514 100644
+--- a/drivers/pci/intr_remapping.c
++++ b/drivers/pci/intr_remapping.c
+@@ -611,6 +611,9 @@ int __init intr_remapping_supported(void)
+ 	if (disable_intremap)
+ 		return 0;
+ 
++	if (!dmar_ir_support())
++		return 0;
++
+ 	for_each_drhd_unit(drhd) {
+ 		struct intel_iommu *iommu = drhd->iommu;
+ 
+@@ -626,6 +629,11 @@ int __init enable_intr_remapping(int eim)
+ 	struct dmar_drhd_unit *drhd;
+ 	int setup = 0;
+ 
++	if (parse_ioapics_under_ir() != 1) {
++		printk(KERN_INFO "Not enable interrupt remapping\n");
++		return -1;
++	}
++
+ 	for_each_drhd_unit(drhd) {
+ 		struct intel_iommu *iommu = drhd->iommu;
+ 
 diff --git a/drivers/pci/iova.c b/drivers/pci/iova.c
 index 46dd440..7914951 100644
 --- a/drivers/pci/iova.c
@@ -729,6 +1457,17 @@ index 46dd440..7914951 100644
 -	spin_unlock_irqrestore(&from->iova_alloc_lock, flags);
 +	spin_unlock_irqrestore(&from->iova_rbtree_lock, flags);
  }
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index 482dc91..4f0a72a 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -360,4 +360,6 @@ extern void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 qdep,
+ 
+ extern int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu);
+ 
++extern int dmar_ir_support(void);
++
+ #endif
 diff --git a/include/linux/iova.h b/include/linux/iova.h
 index 228f6c9..76a0759 100644
 --- a/include/linux/iova.h


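The `check_tylersburg_isoch()` function added by the patch above decodes the chipset's VTISOCHCTRL register: bit 0 set means Azalia DMA is routed to the non-isoch DMAR unit (fine), and bits 4:2 carry the TLB entry count for the isoch unit, where 0x10 is the recommended 16 entries and zero deadlocks the unit. As a sketch, that decoding can be pulled out as a pure function (the function name and the numeric return encoding here are illustrative, not from the patch; the bit layout is taken directly from the patch hunk):

```c
#include <stdint.h>

/* Decode a Tylersburg VTISOCHCTRL value the way check_tylersburg_isoch()
 * in the patch does. Return values (illustrative encoding):
 *   0 - no action needed (non-isoch routing, or the recommended 16 TLB entries)
 *   1 - BIOS broken: isoch routing with zero TLB entries; the patch
 *       responds by setting IDENTMAP_AZALIA
 *   2 - suboptimal: isoch routing with a nonzero TLB count other than 16;
 *       the patch only warns in this case */
static int tylersburg_isoch_status(uint32_t vtisochctrl)
{
    /* Bit 0 set: Azalia DMA is routed to the non-isoch DMAR unit - fine. */
    if (vtisochctrl & 1)
        return 0;

    /* Drop all bits other than the TLB-entry-count field (bits 4:2). */
    vtisochctrl &= 0x1c;

    /* Recommended number of TLB entries (16) - fine. */
    if (vtisochctrl == 0x10)
        return 0;

    /* Zero TLB entries: the unit deadlocks; force identity mapping. */
    if (!vtisochctrl)
        return 1;

    /* Nonzero but not 16: works, but warn. */
    return 2;
}
```

In the kernel the register is read with `pci_read_config_dword()` at offset 0x188 of the System Management Registers device (0x342e), and the check is only reached if an Azalia device (0x3a3e) is actually present.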
Index: kernel.spec
===================================================================
RCS file: /cvs/pkgs/rpms/kernel/F-12/kernel.spec,v
retrieving revision 1.1842
retrieving revision 1.1843
diff -u -p -r1.1842 -r1.1843
--- kernel.spec	29 Sep 2009 19:58:01 -0000	1.1842
+++ kernel.spec	30 Sep 2009 19:12:04 -0000	1.1843
@@ -2075,6 +2075,9 @@ fi
 # and build.
 
 %changelog
+* Wed Sep 30 2009 David Woodhouse <David.Woodhouse at intel.com>
+- Update IOMMU code; mostly a bunch more workarounds for broken BIOSes.
+
 * Wed Sep 30 2009 Dave Airlie <airlied at redhat.com> 2.6.31.1-56
 - revert all the arjan patches until someone tests them.
 



