author    Balbir Singh <bsingharora@gmail.com>  2016-09-06 16:27:31 +1000
committer Michael Ellerman <mpe@ellerman.id.au>  2016-09-29 15:14:44 +1000
commit    2e5bbb5461f138cac631fe21b4ad956feabfba22 (patch)
tree      eb89de095b80a8f419022bb05ec40cf16a6cf3a7 /arch/powerpc/include/asm/mmu_context.h
parent    360aebd85a4c946764f6301d68de2a817fad5159 (diff)
KVM: PPC: Book3S HV: Migrate pinned pages out of CMA
When PCI device pass-through is enabled via VFIO, KVM-PPC pins pages using get_user_pages_fast(). One downside of this pinning is that the page may sit in the CMA region, which is also used for other allocations such as the hash page table. Ideally, pinned pages should come from outside the CMA region.

This patch (currently only for KVM PPC with VFIO) forcefully migrates such pages out (huge pages are omitted for the moment). There are more efficient ways of doing this, but they would be more elaborate and could affect a larger audience beyond just the KVM PPC implementation.

The magic is in new_iommu_non_cma_page(), which allocates the new page from a non-CMA region. I've tested the patches lightly at my end.

The full solution requires migration of THP pages in the CMA region; that work will be done incrementally on top of this.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
[mpe: Merged via powerpc tree as that's where the changes are]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
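The commit message names new_iommu_non_cma_page() as the allocation hook handed to the page migration code. As a rough, hedged sketch (the gfp flags, the huge-page checks, and the callback signature here are assumptions based on the generic migrate_pages() API of that era, not the exact upstream code), such a callback might look like this:

	#include <linux/gfp.h>
	#include <linux/hugetlb.h>
	#include <linux/mm.h>

	/*
	 * Illustrative sketch only: an allocation callback for migrate_pages()
	 * that returns a replacement page from outside CMA. The name
	 * new_iommu_non_cma_page() comes from the commit message; the flags
	 * and checks below are assumptions.
	 */
	static struct page *new_iommu_non_cma_page(struct page *page,
						   unsigned long private,
						   int **resultp)
	{
		/* Huge/THP pages are not handled yet, per the commit message */
		if (PageHuge(page) || PageTransHuge(page) || PageCompound(page))
			return NULL;

		/*
		 * GFP_USER does not include __GFP_MOVABLE, so the allocator
		 * will not satisfy this from CMA/movable pageblocks; the new
		 * page can then be pinned long-term without tying up CMA.
		 */
		return alloc_page(GFP_USER | __GFP_NORETRY | __GFP_NOWARN);
	}

The key design point is simply the gfp mask: because the replacement page is allocated without __GFP_MOVABLE, the long-lived pin no longer lands in the CMA region that the hash page table and other large allocations depend on.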
Diffstat (limited to 'arch/powerpc/include/asm/mmu_context.h')
-rw-r--r--  arch/powerpc/include/asm/mmu_context.h  1
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 9d2cd0c36ec2..475d1be39191 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -18,6 +18,7 @@ extern void destroy_context(struct mm_struct *mm);
 #ifdef CONFIG_SPAPR_TCE_IOMMU
 struct mm_iommu_table_group_mem_t;
 
+extern int isolate_lru_page(struct page *page); /* from internal.h */
 extern bool mm_iommu_preregistered(void);
 extern long mm_iommu_get(unsigned long ua, unsigned long entries,
 		struct mm_iommu_table_group_mem_t **pmem);
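For context, here is a hedged sketch of how the isolate_lru_page() declaration added above might be combined with migrate_pages() before the long-term pin. The helper name move_page_out_of_cma(), its error handling, and the migration reason are illustrative assumptions, and it reuses the new_iommu_non_cma_page() callback sketched after the commit message; the actual powerpc implementation lives outside this header.

	#include <linux/migrate.h>
	#include <linux/mm.h>
	#include <linux/swap.h>

	/*
	 * Illustrative only: migrate a single page out of CMA before pinning
	 * it for the IOMMU. isolate_lru_page() is the declaration added above;
	 * migrate_pages() and putback_movable_pages() are generic mm APIs.
	 */
	static int move_page_out_of_cma(struct page *page)
	{
		LIST_HEAD(migrate_list);
		int ret;

		lru_add_drain();		/* flush per-CPU pagevecs onto the LRU */
		ret = isolate_lru_page(page);	/* detach the page from its LRU list */
		if (ret)
			return ret;

		list_add(&page->lru, &migrate_list);
		put_page(page);			/* drop the get_user_pages() reference */

		ret = migrate_pages(&migrate_list, new_iommu_non_cma_page,
				    NULL, 0, MIGRATE_SYNC, MR_CMA);
		if (ret && !list_empty(&migrate_list))
			putback_movable_pages(&migrate_list);

		return ret;
	}

In this sketch the caller would invoke the helper on each page returned by get_user_pages_fast() that falls inside CMA, then re-pin the (now non-CMA) page before programming the IOMMU tables.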