xen/pvh/mmu: Use PV TLB instead of native.
author    Mukesh Rathor <mukesh.rathor@oracle.com>
          Fri, 3 Jan 2014 14:48:08 +0000 (09:48 -0500)
committer Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
          Mon, 6 Jan 2014 15:44:07 +0000 (10:44 -0500)
We also optimize one operation - the TLB flush. The native
operation would needlessly IPI offline VCPUs, causing extra
wakeups. Using the Xen one avoids that and lets the hypervisor
determine which VCPUs need the TLB flush (a sketch of the PV
flush path follows the diff below).

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
arch/x86/xen/mmu.c

index 490ddb354590442cb4cc3b4ca033391334b08f56..c1d406f35523143f7fc21f41a71dc0658c5e1823 100644
@@ -2222,6 +2222,15 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 void __init xen_init_mmu_ops(void)
 {
        x86_init.paging.pagetable_init = xen_pagetable_init;
+
+       /* Optimization - we could use the native flush, but it has no
+        * idea which VCPUs are descheduled and would needlessly IPI
+        * them.  Xen knows, so let it do the job.
+        */
+       if (xen_feature(XENFEAT_auto_translated_physmap)) {
+               pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
+               return;
+       }
        pv_mmu_ops = xen_mmu_ops;
 
        memset(dummy_mapping, 0xff, PAGE_SIZE);
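For context, the PV callback wired up above takes roughly the
following shape. This is a simplified sketch modeled on
xen_flush_tlb_others() in arch/x86/xen/mmu.c from this era, with
tracing and corner cases omitted; the point is that it issues a
single MMUEXT_TLB_FLUSH_MULTI (or MMUEXT_INVLPG_MULTI) hypercall
carrying a vCPU mask instead of raising IPIs, so Xen can handle
descheduled vCPUs without waking them:

static void xen_flush_tlb_others(const struct cpumask *cpus,
				 struct mm_struct *mm, unsigned long start,
				 unsigned long end)
{
	struct {
		struct mmuext_op op;
		DECLARE_BITMAP(mask, NR_CPUS);
	} *args;
	struct multicall_space mcs;

	mcs = xen_mc_entry(sizeof(*args));
	args = mcs.args;
	args->op.arg2.vcpumask = to_cpumask(args->mask);

	/* Target every requested CPU that is online, except ourselves. */
	cpumask_and(to_cpumask(args->mask), cpus, cpu_online_mask);
	cpumask_clear_cpu(smp_processor_id(), to_cpumask(args->mask));

	args->op.cmd = MMUEXT_TLB_FLUSH_MULTI;
	if (end != TLB_FLUSH_ALL && (end - start) <= PAGE_SIZE) {
		/* Single page: let Xen INVLPG it on each vCPU instead. */
		args->op.cmd = MMUEXT_INVLPG_MULTI;
		args->op.arg1.linear_addr = start;
	}

	/* One hypercall replaces a round of IPIs; Xen decides which
	 * vCPUs in the mask actually need to act on the flush. */
	MULTI_mmuext_op(mcs.mc, &args->op, 1, NULL, DOMID_SELF);
	xen_mc_issue(PARAVIRT_LAZY_MMU);
}

Note that the flush goes through the multicall machinery
(xen_mc_entry/xen_mc_issue), so it can be batched with other
pending MMU operations rather than trapping to Xen immediately.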