debuggers.hg
changeset 22827:3decd02e0b18
PoD,hap: Fix logdirty mode when using hardware assisted paging
When writing a writable p2m entry for a pfn, we need to mark the pfn
dirty to avoid corruption when doing live migration.
Marking the page dirty exposes another issue, where there are
excessive sweeps for zero pages if there's a mismatch between PoD
entries and cache entries. Only sweep for zero pages if we actually
need more memory.
Signed-off-by: George Dunlap <george.dunlap@eu.citrix.com>
Acked-by: Tim Deegan <Tim.Deegan@citrix.com>
author   | George Dunlap <george.dunlap@eu.citrix.com>
date     | Mon Jan 17 14:29:01 2011 +0000 (2011-01-17)
parents  | 97ab84aca65c
children | fe8a177ae9cb
files    | xen/arch/x86/mm/p2m.c
line diff
--- a/xen/arch/x86/mm/p2m.c	Mon Jan 17 14:24:13 2011 +0000
+++ b/xen/arch/x86/mm/p2m.c	Mon Jan 17 14:29:01 2011 +0000
@@ -1142,14 +1142,22 @@ p2m_pod_demand_populate(struct p2m_domai
         return 0;
     }
 
-    /* If we're low, start a sweep */
-    if ( order == 9 && page_list_empty(&p2m->pod.super) )
-        p2m_pod_emergency_sweep_super(p2m);
-
-    if ( page_list_empty(&p2m->pod.single) &&
-         ( ( order == 0 )
-           || (order == 9 && page_list_empty(&p2m->pod.super) ) ) )
-        p2m_pod_emergency_sweep(p2m);
+    /* Once we've ballooned down enough that we can fill the remaining
+     * PoD entries from the cache, don't sweep even if the particular
+     * list we want to use is empty: that can lead to thrashing zero pages
+     * through the cache for no good reason. */
+    if ( p2m->pod.entry_count > p2m->pod.count )
+    {
+        /* If we're low, start a sweep */
+        if ( order == 9 && page_list_empty(&p2m->pod.super) )
+            p2m_pod_emergency_sweep_super(p2m);
+
+        if ( page_list_empty(&p2m->pod.single) &&
+             ( ( order == 0 )
+               || (order == 9 && page_list_empty(&p2m->pod.super) ) ) )
+            p2m_pod_emergency_sweep(p2m);
+    }
 
     /* Keep track of the highest gfn demand-populated by a guest fault */
     if ( q == p2m_guest && gfn > p2m->pod.max_guest )
@@ -1176,7 +1184,10 @@ p2m_pod_demand_populate(struct p2m_domai
     set_p2m_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw, p2m->default_access);
 
     for( i = 0; i < (1UL << order); i++ )
+    {
         set_gpfn_from_mfn(mfn_x(mfn) + i, gfn_aligned + i);
+        paging_mark_dirty(d, mfn_x(mfn) + i);
+    }
 
     p2m->pod.entry_count -= (1 << order); /* Lock: p2m */
     BUG_ON(p2m->pod.entry_count < 0);