debuggers.hg
changeset 16448:05cbf512b82b
x86: rmb() can be weakened according to new Intel spec.
Both Intel and AMD agree that, from a programmer's viewpoint:
Loads cannot be reordered relative to other loads.
Stores cannot be reordered relative to other stores.
Intel64 Architecture Memory Ordering White Paper
<http://developer.intel.com/products/processor/manuals/318147.pdf>
AMD64 Architecture Programmer's Manual, Volume 2: System Programming
<http://www.amd.com/us-en/assets/content_type/\
white_papers_and_tech_docs/24593.pdf>
Signed-off-by: Keir Fraser <keir.fraser@eu.citrix.com>
author   Keir Fraser <keir.fraser@citrix.com>
date     Wed Nov 21 14:36:07 2007 +0000 (2007-11-21)
parents  7ccf7d373d0e
children 93d129d27f69 53dc1cf50506
files    xen/include/asm-x86/system.h
         xen/include/asm-x86/x86_32/system.h
         xen/include/asm-x86/x86_64/system.h
line diff
--- a/xen/include/asm-x86/system.h	Wed Nov 21 14:27:38 2007 +0000
+++ b/xen/include/asm-x86/system.h	Wed Nov 21 14:36:07 2007 +0000
@@ -135,6 +135,21 @@ static always_inline unsigned long __cmp
 
 #define __HAVE_ARCH_CMPXCHG
 
+/*
+ * Both Intel and AMD agree that, from a programmer's viewpoint:
+ *  Loads cannot be reordered relative to other loads.
+ *  Stores cannot be reordered relative to other stores.
+ *
+ * Intel64 Architecture Memory Ordering White Paper
+ * <http://developer.intel.com/products/processor/manuals/318147.pdf>
+ *
+ * AMD64 Architecture Programmer's Manual, Volume 2: System Programming
+ * <http://www.amd.com/us-en/assets/content_type/\
+ * white_papers_and_tech_docs/24593.pdf>
+ */
+#define rmb() barrier()
+#define wmb() barrier()
+
 #ifdef CONFIG_SMP
 #define smp_mb() mb()
 #define smp_rmb() rmb()
--- a/xen/include/asm-x86/x86_32/system.h	Wed Nov 21 14:27:38 2007 +0000
+++ b/xen/include/asm-x86/x86_32/system.h	Wed Nov 21 14:36:07 2007 +0000
@@ -98,9 +98,8 @@ static inline void atomic_write64(uint64
     w = x;
 }
 
-#define mb() asm volatile ( "lock; addl $0,0(%%esp)" : : : "memory" )
-#define rmb() asm volatile ( "lock; addl $0,0(%%esp)" : : : "memory" )
-#define wmb() asm volatile ( "" : : : "memory" )
+#define mb() \
+    asm volatile ( "lock; addl $0,0(%%esp)" : : : "memory" )
 
 #define __save_flags(x) \
     asm volatile ( "pushfl ; popl %0" : "=g" (x) : )
--- a/xen/include/asm-x86/x86_64/system.h	Wed Nov 21 14:27:38 2007 +0000
+++ b/xen/include/asm-x86/x86_64/system.h	Wed Nov 21 14:36:07 2007 +0000
@@ -52,9 +52,8 @@ static inline void atomic_write64(uint64
     *p = v;
 }
 
-#define mb() asm volatile ( "mfence" : : : "memory" )
-#define rmb() asm volatile ( "lfence" : : : "memory" )
-#define wmb() asm volatile ( "" : : : "memory" )
+#define mb() \
+    asm volatile ( "mfence" : : : "memory" )
 
 #define __save_flags(x) \
     asm volatile ( "pushfq ; popq %q0" : "=g" (x) : :"memory" )