SOURCES/kvm-atomics-add-explicit-compiler-fence-in-__atomic-memo.patch
From d37475eb567b61ce6a18f9fcbf35eb929be8d99f Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Fri, 19 Jun 2015 10:45:29 +0200
Subject: [PATCH] atomics: add explicit compiler fence in __atomic memory
 barriers

Message-id: <1434710730-26183-1-git-send-email-pbonzini@redhat.com>
Patchwork-id: 66333
O-Subject: [RHEL7.2/7.1.z qemu-kvm PATCH] atomics: add explicit compiler fence in __atomic memory barriers
Bugzilla: 1142857
RH-Acked-by: Fam Zheng <famz@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Bugzilla: 1142857
Brew build: 9393725

__atomic_thread_fence does not include a compiler barrier; in the
C++11 memory model, fences take effect in combination with other
atomic operations.  GCC implements this by making __atomic_load and
__atomic_store access memory as if the pointer was volatile, and
leaves no trace whatsoever of acquire and release fences in the
compiler's intermediate representation.

In QEMU, we want memory barriers to act on all memory, but at the same
time we would like to use __atomic_thread_fence for portability reasons.
Add compiler barriers manually around the __atomic_thread_fence.

Thanks to Uli and Kevin for analyzing this bug!

Message-Id: <1433334080-14912-1-git-send-email-pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 3bbf572345c65813f86a8fc434ea1b23beb08e16)
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
 include/qemu/atomic.h | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index 0aa8913..690d0d6 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -99,7 +99,13 @@
 
 #ifndef smp_wmb
 #ifdef __ATOMIC_RELEASE
-#define smp_wmb()   __atomic_thread_fence(__ATOMIC_RELEASE)
+/* __atomic_thread_fence does not include a compiler barrier; instead,
+ * the barrier is part of __atomic_load/__atomic_store's "volatile-like"
+ * semantics. If smp_wmb() is a no-op, absence of the barrier means that
+ * the compiler is free to reorder stores on each side of the barrier.
+ * Add one here, and similarly in smp_rmb() and smp_read_barrier_depends().
+ */
+#define smp_wmb()   ({ barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); barrier(); })
 #else
 #define smp_wmb()   __sync_synchronize()
 #endif
@@ -107,7 +113,7 @@
 
 #ifndef smp_rmb
 #ifdef __ATOMIC_ACQUIRE
-#define smp_rmb()   __atomic_thread_fence(__ATOMIC_ACQUIRE)
+#define smp_rmb()   ({ barrier(); __atomic_thread_fence(__ATOMIC_ACQUIRE); barrier(); })
 #else
 #define smp_rmb()   __sync_synchronize()
 #endif
@@ -115,7 +121,7 @@
 
 #ifndef smp_read_barrier_depends
 #ifdef __ATOMIC_CONSUME
-#define smp_read_barrier_depends()   __atomic_thread_fence(__ATOMIC_CONSUME)
+#define smp_read_barrier_depends()   ({ barrier(); __atomic_thread_fence(__ATOMIC_CONSUME); barrier(); })
 #else
 #define smp_read_barrier_depends()   barrier()
 #endif
-- 
1.8.3.1
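
For illustration, the following standalone C sketch shows the same pattern the hunks above introduce, outside of QEMU. The names barrier_(), data, ready and publish() are hypothetical and chosen for this example only; barrier_() plays the role of QEMU's barrier() macro (an empty asm statement with a "memory" clobber), and the fence is the same GCC builtin used by smp_wmb().

/* Illustrative sketch, not QEMU source. */
#define barrier_()  __asm__ __volatile__("" ::: "memory")

static int data;
static int ready;

void publish(int value)
{
    data = value;            /* plain store that must become visible first */

    /* A bare __atomic_thread_fence(__ATOMIC_RELEASE) orders only atomic
     * accesses in the C11/C++11 model; with the GCC behaviour described in
     * the commit message above it leaves no compiler-level ordering for the
     * plain stores around it.  Bracketing it with compiler barriers, exactly
     * as the patched smp_wmb() does, keeps each store on its own side. */
    barrier_();
    __atomic_thread_fence(__ATOMIC_RELEASE);
    barrier_();

    ready = 1;               /* flag store published after the data */
}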