SOURCES/kvm-aio-wait-delegate-polling-of-main-AioContext-if-BQL-.patch

From b474155fdc38f86f516c14ba9a6f934616d589ef Mon Sep 17 00:00:00 2001
From: Andrew Jones <drjones@redhat.com>
Date: Wed, 4 Aug 2021 03:27:22 -0400
Subject: [PATCH 1/2] aio-wait: delegate polling of main AioContext if BQL not
 held

RH-Author: Andrew Jones <drjones@redhat.com>
Message-id: <20210729134448.4995-2-drjones@redhat.com>
Patchwork-id: 101935
O-Subject: [RHEL-8.5.0 qemu-kvm PATCH v2 1/2] aio-wait: delegate polling of main AioContext if BQL not held
Bugzilla: 1969848
RH-Acked-by: Gavin Shan <gshan@redhat.com>
RH-Acked-by: Auger Eric <eric.auger@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>

From: Paolo Bonzini <pbonzini@redhat.com>

Any thread that is not an iothread returns NULL for qemu_get_current_aio_context().
As a result, it would also return true for
in_aio_context_home_thread(qemu_get_aio_context()), causing
AIO_WAIT_WHILE to invoke aio_poll() directly.  This is incorrect
if the BQL is not held, because aio_poll() does not expect to
run concurrently from multiple threads, and it can actually
happen when savevm writes to the vmstate file from the
migration thread.

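For context, the predicate gates AIO_WAIT_WHILE roughly as in the
condensed sketch below; this is not the verbatim macro from
include/block/aio-wait.h, which also tracks num_waiters and manages
the AioContext lock around aio_poll():

    /* Condensed sketch of the AIO_WAIT_WHILE dispatch. */
    if (ctx && in_aio_context_home_thread(ctx)) {
        /* Caller is @ctx's home thread: poll @ctx directly. */
        while ((cond)) {
            aio_poll(ctx, true);
        }
    } else {
        /* Caller is not the home thread: it must not poll @ctx
         * itself, and instead waits in the main context for the
         * home thread to make progress. */
        while ((cond)) {
            aio_poll(qemu_get_aio_context(), true);
        }
    }

With the old predicate, a BQL-less caller such as the migration thread
took the first branch for the main AioContext and polled it concurrently
with the main loop.
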
Therefore, restrict in_aio_context_home_thread to return true
for the main AioContext only if the BQL is held.

The function is moved to aio-wait.h because it is mostly used
there and to avoid a circular reference between main-loop.h
and block/aio.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20200407140746.8041-5-pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 3c18a92dc4b55ca8cc37a755ed119f11c0f34099)
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
 include/block/aio-wait.h | 22 ++++++++++++++++++++++
 include/block/aio.h      | 29 ++++++++++-------------------
 2 files changed, 32 insertions(+), 19 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index afeeb18f95..716d2639df 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -26,6 +26,7 @@
 #define QEMU_AIO_WAIT_H
 
 #include "block/aio.h"
+#include "qemu/main-loop.h"
 
 /**
  * AioWait:
@@ -124,4 +125,25 @@ void aio_wait_kick(void);
  */
 void aio_wait_bh_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque);
 
+/**
+ * in_aio_context_home_thread:
+ * @ctx: the aio context
+ *
+ * Return whether we are running in the thread that normally runs @ctx.  Note
+ * that acquiring/releasing ctx does not affect the outcome, each AioContext
+ * still only has one home thread that is responsible for running it.
+ */
+static inline bool in_aio_context_home_thread(AioContext *ctx)
+{
+    if (ctx == qemu_get_current_aio_context()) {
+        return true;
+    }
+
+    if (ctx == qemu_get_aio_context()) {
+        return qemu_mutex_iothread_locked();
+    } else {
+        return false;
+    }
+}
+
 #endif /* QEMU_AIO_WAIT_H */
diff --git a/include/block/aio.h b/include/block/aio.h
index 6b0d52f732..9d28e247df 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -60,12 +60,16 @@ struct AioContext {
     QLIST_HEAD(, AioHandler) aio_handlers;
 
     /* Used to avoid unnecessary event_notifier_set calls in aio_notify;
-     * accessed with atomic primitives.  If this field is 0, everything
-     * (file descriptors, bottom halves, timers) will be re-evaluated
-     * before the next blocking poll(), thus the event_notifier_set call
-     * can be skipped.  If it is non-zero, you may need to wake up a
-     * concurrent aio_poll or the glib main event loop, making
-     * event_notifier_set necessary.
+     * only written from the AioContext home thread, or under the BQL in
+     * the case of the main AioContext.  However, it is read from any
+     * thread so it is still accessed with atomic primitives.
+     *
+     * If this field is 0, everything (file descriptors, bottom halves,
+     * timers) will be re-evaluated before the next blocking poll() or
+     * io_uring wait; therefore, the event_notifier_set call can be
+     * skipped.  If it is non-zero, you may need to wake up a concurrent
+     * aio_poll or the glib main event loop, making event_notifier_set
+     * necessary.
      *
      * Bit 0 is reserved for GSource usage of the AioContext, and is 1
      * between a call to aio_ctx_prepare and the next call to aio_ctx_check.
@@ -580,19 +584,6 @@ void aio_co_enter(AioContext *ctx, struct Coroutine *co);
  */
 AioContext *qemu_get_current_aio_context(void);
 
-/**
- * in_aio_context_home_thread:
- * @ctx: the aio context
- *
- * Return whether we are running in the thread that normally runs @ctx.  Note
- * that acquiring/releasing ctx does not affect the outcome, each AioContext
- * still only has one home thread that is responsible for running it.
- */
-static inline bool in_aio_context_home_thread(AioContext *ctx)
-{
-    return ctx == qemu_get_current_aio_context();
-}
-
 /**
  * aio_context_setup:
  * @ctx: the aio context
-- 
2.27.0