SOURCES/kvm-aio-Do-aio_notify_accept-only-during-blocking-aio_po.patch


From 4809b6fbd13f8fc67daf1e37254d98e8fb9a9f20 Mon Sep 17 00:00:00 2001
From: Fam Zheng <famz@redhat.com>
Date: Tue, 9 Oct 2018 08:16:48 +0100
Subject: [PATCH 04/17] aio: Do aio_notify_accept only during blocking aio_poll
RH-Author: Fam Zheng <famz@redhat.com>
Message-id: <20181009081651.15463-3-famz@redhat.com>
Patchwork-id: 82450
O-Subject: [RHEL8/rhel qemu-kvm PATCH 2/5] aio: Do aio_notify_accept only during blocking aio_poll
Bugzilla: 1623085
RH-Acked-by: Thomas Huth <thuth@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Danilo de Paula <ddepaula@redhat.com>
BZ: 1623085
An aio_notify() pairs with an aio_notify_accept(). The former should
happen in the main thread or a vCPU thread, and the latter should be
done in the IOThread.
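
A rough sketch of this pairing (an illustrative simplification of
QEMU's util/async.c, not the exact upstream code):

    void aio_notify(AioContext *ctx)
    {
        smp_mb();   /* order prior writes before reading ctx->notify_me */
        if (ctx->notify_me) {
            /* Someone is blocked (or about to block) in aio_poll():
             * wake it up by signalling the context's EventNotifier.
             */
            event_notifier_set(&ctx->notifier);
        }
    }

    void aio_notify_accept(AioContext *ctx)
    {
        /* Consume a pending wakeup so the next blocking poll does not
         * return spuriously.
         */
        event_notifier_test_and_clear(&ctx->notifier);
    }

Whoever calls aio_notify_accept() consumes the wakeup; if that happens
in a thread other than the one blocked in ppoll(), the event is lost,
which is exactly the hang described below.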
There is one rare case in which the main thread or a vCPU thread may
"steal" the aio_notify() event it has just raised itself, in
bdrv_set_aio_context() [1]. The sequence is as follows:
    main thread                     IO Thread
    ===============================================================
    bdrv_drained_begin()
      aio_disable_external(ctx)
                                    aio_poll(ctx, true)
                                      ctx->notify_me += 2
    ...
    bdrv_drained_end()
      ...
        aio_notify()
    ...
    bdrv_set_aio_context()
      aio_poll(ctx, false)
[1]     aio_notify_accept(ctx)
                                      ppoll() /* Hang! */
[1] is problematic. It will clear the ctx->notifier event so that
the blocked ppoll() will not return.
(For the curious, this bug was noticed when booting a number of VMs
simultaneously in RHV.  One or two of the VMs would hit this race
condition, making the virtio device unresponsive to I/O commands. When
it hangs, SeaBIOS busy-waits for a read request to complete (the MBR
read), right after initializing the virtio-blk-pci device, using 100%
guest CPU. See also https://bugzilla.redhat.com/show_bug.cgi?id=1562750
for the original bug analysis.)
aio_notify() only injects an event when ctx->notify_me is set;
correspondingly, aio_notify_accept() is only useful when ctx->notify_me
_was_ set. Move the call to it into the "blocking" branch. This will
effectively skip [1] and fix the hang.
Furthermore, a blocking aio_poll() is only allowed in the context's
home thread (in_aio_context_home_thread()), because otherwise two
blocking aio_poll() calls could steal each other's ctx->notifier event
and cause a hang just like the one described above.
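
("Home thread" means the thread that normally runs this AioContext; a
minimal sketch of the helper, essentially what include/block/aio.h
provides in this version:)

    static inline bool in_aio_context_home_thread(AioContext *ctx)
    {
        /* True when called from the thread that runs ctx, e.g. the
         * IOThread of an iothread-backed AioContext.
         */
        return ctx == qemu_get_current_aio_context();
    }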
Cc: qemu-stable@nongnu.org
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180809132259.18402-3-famz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
(cherry picked from commit b37548fcd1b8ac2e88e185a395bef851f3fc4e65)
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 util/aio-posix.c | 4 ++--
 util/aio-win32.c | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/util/aio-posix.c b/util/aio-posix.c
index f650c7c..f05d3a8 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -591,6 +591,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
      * so disable the optimization now.
      */
     if (blocking) {
+        assert(in_aio_context_home_thread(ctx));
         atomic_add(&ctx->notify_me, 2);
     }
 
@@ -633,6 +634,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     if (blocking) {
         atomic_sub(&ctx->notify_me, 2);
+        aio_notify_accept(ctx);
     }
 
     /* Adjust polling time */
@@ -676,8 +678,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    aio_notify_accept(ctx);
-
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
         for (i = 0; i < npfd; i++) {
diff --git a/util/aio-win32.c b/util/aio-win32.c
index a67b00c..ac5524c 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -373,11 +373,12 @@ bool aio_poll(AioContext *ctx, bool blocking)
         ret = WaitForMultipleObjects(count, events, FALSE, timeout);
         if (blocking) {
             assert(first);
+            assert(in_aio_context_home_thread(ctx));
             atomic_sub(&ctx->notify_me, 2);
+            aio_notify_accept(ctx);
         }
 
         if (first) {
-            aio_notify_accept(ctx);
             progress |= aio_bh_poll(ctx);
             first = false;
         }
-- 
1.8.3.1