SOURCES/kvm-aio-Do-aio_notify_accept-only-during-blocking-aio_po.patch

From 4809b6fbd13f8fc67daf1e37254d98e8fb9a9f20 Mon Sep 17 00:00:00 2001
From: Fam Zheng <famz@redhat.com>
Date: Tue, 9 Oct 2018 08:16:48 +0100
Subject: [PATCH 04/17] aio: Do aio_notify_accept only during blocking aio_poll

RH-Author: Fam Zheng <famz@redhat.com>
Message-id: <20181009081651.15463-3-famz@redhat.com>
Patchwork-id: 82450
O-Subject: [RHEL8/rhel qemu-kvm PATCH 2/5] aio: Do aio_notify_accept only during blocking aio_poll
Bugzilla: 1623085
RH-Acked-by: Thomas Huth <thuth@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Danilo de Paula <ddepaula@redhat.com>

BZ: 1623085

An aio_notify() pairs with an aio_notify_accept(). The former should
happen in the main thread or a vCPU thread, and the latter should be
done in the IOThread.

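For background, the two halves of this pairing look roughly as follows
in util/async.c of this period (a simplified sketch, not a verbatim
quote of the source):

    /* Sketch: aio_notify() only kicks the notifier when some thread
     * has advertised a blocking poll via ctx->notify_me. */
    void aio_notify(AioContext *ctx)
    {
        smp_mb();   /* order prior writes before reading notify_me */
        if (ctx->notify_me) {
            event_notifier_set(&ctx->notifier);
            atomic_mb_set(&ctx->notified, true);
        }
    }

    /* Sketch: aio_notify_accept() consumes a pending notification,
     * clearing the eventfd so the next poll can block again. */
    void aio_notify_accept(AioContext *ctx)
    {
        if (atomic_xchg(&ctx->notified, false)) {
            event_notifier_test_and_clear(&ctx->notifier);
        }
    }
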
There is one rare case in which the main thread or a vCPU thread may
"steal" the aio_notify() event it has just raised itself, in
bdrv_set_aio_context() [1]. The sequence is like this:

    main thread                     IO Thread
    ===============================================================
    bdrv_drained_begin()
      aio_disable_external(ctx)
                                    aio_poll(ctx, true)
                                      ctx->notify_me += 2
    ...
    bdrv_drained_end()
      ...
        aio_notify()
    ...
    bdrv_set_aio_context()
      aio_poll(ctx, false)
[1]     aio_notify_accept(ctx)
                                      ppoll() /* Hang! */

[1] is problematic. It will clear the ctx->notifier event so that
the blocked ppoll() will not return.

(For the curious, this bug was noticed when booting a number of VMs
simultaneously in RHV. One or two of the VMs would hit this race
condition, making the VIRTIO device unresponsive to I/O commands. When
it hangs, SeaBIOS is busy-waiting for a read request to complete (read
MBR) right after initializing the virtio-blk-pci device, using 100%
guest CPU. See also https://bugzilla.redhat.com/show_bug.cgi?id=1562750
for the original bug analysis.)

aio_notify() only injects an event when ctx->notify_me is set;
correspondingly, aio_notify_accept() is only useful when
ctx->notify_me _was_ set. Move the call to it into the "blocking"
branch. This will effectively skip [1] and fix the hang.

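Condensed, the POSIX aio_poll() takes the following shape after this
change (a sketch only; pollfds, npfd and timeout stand in for the real
poll set, and the polling fast path and handler dispatch are omitted;
see the hunks below for the exact change):

    /* Sketch of aio_poll() after the fix (POSIX variant). */
    bool aio_poll(AioContext *ctx, bool blocking)
    {
        int ret;

        if (blocking) {
            /* Only the home thread may block on this context. */
            assert(in_aio_context_home_thread(ctx));
            atomic_add(&ctx->notify_me, 2);
        }

        ret = ppoll(pollfds, npfd, timeout, NULL);   /* may block */

        if (blocking) {
            atomic_sub(&ctx->notify_me, 2);
            /* Consume only a notification we ourselves waited for. */
            aio_notify_accept(ctx);
        }

        /* ... dispatch ready handlers ... */
        return ret > 0;
    }
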
Furthermore, a blocking aio_poll() is only allowed in the home thread
(in_aio_context_home_thread), because otherwise two blocking
aio_poll() calls could steal each other's ctx->notifier event and
cause a hang just like the one described above.

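The new assertions rely on in_aio_context_home_thread(), which in this
era of the tree is essentially the following one-liner (simplified;
see include/block/aio.h):

    /* The "home" thread of an AioContext is the thread that runs its
     * event loop: the main loop thread for the global context, or the
     * owning IOThread otherwise. */
    static inline bool in_aio_context_home_thread(AioContext *ctx)
    {
        return ctx == qemu_get_current_aio_context();
    }
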
Cc: qemu-stable@nongnu.org
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20180809132259.18402-3-famz@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
(cherry picked from commit b37548fcd1b8ac2e88e185a395bef851f3fc4e65)
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 util/aio-posix.c | 4 ++--
 util/aio-win32.c | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/util/aio-posix.c b/util/aio-posix.c
index f650c7c..f05d3a8 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -591,6 +591,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
      * so disable the optimization now.
      */
     if (blocking) {
+        assert(in_aio_context_home_thread(ctx));
         atomic_add(&ctx->notify_me, 2);
     }
 
@@ -633,6 +634,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     if (blocking) {
         atomic_sub(&ctx->notify_me, 2);
+        aio_notify_accept(ctx);
     }
 
     /* Adjust polling time */
@@ -676,8 +678,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    aio_notify_accept(ctx);
-
     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
         for (i = 0; i < npfd; i++) {
diff --git a/util/aio-win32.c b/util/aio-win32.c
index a67b00c..ac5524c 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -373,11 +373,12 @@ bool aio_poll(AioContext *ctx, bool blocking)
         ret = WaitForMultipleObjects(count, events, FALSE, timeout);
         if (blocking) {
             assert(first);
+            assert(in_aio_context_home_thread(ctx));
             atomic_sub(&ctx->notify_me, 2);
+            aio_notify_accept(ctx);
         }
 
         if (first) {
-            aio_notify_accept(ctx);
             progress |= aio_bh_poll(ctx);
             first = false;
         }
-- 
1.8.3.1