From cf6bc30f7b525f0d646db62e49cbf02f3f28a1f2 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Wed, 14 Aug 2019 08:42:29 +0100
Subject: [PATCH 06/10] block: Use normal drain for bdrv_set_aio_context()

RH-Author: Kevin Wolf <kwolf@redhat.com>
Message-id: <20190814084229.6458-6-kwolf@redhat.com>
Patchwork-id: 89968
O-Subject: [RHEL-8.1.0 qemu-kvm PATCH 5/5] block: Use normal drain for bdrv_set_aio_context()
Bugzilla: 1716349
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Now that bdrv_set_aio_context() works inside drained sections, it can
also use the real drain function instead of open-coding something
similar.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d70d595429ecd9ac4917e53453dd8979db8e5ffd)

RHEL: This conflicts because we didn't backport the removal of the
polling loop. The conflict is resolved so that the polling loop moves
above the drain, and any requests a BH would spawn are still drained
correctly afterwards. The changed order alone would have compensated
for the virtio-blk bug, and it potentially compensates for other bugs,
too (we know of bugs in the NBD client at least), so leaving the
polling loop in, with the new ordering, feels like the safe way for a
downstream backport.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
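
As an illustration of the locking rule documented in the comment this patch
adds above bdrv_set_aio_context(), a caller is expected to look roughly like
the following minimal sketch (bs and new_ctx are placeholder names, not part
of this patch):

    /* Hold the lock of the BDS's current (old) AioContext, but not the
     * lock of new_ctx: bdrv_set_aio_context() acquires and releases
     * new_ctx itself while moving bs across. */
    AioContext *old_ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(old_ctx);
    bdrv_set_aio_context(bs, new_ctx);
    aio_context_release(old_ctx);
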
diff --git a/block.c b/block.c
index 9d9b8a9..8f3ceea 100644
--- a/block.c
+++ b/block.c
@@ -4989,18 +4989,18 @@ void bdrv_attach_aio_context(BlockDriverState *bs,
     bs->walking_aio_notifiers = false;
 }
 
+/* The caller must own the AioContext lock for the old AioContext of bs, but it
+ * must not own the AioContext lock for new_context (unless new_context is
+ * the same as the current context of bs). */
 void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
 {
     AioContext *ctx = bdrv_get_aio_context(bs);
 
-    aio_disable_external(ctx);
-    bdrv_parent_drained_begin(bs, NULL, false);
-    bdrv_drain(bs); /* ensure there are no in-flight requests */
-
     while (aio_poll(ctx, false)) {
         /* wait for all bottom halves to execute */
     }
 
+    bdrv_drained_begin(bs);
     bdrv_detach_aio_context(bs);
 
     /* This function executes in the old AioContext so acquire the new one in
@@ -5008,8 +5008,7 @@ void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
      */
     aio_context_acquire(new_context);
     bdrv_attach_aio_context(bs, new_context);
-    bdrv_parent_drained_end(bs, NULL, false);
-    aio_enable_external(ctx);
+    bdrv_drained_end(bs);
     aio_context_release(new_context);
 }
 
-- 
1.8.3.1