SOURCES/kvm-block-Remove-aio_poll-in-bdrv_drain_poll-variants.patch

From 30bdfc5373eab96cb1f3d62ab90b07becd885272 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Wed, 10 Oct 2018 20:22:07 +0100
Subject: [PATCH 41/49] block: Remove aio_poll() in bdrv_drain_poll variants

RH-Author: Kevin Wolf <kwolf@redhat.com>
Message-id: <20181010202213.7372-29-kwolf@redhat.com>
Patchwork-id: 82619
O-Subject: [RHEL-8 qemu-kvm PATCH 38/44] block: Remove aio_poll() in bdrv_drain_poll variants
Bugzilla: 1637976
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: John Snow <jsnow@redhat.com>
RH-Acked-by: Thomas Huth <thuth@redhat.com>

bdrv_drain_poll_top_level() was buggy because it didn't release the
AioContext lock of the node to be drained before calling aio_poll().
This way, callbacks called by aio_poll() would possibly take the lock a
second time and run into a deadlock with a nested AIO_WAIT_WHILE() call.

However, it turns out that the aio_poll() call isn't actually needed any
more. It was introduced in commit 91af091f923, which is effectively
reverted by this patch. The cases it was supposed to fix are now covered
by bdrv_drain_poll(), which waits for block jobs to reach a quiescent
state.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 4cf077b59fc73eec29f8b7d082919dbb278bdc86)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/io.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/block/io.c b/block/io.c
index 19db35e..3313958 100644
--- a/block/io.c
+++ b/block/io.c
@@ -266,10 +266,6 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursive,
 static bool bdrv_drain_poll_top_level(BlockDriverState *bs, bool recursive,
                                       BdrvChild *ignore_parent)
 {
-    /* Execute pending BHs first and check everything else only after the BHs
-     * have executed. */
-    while (aio_poll(bs->aio_context, false));
-
     return bdrv_drain_poll(bs, recursive, ignore_parent, false);
 }
 
@@ -509,10 +505,6 @@ static bool bdrv_drain_all_poll(void)
     BlockDriverState *bs = NULL;
     bool result = false;
 
-    /* Execute pending BHs first (may modify the graph) and check everything
-     * else only after the BHs have executed. */
-    while (aio_poll(qemu_get_aio_context(), false));
-
     /* bdrv_drain_poll() can't make changes to the graph and we are holding the
      * main AioContext lock, so iterating bdrv_next_all_states() is safe. */
     while ((bs = bdrv_next_all_states(bs))) {
-- 
1.8.3.1