From 30bdfc5373eab96cb1f3d62ab90b07becd885272 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Wed, 10 Oct 2018 20:22:07 +0100
Subject: [PATCH 41/49] block: Remove aio_poll() in bdrv_drain_poll variants

RH-Author: Kevin Wolf <kwolf@redhat.com>
Message-id: <20181010202213.7372-29-kwolf@redhat.com>
Patchwork-id: 82619
O-Subject: [RHEL-8 qemu-kvm PATCH 38/44] block: Remove aio_poll() in bdrv_drain_poll variants
Bugzilla: 1637976
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: John Snow <jsnow@redhat.com>
RH-Acked-by: Thomas Huth <thuth@redhat.com>

bdrv_drain_poll_top_level() was buggy because it didn't release the
AioContext lock of the node to be drained before calling aio_poll().
This way, callbacks called by aio_poll() would possibly take the lock a
second time and run into a deadlock with a nested AIO_WAIT_WHILE() call.
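As a rough standalone illustration (plain pthreads rather than QEMU's
AioContext code, with a made-up nested_wait_callback() standing in for
such a callback), the shape of that self-deadlock is:

    /* Illustration only -- not part of this patch.  Build: cc -pthread file.c
     * A dispatcher that keeps holding the lock while it runs callbacks
     * deadlocks as soon as a callback needs the same lock again. */
    #define _XOPEN_SOURCE 700
    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t ctx_lock;

    /* Stand-in for a BH that enters a nested wait loop and therefore
     * has to take the context lock itself. */
    static void nested_wait_callback(void)
    {
        int err = pthread_mutex_lock(&ctx_lock);    /* second acquisition */
        if (err == EDEADLK) {
            printf("callback cannot take the lock: deadlock\n");
            return;
        }
        pthread_mutex_unlock(&ctx_lock);
    }

    int main(void)
    {
        pthread_mutexattr_t attr;

        /* Error-checking mutex so the self-deadlock is reported instead
         * of hanging the process. */
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&ctx_lock, &attr);

        pthread_mutex_lock(&ctx_lock);    /* caller already holds the lock */
        nested_wait_callback();           /* dispatched without dropping it */
        pthread_mutex_unlock(&ctx_lock);
        return 0;
    }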

However, it turns out that the aio_poll() call isn't actually needed any
more. It was introduced in commit 91af091f923, which is effectively
reverted by this patch. The cases it was supposed to fix are now covered
by bdrv_drain_poll(), which waits for block jobs to reach a quiescent
state.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 4cf077b59fc73eec29f8b7d082919dbb278bdc86)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/io.c | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/block/io.c b/block/io.c
index 19db35e..3313958 100644
--- a/block/io.c
+++ b/block/io.c
@@ -266,10 +266,6 @@ bool bdrv_drain_poll(BlockDriverState *bs, bool recursive,
 static bool bdrv_drain_poll_top_level(BlockDriverState *bs, bool recursive,
                                       BdrvChild *ignore_parent)
 {
-    /* Execute pending BHs first and check everything else only after the BHs
-     * have executed. */
-    while (aio_poll(bs->aio_context, false));
-
     return bdrv_drain_poll(bs, recursive, ignore_parent, false);
 }
 
@@ -509,10 +505,6 @@ static bool bdrv_drain_all_poll(void)
     BlockDriverState *bs = NULL;
     bool result = false;
 
-    /* Execute pending BHs first (may modify the graph) and check everything
-     * else only after the BHs have executed. */
-    while (aio_poll(qemu_get_aio_context(), false));
-
     /* bdrv_drain_poll() can't make changes to the graph and we are holding the
      * main AioContext lock, so iterating bdrv_next_all_states() is safe. */
     while ((bs = bdrv_next_all_states(bs))) {
-- 
1.8.3.1