From 6123c29fcf385010a683061fd7f948f256713b48 Mon Sep 17 00:00:00 2001
From: Kevin Wolf <kwolf@redhat.com>
Date: Fri, 17 May 2019 14:23:15 +0100
Subject: [PATCH 4/5] block: Fix invalidate_cache error path for parent
 activation

RH-Author: Kevin Wolf <kwolf@redhat.com>
Message-id: <20190517142315.16266-2-kwolf@redhat.com>
Patchwork-id: 88024
O-Subject: [RHEL-8.1 qemu-kvm PATCH 1/1] block: Fix invalidate_cache error path for parent activation
Bugzilla: 1673010
RH-Acked-by: John Snow <jsnow@redhat.com>
RH-Acked-by: Sergio Lopez Pascual <slp@redhat.com>
RH-Acked-by: Stefano Garzarella <sgarzare@redhat.com>

bdrv_co_invalidate_cache() clears the BDRV_O_INACTIVE flag before
actually activating a node so that the correct permissions etc. are
taken. In case of errors, the flag must be restored so that the next
call to bdrv_co_invalidate_cache() retries activation.

Restoring the flag was missing in the error path for a failed
parent->role->activate() call. The consequence is that this attempt to
activate all images correctly fails because we still set errp; however,
on the next attempt BDRV_O_INACTIVE is already clear, so we return
success without actually retrying the failed action.
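
As a self-contained illustration of the retry problem, here is a toy
model of the pattern being fixed (hypothetical names, not the QEMU
code; TOY_INACTIVE stands in for BDRV_O_INACTIVE):

    /* toy_invalidate.c -- toy model only, not QEMU code */
    #include <stdbool.h>
    #include <stdio.h>

    #define TOY_INACTIVE 1               /* stands in for BDRV_O_INACTIVE */

    static int open_flags = TOY_INACTIVE;
    static bool perms_available = false; /* the activation step that can fail */

    /* The flag is cleared before the fallible step, so every error path
     * must restore it; otherwise a later call sees the node as already
     * active and silently skips the retry. */
    static int toy_invalidate_cache(void)
    {
        if (!(open_flags & TOY_INACTIVE)) {
            return 0;                    /* looks active already: no-op */
        }
        open_flags &= ~TOY_INACTIVE;     /* cleared up front */
        if (!perms_available) {
            open_flags |= TOY_INACTIVE;  /* the fix: restore on failure */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        printf("first try:  %d\n", toy_invalidate_cache()); /* -1: fails */
        perms_available = true;          /* the conflicting writer goes away */
        printf("second try: %d\n", toy_invalidate_cache()); /* 0: retried */
        return 0;
    }

Dropping the restore line reproduces the bug: the second call would
return 0 without ever re-attempting the failed activation step.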

An example where this is observable in practice is migration to a QEMU
instance that has a raw format block node attached to a guest device
with share-rw=off (the default) while another process holds
BLK_PERM_WRITE for the same image. In this case, all activation steps
before parent->role->activate() succeed because raw can tolerate other
writers to the image. Only the parent callback (in particular
blk_root_activate()) tries to implement the share-rw=off property and
requests exclusive write permissions. This fails when the migration
completes and correctly displays an error. However, a manual 'cont' will
incorrectly resume the VM without calling blk_root_activate() again.
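
For illustration, a setup along the following lines exercises the
failing path (image path and port are placeholders; share-rw=off is
the default and is spelled out only for clarity):

    # another process keeps the image open for writing, e.g.:
    qemu-io /tmp/test.img

    # destination QEMU, waiting for the incoming migration:
    qemu-kvm -drive file=/tmp/test.img,format=raw,if=none,id=disk0 \
             -device virtio-blk-pci,drive=disk0,share-rw=off \
             -incoming tcp:0:4444

When the migration converges, blk_root_activate() cannot acquire
exclusive write permission and an error is reported; without this fix,
a subsequent 'cont' on the destination resumes the guest anyway.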

This case is described in more detail in the following bug report:
https://bugzilla.redhat.com/show_bug.cgi?id=1531888

Fix this by correctly restoring the BDRV_O_INACTIVE flag in the error
path.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 78fc3b3a26c145eebcdee992988644974b243a74)
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/block.c b/block.c
index d0f0dc6..82b16df 100644
--- a/block.c
+++ b/block.c
@@ -4417,6 +4417,7 @@ static void coroutine_fn bdrv_co_invalidate_cache(BlockDriverState *bs,
         if (parent->role->activate) {
             parent->role->activate(parent, &local_err);
             if (local_err) {
+                bs->open_flags |= BDRV_O_INACTIVE;
                 error_propagate(errp, local_err);
                 return;
             }
-- 
1.8.3.1