From 45958d24d6c9bcdc333844f69f47cacec29cfa0e Mon Sep 17 00:00:00 2001
From: Max Reitz <mreitz@redhat.com>
Date: Mon, 4 Nov 2013 22:32:26 +0100
Subject: [PATCH 33/87] qcow2: Switch L1 table in a single sequence

RH-Author: Max Reitz <mreitz@redhat.com>
Message-id: <1383604354-12743-36-git-send-email-mreitz@redhat.com>
Patchwork-id: 55335
O-Subject: [RHEL-7.0 qemu-kvm PATCH 35/43] qcow2: Switch L1 table in a single sequence
Bugzilla: 1004347
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>
RH-Acked-by: Laszlo Ersek <lersek@redhat.com>
RH-Acked-by: Fam Zheng <famz@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>

BZ: 1004347
Switching the L1 table in memory should be an atomic operation, as far
as possible. Calling qcow2_free_clusters on the old L1 table on disk is
not a good idea when the old L1 table is no longer valid and the address
to the new one hasn't yet been written into the corresponding
BDRVQcowState field. To be more specific, this can lead to segfaults due
to qcow2_check_metadata_overlap trying to access the L1 table during the
free operation.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit fda74f826baec78d685e5a87fd8a95bfb7bb2243)

Signed-off-by: Max Reitz <mreitz@redhat.com>
---
 block/qcow2-cluster.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
 block/qcow2-cluster.c |    7 +++++--
 1 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 2d5aa92..c05f182 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -35,6 +35,7 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
     BDRVQcowState *s = bs->opaque;
     int new_l1_size2, ret, i;
     uint64_t *new_l1_table;
+    int64_t old_l1_table_offset, old_l1_size;
     int64_t new_l1_table_offset, new_l1_size;
     uint8_t data[12];
 
@@ -106,11 +107,13 @@ int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
         goto fail;
     }
     g_free(s->l1_table);
-    qcow2_free_clusters(bs, s->l1_table_offset, s->l1_size * sizeof(uint64_t),
-                        QCOW2_DISCARD_OTHER);
+    old_l1_table_offset = s->l1_table_offset;
     s->l1_table_offset = new_l1_table_offset;
     s->l1_table = new_l1_table;
+    old_l1_size = s->l1_size;
     s->l1_size = new_l1_size;
+    qcow2_free_clusters(bs, old_l1_table_offset, old_l1_size * sizeof(uint64_t),
+                        QCOW2_DISCARD_OTHER);
     return 0;
  fail:
     g_free(new_l1_table);
-- 
1.7.1
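The ordering the patch establishes can be modeled outside QEMU as a minimal C sketch. The struct, the stub, and `switch_l1_table` below are hypothetical stand-ins, not the real QEMU definitions: the stub's assertion plays the role of qcow2_check_metadata_overlap reading the in-memory L1 fields from inside the free path, which is exactly why every field must already describe the new table before the old clusters are freed.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for BDRVQcowState, reduced to the three fields
 * the patch reorders around. */
typedef struct {
    uint64_t *l1_table;
    int64_t   l1_table_offset;
    int64_t   l1_size;
} MiniQcowState;

/* Stand-in for qcow2_free_clusters(): the real function can reach the
 * metadata-overlap check, which reads s->l1_table_offset and s->l1_size.
 * The assertion models the invariant the patch restores: by the time the
 * old clusters are freed, the in-memory fields must already point at the
 * new table, never at the one being freed. */
static void free_clusters_stub(MiniQcowState *s, int64_t offset, int64_t size)
{
    assert(offset != s->l1_table_offset);
    (void)size;
}

/* Mirrors the patched sequence in qcow2_grow_l1_table(): save the old
 * location and size, switch all in-memory fields, and only then free. */
int switch_l1_table(MiniQcowState *s, uint64_t *new_table,
                    int64_t new_offset, int64_t new_size)
{
    int64_t old_l1_table_offset = s->l1_table_offset;
    int64_t old_l1_size = s->l1_size;

    free(s->l1_table);
    s->l1_table_offset = new_offset;
    s->l1_table = new_table;
    s->l1_size = new_size;

    /* Freed last: any overlap check running here sees only the new state. */
    free_clusters_stub(s, old_l1_table_offset,
                       old_l1_size * (int64_t)sizeof(uint64_t));
    return 0;
}
```

Moving the two `qcow2_free_clusters` lines below the field assignments is the whole fix; the `old_*` temporaries exist only because the original arguments read `s->l1_table_offset` and `s->l1_size`, which are overwritten before the call.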