SOURCES/kvm-migration-multifd-fix-destroyed-mutex-access-in-term.patch
From 2c14a6831954a59256cc8d1980da0ad705a3a3fa Mon Sep 17 00:00:00 2001
From: Juan Quintela <quintela@redhat.com>
Date: Tue, 3 Mar 2020 14:51:37 +0000
Subject: [PATCH 05/18] migration/multifd: fix destroyed mutex access in
 terminating multifd threads

RH-Author: Juan Quintela <quintela@redhat.com>
Message-id: <20200303145143.149290-5-quintela@redhat.com>
Patchwork-id: 94119
O-Subject: [RHEL-AV-8.2.0 qemu-kvm PATCH v2 04/10] migration/multifd: fix destroyed mutex access in terminating multifd threads
Bugzilla: 1738451
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Peter Xu <peterx@redhat.com>
RH-Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

From: Jiahui Cen <cenjiahui@huawei.com>
One multifd thread will lock all the other multifd threads' IOChannel mutexes
to inform them to quit by setting p->quit or shutting down p->c. In this
scenario, if some multifd threads have already been terminated and
multifd_load_cleanup/multifd_save_cleanup has destroyed their mutexes, trying
to lock those mutexes accesses already-destroyed mutexes.
Here is the coredump stack:
    #0  0x00007f81a2794437 in raise () from /usr/lib64/libc.so.6
    #1  0x00007f81a2795b28 in abort () from /usr/lib64/libc.so.6
    #2  0x00007f81a278d1b6 in __assert_fail_base () from /usr/lib64/libc.so.6
    #3  0x00007f81a278d262 in __assert_fail () from /usr/lib64/libc.so.6
    #4  0x000055eb1bfadbd3 in qemu_mutex_lock_impl (mutex=0x55eb1e2d1988, file=<optimized out>, line=<optimized out>) at util/qemu-thread-posix.c:64
    #5  0x000055eb1bb4564a in multifd_send_terminate_threads (err=<optimized out>) at migration/ram.c:1015
    #6  0x000055eb1bb4bb7f in multifd_send_thread (opaque=0x55eb1e2d19f8) at migration/ram.c:1171
    #7  0x000055eb1bfad628 in qemu_thread_start (args=0x55eb1e170450) at util/qemu-thread-posix.c:502
    #8  0x00007f81a2b36df5 in start_thread () from /usr/lib64/libpthread.so.0
    #9  0x00007f81a286048d in clone () from /usr/lib64/libc.so.6
To fix it up, let's destroy the mutexes only after all the other multifd
threads have been terminated.
Signed-off-by: Jiahui Cen <cenjiahui@huawei.com>
Signed-off-by: Ying Fang <fangying1@huawei.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 9560a48ecc0c20d87bc458a6db77fba651605819)
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 migration/ram.c | 8 ++++++++
 1 file changed, 8 insertions(+)
diff --git a/migration/ram.c b/migration/ram.c
index 860f781..6c55c5d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1052,6 +1052,10 @@ void multifd_save_cleanup(void)
         if (p->running) {
             qemu_thread_join(&p->thread);
         }
+    }
+    for (i = 0; i < migrate_multifd_channels(); i++) {
+        MultiFDSendParams *p = &multifd_send_state->params[i];
+
         socket_send_channel_destroy(p->c);
         p->c = NULL;
         qemu_mutex_destroy(&p->mutex);
@@ -1335,6 +1339,10 @@ int multifd_load_cleanup(Error **errp)
             qemu_sem_post(&p->sem_sync);
             qemu_thread_join(&p->thread);
         }
+    }
+    for (i = 0; i < migrate_multifd_channels(); i++) {
+        MultiFDRecvParams *p = &multifd_recv_state->params[i];
+
         object_unref(OBJECT(p->c));
         p->c = NULL;
         qemu_mutex_destroy(&p->mutex);
-- 
1.8.3.1