From 6baaf82a7742a1de9160146b08ba0cc86b3d4e79 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <>
Date: Wed, 10 Jan 2018 17:02:21 +0100
Subject: [PATCH 2/2] main-loop: Acquire main_context lock around
 os_host_main_loop_wait.

RH-Author: Paolo Bonzini <>
Message-id: <>
Patchwork-id: 78541
O-Subject: [RHEL7.5 qemu-kvm PATCH] main-loop: Acquire main_context lock around os_host_main_loop_wait.
Bugzilla: 1473536
RH-Acked-by: Jeffrey Cody <>
RH-Acked-by: John Snow <>
RH-Acked-by: Miroslav Rezanina <>

Bugzilla: 1473536

Brew build:

When running virt-rescue the serial console hangs from time to time.
Virt-rescue runs an ordinary Linux kernel "appliance", but there is
only a single idle process running inside, so the qemu main loop is
largely idle.  With virt-rescue >= 1.37 you may be able to observe the
hang by doing:

  $ virt-rescue -e ^] --scratch
  ><rescue> while true; do ls -l /usr/bin; done

The hang in virt-rescue can be resolved by pressing a key on the
serial console.

Possibly with the same root cause, we also observed hangs during very
early boot of regular Linux VMs with a serial console.  Those hangs
are extremely rare, but you may be able to observe them by running
this command on baremetal for a sufficiently long time:

  $ while libguestfs-test-tool -t 60 >& /tmp/log ; do echo -n . ; done

(Check in /tmp/log that the failure was caused by a hang during early
boot, and not some other reason)

During investigation of this bug, Paolo Bonzini wrote:

> glib is expecting QEMU to use g_main_context_acquire around accesses to
> GMainContext.  However QEMU is not doing that, instead it is taking its
> own mutex.  So we should add g_main_context_acquire and
> g_main_context_release in the two implementations of
> os_host_main_loop_wait; these should undo the effect of Frediano's
> glib patch.

This patch exactly implements Paolo's suggestion in that paragraph.

This fixes the serial console hang in my testing, across 3 different
physical machines (AMD, Intel Core i7 and Intel Xeon), over many hours
of automated testing.  I wasn't able to reproduce the early boot hangs
(but as noted above, these are extremely rare in any case).

Reported-by: Richard W.M. Jones <>
Tested-by: Richard W.M. Jones <>
Signed-off-by: Richard W.M. Jones <>
Message-Id: <>
[Paolo: this is actually a glib bug: recent glib versions are also
expecting g_main_context_acquire around g_poll---but that is not
documented and probably not even intended].
Signed-off-by: Paolo Bonzini <>
(cherry picked from commit ecbddbb106114f90008024b4e6c3ba1c38d7ca0e)

Signed-off-by: Miroslav Rezanina <>
---
 main-loop.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/main-loop.c b/main-loop.c
index cf36645..a93d37b 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -192,9 +192,12 @@ static void glib_pollfds_poll(void)
 
 static int os_host_main_loop_wait(uint32_t timeout)
 {
+    GMainContext *context = g_main_context_default();
     int ret;
     static int spin_counter;
 
+    g_main_context_acquire(context);
+
     glib_pollfds_fill(&timeout);
 
     /* If the I/O thread is very busy or we are incorrectly busy waiting in
@@ -230,6 +233,9 @@ static int os_host_main_loop_wait(uint32_t timeout)
     }
 
     glib_pollfds_poll();
+
+    g_main_context_release(context);
+
     return ret;
 }
 
@@ -385,12 +391,15 @@ static int os_host_main_loop_wait(uint32_t timeout)
     fd_set rfds, wfds, xfds;
     int nfds;
 
+    g_main_context_acquire(context);
+
     /* XXX: need to suppress polling by better using win32 events */
     ret = 0;
     for (pe = first_polling_entry; pe != NULL; pe = pe->next) {
         ret |= pe->func(pe->opaque);
     }
     if (ret != 0) {
+        g_main_context_release(context);
         return ret;
     }
 
@@ -440,6 +449,8 @@ static int os_host_main_loop_wait(uint32_t timeout)
         pollfds_poll(gpollfds, nfds, &rfds, &wfds, &xfds);
     }
 
+    g_main_context_release(context);
+
     return select_ret || g_poll_ret;
 }