From bdd720aafc6099e126b682f70e4163e1b2097594 Mon Sep 17 00:00:00 2001
From: Alex Williamson <alex.williamson@redhat.com>
Date: Thu, 7 Aug 2014 21:03:14 +0200
Subject: [PATCH 06/10] vfio: Fix MSI-X vector expansion

Message-id: <20140807210314.11689.89693.stgit@gimli.home>
Patchwork-id: 60482
O-Subject: [RHEL7.0/z qemu-kvm PATCH v2 5/6] vfio: Fix MSI-X vector expansion
Bugzilla: 1110693 1110695
RH-Acked-by: Bandan Das <bsd@redhat.com>
RH-Acked-by: Amos Kong <akong@redhat.com>
RH-Acked-by: Laszlo Ersek <lersek@redhat.com>

Upstream: c048be5cc92ae201c339d46984476c4629275ed6

When new MSI-X vectors are enabled we need to disable MSI-X and
re-enable it with the correct number of vectors.  That means we need
to reprogram the eventfd triggers for each vector.  Prior to f4d45d47
vector->use tracked whether a vector was masked or unmasked and we
could always pick the KVM path when available for unmasked vectors.
Now vfio doesn't track mask state itself, and vector->use and virq
remain configured even for masked vectors.  Therefore we need to ask
the MSI-X code whether a vector is masked in order to select the
correct signaling path.  As noted in the comment, MSI relies on
hardware to handle masking.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org # QEMU 2.1
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
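
For readers skimming the diff, here is a minimal stand-alone paraphrase of
the path selection this patch introduces.  The enum, function name and
parameters are illustrative stand-ins, not QEMU API; the authoritative
logic is the hunk in vfio_enable_vectors() below.

    #include <stdbool.h>

    /* Which eventfd, if any, to hand to VFIO for a given vector. */
    typedef enum {
        PATH_NONE,  /* vector unused: fd stays -1 */
        PATH_QEMU,  /* fd of vector->interrupt (QEMU signals the guest) */
        PATH_KVM,   /* fd of vector->kvm_interrupt (KVM irqfd bypass) */
    } VectorPath;

    static VectorPath select_vector_path(bool in_use, int virq,
                                         bool msix, bool masked)
    {
        if (!in_use) {
            return PATH_NONE;
        }
        /*
         * MSI: the guest drives mask/pending bits in hardware, so the
         * KVM route is always usable once virq is valid.  MSI-X: masking
         * is emulated by QEMU, so a masked vector must take the QEMU
         * path even if a KVM route exists.
         */
        if (virq < 0 || (msix && masked)) {
            return PATH_QEMU;
        }
        return PATH_KVM;
    }
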
 hw/misc/vfio.c |   38 +++++++++++++++++++++++++++++---------
 1 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/hw/misc/vfio.c b/hw/misc/vfio.c
index bd37924..688e2ef 100644
--- a/hw/misc/vfio.c
+++ b/hw/misc/vfio.c
@@ -119,11 +119,20 @@ typedef struct VFIOINTx {
 } VFIOINTx;
 
 typedef struct VFIOMSIVector {
-    EventNotifier interrupt; /* eventfd triggered on interrupt */
-    EventNotifier kvm_interrupt; /* eventfd triggered for KVM irqfd bypass */
+    /*
+     * Two interrupt paths are configured per vector.  The first is only used
+     * for interrupts injected via QEMU.  This is typically the non-accel path,
+     * but may also be used when we want QEMU to handle masking and pending
+     * bits.  The KVM path bypasses QEMU and is therefore higher performance,
+     * but requires masking at the device.  virq is used to track the MSI route
+     * through KVM, thus kvm_interrupt is only available when virq is set to a
+     * valid (>= 0) value.
+     */
+    EventNotifier interrupt;
+    EventNotifier kvm_interrupt;
     struct VFIODevice *vdev; /* back pointer to device */
     MSIMessage msg; /* cache the MSI message so we know when it changes */
-    int virq; /* KVM irqchip route for QEMU bypass */
+    int virq;
     bool use;
 } VFIOMSIVector;
 
@@ -662,13 +671,24 @@ static int vfio_enable_vectors(VFIODevice *vdev, bool msix)
     fds = (int32_t *)&irq_set->data;
 
     for (i = 0; i < vdev->nr_vectors; i++) {
-        if (!vdev->msi_vectors[i].use) {
-            fds[i] = -1;
-        } else if (vdev->msi_vectors[i].virq >= 0) {
-            fds[i] = event_notifier_get_fd(&vdev->msi_vectors[i].kvm_interrupt);
-        } else {
-            fds[i] = event_notifier_get_fd(&vdev->msi_vectors[i].interrupt);
+        int fd = -1;
+
+        /*
+         * MSI vs MSI-X - The guest has direct access to MSI mask and pending
+         * bits, therefore we always use the KVM signaling path when setup.
+         * MSI-X mask and pending bits are emulated, so we want to use the
+         * KVM signaling path only when configured and unmasked.
+         */
+        if (vdev->msi_vectors[i].use) {
+            if (vdev->msi_vectors[i].virq < 0 ||
+                (msix && msix_is_masked(&vdev->pdev, i))) {
+                fd = event_notifier_get_fd(&vdev->msi_vectors[i].interrupt);
+            } else {
+                fd = event_notifier_get_fd(&vdev->msi_vectors[i].kvm_interrupt);
+            }
         }
+
+        fds[i] = fd;
     }
 
     ret = ioctl(vdev->fd, VFIO_DEVICE_SET_IRQS, irq_set);
-- 
1.7.1