SOURCES/kvm-virtio-net-don-t-handle-mq-request-in-userspace-hand.patch

From 521a1953bc11ab6823dcbbee773bcf86e926a9e7 Mon Sep 17 00:00:00 2001
From: Si-Wei Liu <si-wei.liu@oracle.com>
Date: Fri, 6 May 2022 19:28:18 -0700
Subject: [PATCH 14/16] virtio-net: don't handle mq request in userspace
 handler for vhost-vdpa
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

RH-Author: Jason Wang <jasowang@redhat.com>
RH-MergeRequest: 98: Multiqueue fixes for vhost-vDPA
RH-Commit: [7/7] 9781cab45448ae16a00fbf10cf7995df6b984a0a (jasowang/qemu-kvm-cs)
RH-Bugzilla: 2070804
RH-Acked-by: Eugenio Pérez <eperezma@redhat.com>
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>
RH-Acked-by: Cindy Lu <lulu@redhat.com>

virtio_queue_host_notifier_read() tends to read pending events
left behind on the ioeventfd in the vhost_net_stop() path, and
attempts to handle outstanding kicks from the userspace vq handler.
However, in the ctrl_vq handler, virtio_net_handle_mq() makes a
recursive call into virtio_net_set_status(), which may lead to a
segmentation fault, as shown in the stack trace below:

0  0x000055f800df1780 in qdev_get_parent_bus (dev=0x0) at ../hw/core/qdev.c:376
1  0x000055f800c68ad8 in virtio_bus_device_iommu_enabled (vdev=vdev@entry=0x0) at ../hw/virtio/virtio-bus.c:331
2  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>) at ../hw/virtio/vhost.c:318
3  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>, buffer=0x7fc19bec5240, len=2052, is_write=1, access_len=2052) at ../hw/virtio/vhost.c:336
4  0x000055f800d71867 in vhost_virtqueue_stop (dev=dev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590, vq=0x55f8037cceb0, idx=0) at ../hw/virtio/vhost.c:1241
5  0x000055f800d7406c in vhost_dev_stop (hdev=hdev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590) at ../hw/virtio/vhost.c:1839
6  0x000055f800bf00a7 in vhost_net_stop_one (net=0x55f8037ccc30, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:315
7  0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
   at ../hw/net/vhost_net.c:423
8  0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
9  0x000055f800d4e628 in virtio_net_set_status (vdev=vdev@entry=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
10 0x000055f800d534d8 in virtio_net_handle_ctrl (iov_cnt=<optimized out>, iov=<optimized out>, cmd=0 '\000', n=0x55f8044ec590) at ../hw/net/virtio-net.c:1408
11 0x000055f800d534d8 in virtio_net_handle_ctrl (vdev=0x55f8044ec590, vq=0x7fc1a7e888d0) at ../hw/net/virtio-net.c:1452
12 0x000055f800d69f37 in virtio_queue_host_notifier_read (vq=0x7fc1a7e888d0) at ../hw/virtio/virtio.c:2331
13 0x000055f800d69f37 in virtio_queue_host_notifier_read (n=n@entry=0x7fc1a7e8894c) at ../hw/virtio/virtio.c:3575
14 0x000055f800c688e6 in virtio_bus_cleanup_host_notifier (bus=<optimized out>, n=n@entry=14) at ../hw/virtio/virtio-bus.c:312
15 0x000055f800d73106 in vhost_dev_disable_notifiers (hdev=hdev@entry=0x55f8035b51b0, vdev=vdev@entry=0x55f8044ec590)
   at ../../../include/hw/virtio/virtio-bus.h:35
16 0x000055f800bf00b2 in vhost_net_stop_one (net=0x55f8035b51b0, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:316
17 0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1)
   at ../hw/net/vhost_net.c:423
18 0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
19 0x000055f800d4e628 in virtio_net_set_status (vdev=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
20 0x000055f800d6c4b2 in virtio_set_status (vdev=0x55f8044ec590, val=<optimized out>) at ../hw/virtio/virtio.c:1945
21 0x000055f800d11d9d in vm_state_notify (running=running@entry=false, state=state@entry=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:333
22 0x000055f800d04e7a in do_vm_stop (state=state@entry=RUN_STATE_SHUTDOWN, send_stop=send_stop@entry=false) at ../softmmu/cpus.c:262
23 0x000055f800d04e99 in vm_shutdown () at ../softmmu/cpus.c:280
24 0x000055f800d126af in qemu_cleanup () at ../softmmu/runstate.c:812
25 0x000055f800ad5b13 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:51

For now, temporarily disable handling of MQ requests from the ctrl_vq
userspace handler to avoid the recursive virtio_net_set_status()
call. Some rework is needed to allow changing the number of
queues without going through a full virtio_net_set_status cycle,
particularly for the vhost-vdpa backend.

This patch will need to be reverted as soon as future patches that
handle the change of #queues in userspace are merged.

Fixes: 402378407db ("vhost-vdpa: multiqueue support")
Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Message-Id: <1651890498-24478-8-git-send-email-si-wei.liu@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2a7888cc3aa31faee839fa5dddad354ff8941f4c)
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 hw/net/virtio-net.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index f0bb29c741..099e65036d 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -1381,6 +1381,7 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
 {
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
     uint16_t queue_pairs;
+    NetClientState *nc = qemu_get_queue(n->nic);
 
     virtio_net_disable_rss(n);
     if (cmd == VIRTIO_NET_CTRL_MQ_HASH_CONFIG) {
@@ -1412,6 +1413,18 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
         return VIRTIO_NET_ERR;
     }
 
+    /* Avoid changing the number of queue_pairs for vdpa device in
+     * userspace handler. A future fix is needed to handle the mq
+     * change in userspace handler with vhost-vdpa. Let's disable
+     * the mq handling from userspace for now and only allow get
+     * done through the kernel. Ripples may be seen when falling
+     * back to userspace, but without doing it qemu process would
+     * crash on a recursive entry to virtio_net_set_status().
+     */
+    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
+        return VIRTIO_NET_ERR;
+    }
+
     n->curr_queue_pairs = queue_pairs;
     /* stop the backend before changing the number of queue_pairs to avoid handling a
      * disabled queue */
-- 
2.31.1
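
Editor's note: for readers following the trace above, the re-entrancy boils down to virtio_net_set_status() stopping vhost, the notifier cleanup draining a pending ctrl_vq kick, and the resulting virtio_net_handle_mq() call re-entering virtio_net_set_status(). The standalone C sketch below is a hypothetical model of that cycle; the model_* names are inventions for illustration, not QEMU APIs, and the guard mirrors the early error return added by the hunk above.

/* Hypothetical, self-contained model of the re-entrancy described above;
 * the names below are illustrative stand-ins, not the real QEMU symbols. */
#include <stdbool.h>
#include <stdio.h>

static bool backend_is_vdpa = true;   /* pretend the peer is a vhost-vdpa client */
static bool pending_mq_kick = true;   /* a kick left behind on the ctrl_vq ioeventfd */

static void model_set_status(int depth);

/* Stands in for virtio_net_handle_mq(): with the guard in place, a vdpa
 * backend refuses to service the MQ command from the userspace handler. */
static int model_handle_mq(int depth)
{
    if (backend_is_vdpa) {
        return -1;                     /* VIRTIO_NET_ERR in the real code */
    }
    model_set_status(depth);           /* would re-enter set_status without the guard */
    return 0;
}

/* Stands in for virtio_bus_cleanup_host_notifier() draining the ioeventfd:
 * a pending kick calls back into the ctrl_vq handler. */
static void model_drain_ctrl_vq(int depth)
{
    if (pending_mq_kick) {
        pending_mq_kick = false;
        model_handle_mq(depth);
    }
}

/* Stands in for virtio_net_set_status() -> vhost_net_stop(). */
static void model_set_status(int depth)
{
    printf("set_status, nesting level %d\n", depth);
    model_drain_ctrl_vq(depth + 1);    /* the vhost_net_stop() path */
}

int main(void)
{
    model_set_status(0);               /* e.g. triggered from vm_shutdown() */
    return 0;
}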