From e145e366df21d291ba3cbcf2b4982598637bcc01 Mon Sep 17 00:00:00 2001
From: Laurent Vivier
Date: Wed, 2 Jan 2019 11:29:47 +0100
Subject: [PATCH 1/8] spapr: Fix ibm,max-associativity-domains property number
 of nodes

RH-Author: Laurent Vivier
Message-id: <20190102112948.18536-2-lvivier@redhat.com>
Patchwork-id: 83820
O-Subject: [RHEL-7.6 qemu-kvm-rhev PATCH 1/2] spapr: Fix ibm,max-associativity-domains property number of nodes
Bugzilla: 1626347
RH-Acked-by: Thomas Huth
RH-Acked-by: Serhii Popovych
RH-Acked-by: David Gibson

From: Serhii Popovych

BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1626347

Laurent Vivier reported an off-by-one error: the maximum number of NUMA
nodes provided by qemu-kvm is one less than required by the description
of the "ibm,max-associativity-domains" property in LoPAPR.

It appears that I misread the LoPAPR description of this property,
treating it as the last valid domain (NUMA node here) rather than the
maximum number of domains.

### Before hot-add

(qemu) info numa
3 nodes
node 0 cpus: 0
node 0 size: 0 MB
node 0 plugged: 0 MB
node 1 cpus:
node 1 size: 1024 MB
node 1 plugged: 0 MB
node 2 cpus:
node 2 size: 0 MB
node 2 plugged: 0 MB

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0
node 0 size: 0 MB
node 0 free: 0 MB
node 1 cpus:
node 1 size: 999 MB
node 1 free: 658 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10

### Hot-add

(qemu) object_add memory-backend-ram,id=mem0,size=1G
(qemu) device_add pc-dimm,id=dimm1,memdev=mem0,node=2
(qemu) [ 87.704898] pseries-hotplug-mem: Attempting to hot-add 4 ...
[ 87.705128] lpar: Attempting to resize HPT to shift 21 ...

### After hot-add

(qemu) info numa
3 nodes
node 0 cpus: 0
node 0 size: 0 MB
node 0 plugged: 0 MB
node 1 cpus:
node 1 size: 1024 MB
node 1 plugged: 0 MB
node 2 cpus:
node 2 size: 1024 MB
node 2 plugged: 1024 MB

$ numactl -H
available: 2 nodes (0-1)
^^^^^^^^^^^^^^^^^^^^^^^^
Still only two nodes (and memory hot-added to node 0 below)
node 0 cpus: 0
node 0 size: 1024 MB
node 0 free: 1021 MB
node 1 cpus:
node 1 size: 999 MB
node 1 free: 658 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10

With the fix applied, numactl(8) reports 3 nodes available and the
memory plugged into node 2, as expected.

From David Gibson:
------------------
Qemu makes a distinction between "non NUMA" (nb_numa_nodes == 0) and
"NUMA with one node" (nb_numa_nodes == 1). But from a PAPR guest's
point of view these are equivalent. I don't want to present two
different cases to the guest when we don't need to, so even though the
guest can handle it, I'd prefer we put a '1' here for both the
nb_numa_nodes == 0 and nb_numa_nodes == 1 case.

This consolidates everything discussed previously on the mailing list.

Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
Reported-by: Laurent Vivier
Signed-off-by: Serhii Popovych
Signed-off-by: David Gibson
Reviewed-by: Greg Kurz
Reviewed-by: Laurent Vivier
(cherry picked from commit 3908a24fcb83913079d315de0ca6d598e8616dbb)
Signed-off-by: Laurent Vivier
Signed-off-by: Miroslav Rezanina
---
 hw/ppc/spapr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 5f26aea..b49f377 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -915,7 +915,7 @@ static void spapr_dt_rtas(sPAPRMachineState *spapr, void *fdt)
         cpu_to_be32(0),
         cpu_to_be32(0),
         cpu_to_be32(0),
-        cpu_to_be32(nb_numa_nodes ? nb_numa_nodes - 1 : 0),
+        cpu_to_be32(nb_numa_nodes ? nb_numa_nodes : 1),
     };
 
     _FDT(rtas = fdt_add_subnode(fdt, 0, "rtas"));
-- 
1.8.3.1
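
For illustration (not part of the patch): a minimal standalone sketch
contrasting the old and fixed expressions for the last cell of the
"ibm,max-associativity-domains" property. The helper names are
hypothetical, and values are shown in host byte order, leaving out the
cpu_to_be32() conversion that spapr_dt_rtas() applies when building the
property.

#include <stdio.h>

/* Old expression: highest valid node index, hence off by one. */
static unsigned last_cell_old(unsigned nb_numa_nodes)
{
    return nb_numa_nodes ? nb_numa_nodes - 1 : 0;
}

/* Fixed expression: number of domains; non-NUMA (0 nodes) and
 * single-node NUMA both present one domain to the guest. */
static unsigned last_cell_new(unsigned nb_numa_nodes)
{
    return nb_numa_nodes ? nb_numa_nodes : 1;
}

int main(void)
{
    for (unsigned n = 0; n <= 3; n++) {
        printf("nb_numa_nodes=%u  old=%u  new=%u\n",
               n, last_cell_old(n), last_cell_new(n));
    }
    return 0;
}

With the 3-node guest from the transcripts above, the old expression
advertises 2 domains, matching the "available: 2 nodes (0-1)" reported
by numactl(8) before the fix; the fixed expression advertises 3, so the
guest can online node 2.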