From 8521a431d3da3cc360eb8102eda1c0d649f1ecc3 Mon Sep 17 00:00:00 2001
Message-Id: <8521a431d3da3cc360eb8102eda1c0d649f1ecc3@dist-git>
From: Michal Privoznik <mprivozn@redhat.com>
Date: Wed, 7 Oct 2020 18:45:45 +0200
Subject: [PATCH] numa_conf: Properly check for caches in
 virDomainNumaDefValidate()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When adding support for HMAT, in f0611fe8830 I've introduced a
check which aims to validate /domain/cpu/numa/interconnects. As a
part of that, there is a loop which checks whether every
<latency/> with a @cache attribute refers to an existing cache
level. For instance:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This XML defines that accessing the L1 cache of node #0 from node
#0 has a latency of 5ns.

However, the check inside the loop was not written properly: it
always compared against the first cache of the target node and
never the rest. Therefore, the following example errors out:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='3' associativity='direct' policy='writeback'>
          <size value='10' unit='KiB'/>
          <line value='8' unit='B'/>
        </cache>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This errors out even though it is a valid configuration. The L1
cache under node #0 is still present.

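The change itself is a one-liner (see the diff below). As an
illustration only, here is a minimal, self-contained C sketch of
the buggy and the fixed lookup; the Cache struct and function
names are hypothetical stand-ins, the real type being
virDomainNumaCache in src/conf/numa_conf.c:

  #include <stdbool.h>
  #include <stddef.h>

  /* Hypothetical stand-in for virDomainNumaCache. */
  typedef struct { unsigned int level; } Cache;

  /* Buggy lookup: 'cache' always points at caches[0], so only the
   * first cache of the target node is ever compared. */
  static bool cacheLevelExistsBuggy(const Cache *caches, size_t ncaches,
                                    unsigned int wanted)
  {
      for (size_t j = 0; j < ncaches; j++) {
          const Cache *cache = caches;      /* never advances */
          if (cache->level == wanted)
              return true;
      }
      return false;
  }

  /* Fixed lookup, mirroring the patch: index with j. */
  static bool cacheLevelExistsFixed(const Cache *caches, size_t ncaches,
                                    unsigned int wanted)
  {
      for (size_t j = 0; j < ncaches; j++) {
          const Cache *cache = &caches[j];  /* walks the whole array */
          if (cache->level == wanted)
              return true;
      }
      return false;
  }

  int main(void)
  {
      /* Mirrors the failing XML above: L3 declared first, then L1. */
      const Cache caches[] = { { 3 }, { 1 } };

      /* The buggy lookup misses L1 (it only ever sees level 3);
       * the fixed one finds it. Exit 0 == behaves as described. */
      return (cacheLevelExistsFixed(caches, 2, 1) &&
              !cacheLevelExistsBuggy(caches, 2, 1)) ? 0 : 1;
  }

Note that the buggy variant even iterates the correct number of
times; it merely re-reads the first array element, which is why
configurations whose matching cache happens to be listed first
still validated fine.
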
Fixes: f0611fe8830
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
(cherry picked from commit e41ac71fca309b50e2c8e6ec142d8fe1280ca2ad)

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1749518

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-Id: <4bb47f9e97ca097cee1259449da4739b55753751.1602087923.git.mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
---
 src/conf/numa_conf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/conf/numa_conf.c b/src/conf/numa_conf.c
index 5a92eb35cc..a20398714e 100644
--- a/src/conf/numa_conf.c
+++ b/src/conf/numa_conf.c
@@ -1423,7 +1423,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
 
         if (l->cache > 0) {
             for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
-                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
+                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
 
                 if (l->cache == cache->level)
                     break;
-- 
2.29.2