From 8521a431d3da3cc360eb8102eda1c0d649f1ecc3 Mon Sep 17 00:00:00 2001
Message-Id: <8521a431d3da3cc360eb8102eda1c0d649f1ecc3@dist-git>
From: Michal Privoznik <mprivozn@redhat.com>
Date: Wed, 7 Oct 2020 18:45:45 +0200
Subject: [PATCH] numa_conf: Properly check for caches in
 virDomainNumaDefValidate()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

When adding support for HMAT in f0611fe8830, I introduced a
check which aims to validate /domain/cpu/numa/interconnects. As
part of that, there is a loop which checks whether every
<latency/> with a @cache attribute refers to an existing cache
level. For instance:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This XML defines that accessing the L1 cache of node #0 from
node #0 has a latency of 5ns.

However, the check inside the loop was not written properly: it
always compared against the first cache of the target node,
never the rest. Therefore, the following example errors out:

  <cpu mode='host-model' check='partial'>
    <numa>
      <cell id='0' cpus='0-5' memory='512000' unit='KiB' discard='yes'>
        <cache level='3' associativity='direct' policy='writeback'>
          <size value='10' unit='KiB'/>
          <line value='8' unit='B'/>
        </cache>
        <cache level='1' associativity='direct' policy='writeback'>
          <size value='8' unit='KiB'/>
          <line value='5' unit='B'/>
        </cache>
      </cell>
      <interconnects>
        <latency initiator='0' target='0' cache='1' type='access' value='5'/>
        <bandwidth initiator='0' target='0' type='access' value='204800' unit='KiB'/>
      </interconnects>
    </numa>
  </cpu>

This errors out even though it is a valid configuration: the L1
cache under node #0 is still present, it merely is not the first
cache of the cell.

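To make the fix easier to see, here is a minimal sketch of the
corrected lookup. The loop itself follows the diff below; the
error branch after the loop is a paraphrase of the surrounding
code, which is not visible in the hunk:

  size_t j;

  /* Walk *all* caches of the target node looking for a level
   * matching the <latency/> @cache attribute. The old code
   * looked at .caches (i.e. the first element) on every
   * iteration instead of .caches[j]. */
  for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
      const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];

      if (l->cache == cache->level)
          break;
  }

  /* Only when no cache level matched should validation fail
   * (error reporting omitted in this sketch). */
  if (j == def->mem_nodes[l->target].ncaches)
      return -1;
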
Fixes: f0611fe8830
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
(cherry picked from commit e41ac71fca309b50e2c8e6ec142d8fe1280ca2ad)

Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1749518

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-Id: <4bb47f9e97ca097cee1259449da4739b55753751.1602087923.git.mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
---
 src/conf/numa_conf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/conf/numa_conf.c b/src/conf/numa_conf.c
index 5a92eb35cc..a20398714e 100644
--- a/src/conf/numa_conf.c
+++ b/src/conf/numa_conf.c
@@ -1423,7 +1423,7 @@ virDomainNumaDefValidate(const virDomainNuma *def)
 
         if (l->cache > 0) {
             for (j = 0; j < def->mem_nodes[l->target].ncaches; j++) {
-                const virDomainNumaCache *cache = def->mem_nodes[l->target].caches;
+                const virDomainNumaCache *cache = &def->mem_nodes[l->target].caches[j];
 
                 if (l->cache == cache->level)
                     break;
-- 
2.29.2