From 8d5bef256e0b58bb6f45910d0e9e724da72e100c Mon Sep 17 00:00:00 2001
Message-Id: <8d5bef256e0b58bb6f45910d0e9e724da72e100c@dist-git>
From: Michal Privoznik <mprivozn@redhat.com>
Date: Wed, 26 Feb 2014 10:05:24 +0100
Subject: [PATCH] virNetDevVethCreate: Serialize callers

https://bugzilla.redhat.com/show_bug.cgi?id=1014604

Consider a dozen LXC domains, each of them with this type of interface:

    <interface type='network'>
      <mac address='52:54:00:a7:05:4b'/>
      <source network='default'/>
    </interface>

When starting these domains in parallel, all workers may meet in
virNetDevVethCreate(), where a race starts over allocating veth pairs,
because the allocation requires two steps:

  1) find the first nonexistent '/sys/class/net/vnet%d/'
  2) run the 'ip link add ...' command
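
For illustration, a minimal C sketch of the racy pattern (the helper
name is hypothetical, not the actual libvirt code): the window between
step 1 and step 2 is where two threads can both pick the same index N.

    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical sketch of step 1: probe sysfs for the first free
     * vnet%d index. Nothing reserves the index, so it can be taken by
     * somebody else before 'ip link add' (step 2) runs. */
    static int pickFreeVethIndex(void)
    {
        int i;
        char path[64];

        for (i = 0; ; i++) {
            snprintf(path, sizeof(path), "/sys/class/net/vnet%d", i);
            if (access(path, F_OK) < 0)
                return i;   /* looks free now, but only for now */
        }
    }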

Now consider two threads. Both of them find N as the first unused veth
index, but only one of them succeeds in allocating it; the other one
fails. For such cases, we run the allocation in a loop with 10 rounds.
However, this is a very flaky form of synchronization: it is better
suited to libvirt competing with another process than to libvirt
threads fighting each other. Therefore, we should internally use a
mutex to serialize callers, and keep doing the allocation in a loop
(just in case we are competing with a different process). We have had
something similar since 1cf97c87.
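
A hedged sketch of that approach, with plain pthreads standing in for
libvirt's virMutex/VIR_ONCE_GLOBAL_INIT wrappers (the helpers are
hypothetical; the real change is in the diff below):

    #include <pthread.h>

    static pthread_mutex_t vethCreateLock = PTHREAD_MUTEX_INITIALIZER;

    static int pickFreeVethIndex(void);   /* step 1, sketched above */
    static int runIpLinkAdd(int idx);     /* step 2, hypothetical wrapper */

    static int createVethPair(void)
    {
        int ret = -1;
        int i;

        /* Serialize libvirt's own threads; keep the short retry loop
         * in case an outside process grabs the index we just probed. */
        pthread_mutex_lock(&vethCreateLock);
        for (i = 0; i < 10; i++) {
            if (runIpLinkAdd(pickFreeVethIndex()) == 0) {
                ret = 0;
                break;
            }
        }
        pthread_mutex_unlock(&vethCreateLock);
        return ret;
    }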

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
(cherry picked from commit c0d162c68c2f19af8d55a435a9e372da33857048)
Signed-off-by: Jiri Denemark <jdenemar@redhat.com>
---
 src/util/virnetdevveth.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/src/util/virnetdevveth.c b/src/util/virnetdevveth.c
index 25eb282..e698ce2 100644
--- a/src/util/virnetdevveth.c
+++ b/src/util/virnetdevveth.c
@@ -39,6 +39,19 @@
 
 /* Functions */
 
+virMutex virNetDevVethCreateMutex;
+
+static int virNetDevVethCreateMutexOnceInit(void)
+{
+    if (virMutexInit(&virNetDevVethCreateMutex) < 0) {
+        virReportSystemError(errno, "%s", _("unable to init mutex"));
+        return -1;
+    }
+    return 0;
+}
+
+VIR_ONCE_GLOBAL_INIT(virNetDevVethCreateMutex);
+
 static int virNetDevVethExists(int devNum)
 {
     int ret;
@@ -117,6 +130,10 @@ int virNetDevVethCreate(char** veth1, char** veth2)
      * We might race with other containers, but this is reasonably
      * unlikely, so don't do too many retries for device creation
      */
+    if (virNetDevVethCreateMutexInitialize() < 0)
+        return -1;
+
+    virMutexLock(&virNetDevVethCreateMutex);
 #define MAX_VETH_RETRIES 10
 
     for (i = 0; i < MAX_VETH_RETRIES; i++) {
@@ -179,6 +196,7 @@ int virNetDevVethCreate(char** veth1, char** veth2)
                    MAX_VETH_RETRIES);
 
 cleanup:
+    virMutexUnlock(&virNetDevVethCreateMutex);
     virCommandFree(cmd);
     VIR_FREE(veth1auto);
     VIR_FREE(veth2auto);
-- 
1.9.0