naccyde / rpms / systemd

Forked from rpms/systemd a year ago
From dd662fc39a28655b89619a828a15e5e457bf6f4c Mon Sep 17 00:00:00 2001
From: Michal Sekletar <msekleta@redhat.com>
Date: Thu, 25 Nov 2021 18:28:25 +0100
Subject: [PATCH] unit: add jobs that were skipped because of ratelimit back to
 run_queue

The assumption in edc027b was that a job we first skipped because of an
active ratelimit is still in run_queue, hence we trigger the queue and
dispatch it in the next iteration. Actually, we remove jobs from
run_queue in job_run_and_invalidate() before we call unit_start().
Hence, if we want to attempt to run the job again in the future, we
need to add it back to run_queue.

Fixes #21458

(cherry picked from commit c29e6a9530316823b0455cd83eb6d0bb8dd664f4)

Related: #2036608
---
 src/core/mount.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/core/mount.c b/src/core/mount.c
index c17154cde1..691b23ca74 100644
--- a/src/core/mount.c
+++ b/src/core/mount.c
@@ -1712,9 +1712,19 @@ static bool mount_is_mounted(Mount *m) {
 
 static int mount_on_ratelimit_expire(sd_event_source *s, void *userdata) {
         Manager *m = userdata;
+        Job *j;
+        Iterator i;
 
         assert(m);
 
+        /* Let's enqueue all start jobs that were previously skipped because of active ratelimit. */
+        HASHMAP_FOREACH(j, m->jobs, i) {
+                if (j->unit->type != UNIT_MOUNT)
+                        continue;
+
+                job_add_to_run_queue(j);
+        }
+
         /* By entering ratelimited state we made all mount start jobs not runnable, now rate limit is over so
          * let's make sure we dispatch them in the next iteration. */
         manager_trigger_run_queue(m);