SOURCES/0006-imsm-update-metadata-correctly-while-raid10-double-d.patch

From d7a1fda2769ba272d89de6caeab35d52b73a9c3c Mon Sep 17 00:00:00 2001
From: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Date: Wed, 17 Oct 2018 12:11:41 +0200
Subject: [RHEL7.7 PATCH 06/24] imsm: update metadata correctly while raid10
 double degradation

Mdmon calls end_migration() when the map state changes from normal to
degraded. This is not valid in the raid10 double degradation case,
because it stops checkpointing while the array is still rebuilding.
In that case mdmon has to mark the map as degraded but keep recording
the recovery checkpoint in the metadata. The migration may be finished
only if the newly failed device is the rebuilding one.

Also catch the double degraded to degraded transition: the migration
is finished, but the map state does not change because the array is
still degraded.

Update failed_disk_num correctly. When a double degradation occurs,
the rebuild starts on the lowest failed slot, but this variable points
to the slot that failed first. If the second failure happens while the
rebuild is running, the variable must not be updated until the rebuild
has finished.

Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
---
 super-intel.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/super-intel.c b/super-intel.c
index 6438987..d2035cc 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -8136,7 +8136,8 @@ static int mark_failure(struct intel_super *super,
 			set_imsm_ord_tbl_ent(map2, slot2,
 					     idx | IMSM_ORD_REBUILD);
 	}
-	if (map->failed_disk_num == 0xff)
+	if (map->failed_disk_num == 0xff ||
+		(!is_rebuilding(dev) && map->failed_disk_num > slot))
 		map->failed_disk_num = slot;
 
 	clear_disk_badblocks(super->bbm_log, ord_to_idx(ord));
@@ -8558,13 +8559,25 @@ static void imsm_set_disk(struct active_array *a, int n, int state)
 			break;
 		}
 		if (is_rebuilding(dev)) {
-			dprintf_cont("while rebuilding.");
+			dprintf_cont("while rebuilding ");
 			if (map->map_state != map_state)  {
-				dprintf_cont(" Map state change");
-				end_migration(dev, super, map_state);
+				dprintf_cont("map state change ");
+				if (n == map->failed_disk_num) {
+					dprintf_cont("end migration");
+					end_migration(dev, super, map_state);
+				} else {
+					dprintf_cont("raid10 double degradation, map state change");
+					map->map_state = map_state;
+				}
 				super->updates_pending++;
-			} else if (!rebuild_done) {
+			} else if (!rebuild_done)
 				break;
+			else if (n == map->failed_disk_num) {
+				/* r10 double degraded to degraded transition */
+				dprintf_cont("raid10 double degradation end migration");
+				end_migration(dev, super, map_state);
+				a->last_checkpoint = 0;
+				super->updates_pending++;
 			}
 
 			/* check if recovery is really finished */
@@ -8575,7 +8588,7 @@ static void imsm_set_disk(struct active_array *a, int n, int state)
 				}
 			if (recovery_not_finished) {
 				dprintf_cont("\n");
-				dprintf("Rebuild has not finished yet, state not changed");
+				dprintf_cont("Rebuild has not finished yet, map state changes only if raid10 double degradation happens");
 				if (a->last_checkpoint < mdi->recovery_start) {
 					a->last_checkpoint =
 						mdi->recovery_start;
-- 
2.7.5
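
Below is a minimal, self-contained C sketch (not part of the patch above) of
how the reworked failed_disk_num rule from mark_failure() behaves under
double degradation. The names struct toy_map, toy_mark_failure() and
FAILED_DISK_NONE are invented for illustration; the real code in
super-intel.c operates on the imsm map and uses is_rebuilding(dev).

#include <stdio.h>
#include <stdbool.h>

#define FAILED_DISK_NONE 0xff

struct toy_map {
	unsigned int failed_disk_num;	/* lowest failed slot, or 0xff if none */
};

/*
 * Mirrors the condition added in mark_failure(): record the failed slot if
 * none is recorded yet, or if no rebuild is running and the new failure is
 * on a lower slot than the one currently recorded.
 */
static void toy_mark_failure(struct toy_map *map, unsigned int slot,
			     bool rebuilding)
{
	if (map->failed_disk_num == FAILED_DISK_NONE ||
	    (!rebuilding && map->failed_disk_num > slot))
		map->failed_disk_num = slot;
}

int main(void)
{
	struct toy_map map = { .failed_disk_num = FAILED_DISK_NONE };

	toy_mark_failure(&map, 3, false);	/* first failure on slot 3 */
	printf("after first failure:  %u\n", map.failed_disk_num);	/* 3 */

	toy_mark_failure(&map, 1, false);	/* double degradation before rebuild */
	printf("after second failure: %u\n", map.failed_disk_num);	/* 1 */

	toy_mark_failure(&map, 0, true);	/* failure during rebuild: unchanged */
	printf("during rebuild:       %u\n", map.failed_disk_num);	/* 1 */

	return 0;
}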