From 7a3d1bf135b9ef3d9c26c5e48e9200efe932b555 Mon Sep 17 00:00:00 2001
From: Artem Savkov <asavkov@redhat.com>
Date: Fri, 19 Jun 2020 11:30:09 +0200
Subject: [KPATCH RHEL-8.2 v4] kpatch adapted patch for CVE-2020-10767 and CVE-2020-10768

Kernels:
4.18.0-193.el8
4.18.0-193.1.2.el8_2
4.18.0-193.6.3.el8_2

Changes since last build:
arches: x86_64
bugs.o: changed function: arch_prctl_spec_ctrl_get
bugs.o: changed function: arch_prctl_spec_ctrl_set
bugs.o: changed function: arch_seccomp_spec_mitigate
bugs.o: changed function: ib_prctl_set.part.1
bugs.o: new function: kpatch_cve_2020_10767_pre_patch_callback
---------------------------

Kernels:
4.18.0-193.el8
4.18.0-193.1.2.el8_2
4.18.0-193.6.3.el8_2

Modifications:
 - Dropped SPECTRE_V2_USER_STRICT_PREFERRED support as it is not part
   of the CVE fix and was only ported so that the patches apply cleanly.
 - Added a pre-patch callback that initializes spectre_v2_user_ibpb
   from scratch, parsing the command line and checking CPU capabilities.
 - spectre_v2_user_stibp is not renamed and is left as spectre_v2_user
   to limit the footprint of the patch.

Testing: reproducer provided by security team
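
For illustration only (this is not the security team's reproducer), a minimal
userspace sketch of the CVE-2020-10768 behaviour the fix enforces: once indirect
branch speculation has been force-disabled via prctl(), a later PR_SPEC_ENABLE
should fail with EPERM and PR_GET_SPECULATION_CTRL should keep reporting the
force-disabled state. The PR_SPEC_* values come from linux/prctl.h (via
sys/prctl.h); the fallback define is only for older userspace headers.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SPEC_INDIRECT_BRANCH
    #define PR_SPEC_INDIRECT_BRANCH 1   /* fallback for older userspace headers */
    #endif

    int main(void)
    {
        /* Force-disable indirect branch speculation for this task. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                  PR_SPEC_FORCE_DISABLE, 0, 0))
            perror("PR_SPEC_FORCE_DISABLE");

        /* With the fix applied, undoing the force-disable must fail. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
                  PR_SPEC_ENABLE, 0, 0) == 0)
            printf("BAD: force-disable could be undone\n");
        else if (errno == EPERM)
            printf("OK: PR_SPEC_ENABLE rejected with EPERM\n");

        /* The reported state must still include PR_SPEC_FORCE_DISABLE. */
        printf("indirect branch ctrl: 0x%x\n",
               prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0));
        return 0;
    }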
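
Similarly, an illustrative sketch for the CVE-2020-10767 side (again, not the
reproducer): installing any seccomp filter without SECCOMP_FILTER_FLAG_SPEC_ALLOW
goes through arch_seccomp_spec_mitigate(), so with the default seccomp mode of
spectre_v2_user and IBPB available, PR_GET_SPECULATION_CTRL should afterwards
report force-disabled indirect branch speculation even on CPUs without STIBP or
with enhanced IBRS.

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    #ifndef PR_SPEC_INDIRECT_BRANCH
    #define PR_SPEC_INDIRECT_BRANCH 1   /* fallback for older userspace headers */
    #endif

    int main(void)
    {
        /* Minimal allow-everything filter; installing any seccomp filter is
         * what triggers the kernel's seccomp speculation mitigation. */
        struct sock_filter allow = BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW);
        struct sock_fprog prog = { .len = 1, .filter = &allow };

        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog, 0, 0))
            perror("PR_SET_SECCOMP");

        /* Expected post-patch: PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE (0x9). */
        printf("indirect branch ctrl after seccomp: 0x%x\n",
               prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0));
        return 0;
    }
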
commit eb2ce6c5ce9269a32474955bb0934359801c83fa
Author: Waiman Long <longman@redhat.com>
Date:   Tue Jun 16 19:01:42 2020 -0400

    [x86] x86/speculation: PR_SPEC_FORCE_DISABLE enforcement for indirect branches

    Message-id: <20200616190142.5674-6-longman@redhat.com>
    Patchwork-id: 320367
    Patchwork-instance: patchwork
    O-Subject: [RHEL8.2.z PATCH 5/5] x86/speculation: PR_SPEC_FORCE_DISABLE enforcement for indirect branches.
    Bugzilla: 1847396
    Z-Bugzilla: 1847395
    CVE: CVE-2020-10768
    RH-Acked-by: Rafael Aquini <aquini@redhat.com>
    RH-Acked-by: Phil Auld <pauld@redhat.com>

    Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1847395
    CVE: CVE-2020-10768

    commit 4d8df8cbb9156b0a0ab3f802b80cb5db57acc0bf
    Author: Anthony Steinhauser <asteinhauser@google.com>
    Date:   Sun, 7 Jun 2020 05:44:19 -0700

        x86/speculation: PR_SPEC_FORCE_DISABLE enforcement for indirect branches.

        Currently, it is possible to enable indirect branch speculation even after
        it was force-disabled using the PR_SPEC_FORCE_DISABLE option. Moreover, the
        PR_GET_SPECULATION_CTRL command afterwards gives an incorrect result
        (force-disabled when it is in fact enabled). This is also inconsistent
        with STIBP and the documentation, which clearly states that
        PR_SPEC_FORCE_DISABLE cannot be undone.

        Fix this by actually enforcing force-disabled indirect branch
        speculation. PR_SPEC_ENABLE called after PR_SPEC_FORCE_DISABLE now fails
        with -EPERM as described in the documentation.

        Fixes: 9137bb27e60e ("x86/speculation: Add prctl() control for indirect branch speculation")
        Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
        Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
        Cc: stable@vger.kernel.org

    Signed-off-by: Waiman Long <longman@redhat.com>
    Signed-off-by: Bruno Meneguele <bmeneg@redhat.com>

commit b7620b4e8bfb015bf494d47152751b8f07b5b215
Author: Waiman Long <longman@redhat.com>
Date:   Tue Jun 16 19:01:40 2020 -0400

    [x86] x86/speculation: Avoid force-disabling IBPB based on STIBP and enhanced IBRS

    Message-id: <20200616190142.5674-4-longman@redhat.com>
    Patchwork-id: 320366
    Patchwork-instance: patchwork
    O-Subject: [RHEL8.2.z PATCH 3/5] x86/speculation: Avoid force-disabling IBPB based on STIBP and enhanced IBRS.
    Bugzilla: 1847379
    Z-Bugzilla: 1847378
    CVE: CVE-2020-10767
    RH-Acked-by: Rafael Aquini <aquini@redhat.com>
    RH-Acked-by: Phil Auld <pauld@redhat.com>

    Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1847378
    CVE: CVE-2020-10767
    Conflicts: There is a minor fuzz in bugs.c due to missing the upstream
               commit e4f358916d52 ("x86, modpost: Replace last remnants
               of RETPOLINE with CONFIG_RETPOLINE"). However, backporting
               this commit at this stage would cause unexpected retpoline
               warnings when loading 3rd-party modules.

    commit 21998a351512eba4ed5969006f0c55882d995ada
    Author: Anthony Steinhauser <asteinhauser@google.com>
    Date:   Tue, 19 May 2020 06:40:42 -0700

        x86/speculation: Avoid force-disabling IBPB based on STIBP and enhanced IBRS.

        When STIBP is unavailable or enhanced IBRS is available, Linux
        force-disables the IBPB mitigation of Spectre-BTB even when simultaneous
        multithreading is disabled. While attempts to enable IBPB using
        prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, ...) fail with
        EPERM, the seccomp syscall (or its prctl(PR_SET_SECCOMP, ...) equivalent),
        which is used e.g. by Chromium or OpenSSH, succeeds with no errors, but
        the application remains silently vulnerable to cross-process Spectre v2
        attacks (classical BTB poisoning). At the same time, the sysfs reporting
        (/sys/devices/system/cpu/vulnerabilities/spectre_v2) displays that IBPB is
        conditionally enabled when in fact it is unconditionally disabled.

        STIBP is useful only when SMT is enabled. When SMT is disabled and STIBP is
        unavailable, it makes no sense to also force-disable IBPB, because IBPB
        protects against cross-process Spectre-BTB attacks regardless of the SMT
        state. At the same time, since missing STIBP has only been observed on AMD
        CPUs, and AMD recommends using IBPB rather than STIBP, disabling IBPB
        because of missing STIBP goes directly against AMD's advice:
        https://developer.amd.com/wp-content/resources/Architecture_Guidelines_Update_Indirect_Branch_Control.pdf

        Similarly, enhanced IBRS is designed to protect against cross-core BTB
        poisoning and against BTB-poisoning attacks from user space against the
        kernel (and from a guest against the hypervisor); it is not designed to
        prevent cross-process (or cross-VM) BTB poisoning between processes (or
        VMs) running on the same core. Therefore, even with enhanced IBRS it is
        necessary to flush the BTB during context switches, so there is no reason
        to force-disable IBPB when enhanced IBRS is available.

        Enable the prctl control of IBPB even when STIBP is unavailable or enhanced
        IBRS is available.

        Fixes: 7cc765a67d8e ("x86/speculation: Enable prctl mode for spectre_v2_user")
        Signed-off-by: Anthony Steinhauser <asteinhauser@google.com>
        Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
        Cc: stable@vger.kernel.org

    Signed-off-by: Waiman Long <longman@redhat.com>
    Signed-off-by: Bruno Meneguele <bmeneg@redhat.com>

Signed-off-by: Artem Savkov <asavkov@redhat.com>
Acked-by: Julien Thierry <jthierry@redhat.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
---
 arch/x86/kernel/cpu/bugs.c | 158 +++++++++++++++++++++++++++++++++----
 1 file changed, 144 insertions(+), 14 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index 0316c7a04457..654c9a779caf 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -589,6 +589,9 @@ static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
 static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init =
 	SPECTRE_V2_USER_NONE;
 
+static enum spectre_v2_user_mitigation spectre_v2_user_ibpb =
+	SPECTRE_V2_USER_NONE;
+
 #ifdef RETPOLINE
 static bool spectre_v2_bad_module;
 
@@ -1291,13 +1294,17 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 {
 	switch (ctrl) {
 	case PR_SPEC_ENABLE:
-		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
+		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
+		    spectre_v2_user == SPECTRE_V2_USER_NONE)
 			return 0;
 		/*
 		 * Indirect branch speculation is always disabled in strict
-		 * mode.
+		 * mode. It can neither be enabled if it was force-disabled
+		 * by a previous prctl call.
 		 */
-		if (spectre_v2_user == SPECTRE_V2_USER_STRICT)
+		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
+		    spectre_v2_user == SPECTRE_V2_USER_STRICT ||
+		    task_spec_ib_force_disable(task))
 			return -EPERM;
 		task_clear_spec_ib_disable(task);
 		task_update_spec_tif(task);
@@ -1308,9 +1315,11 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 		 * Indirect branch speculation is always allowed when
 		 * mitigation is force disabled.
 		 */
-		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
+		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
+		    spectre_v2_user == SPECTRE_V2_USER_NONE)
 			return -EPERM;
-		if (spectre_v2_user == SPECTRE_V2_USER_STRICT)
+		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
+		    spectre_v2_user == SPECTRE_V2_USER_STRICT)
 			return 0;
 		task_set_spec_ib_disable(task);
 		if (ctrl == PR_SPEC_FORCE_DISABLE)
@@ -1341,7 +1350,8 @@ void arch_seccomp_spec_mitigate(struct task_struct *task)
 {
 	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
 		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
-	if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP)
+	if (spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
+	    spectre_v2_user == SPECTRE_V2_USER_SECCOMP)
 		ib_prctl_set(task, PR_SPEC_FORCE_DISABLE);
 }
 #endif
@@ -1372,21 +1382,23 @@ static int ib_prctl_get(struct task_struct *task)
 	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
 		return PR_SPEC_NOT_AFFECTED;
 
-	switch (spectre_v2_user) {
-	case SPECTRE_V2_USER_NONE:
+	if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
+	    spectre_v2_user == SPECTRE_V2_USER_NONE)
 		return PR_SPEC_ENABLE;
-	case SPECTRE_V2_USER_PRCTL:
-	case SPECTRE_V2_USER_SECCOMP:
+	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
+	    spectre_v2_user == SPECTRE_V2_USER_STRICT)
+		return PR_SPEC_DISABLE;
+	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
+	    spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
+	    spectre_v2_user == SPECTRE_V2_USER_PRCTL ||
+	    spectre_v2_user == SPECTRE_V2_USER_SECCOMP) {
 		if (task_spec_ib_force_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
 		if (task_spec_ib_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
 		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
-	case SPECTRE_V2_USER_STRICT:
-		return PR_SPEC_DISABLE;
-	default:
+	} else
 		return PR_SPEC_NOT_AFFECTED;
-	}
 }
 
 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
@@ -1755,3 +1767,121 @@ ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *
 	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
 }
 #endif
+
+static inline bool kpatch_cve_2020_10767_match_option(const char *arg, int arglen, const char *opt)
+{
+	int len = strlen(opt);
+
+	return len == arglen && !strncmp(arg, opt, len);
+}
+
+static enum spectre_v2_mitigation_cmd kpatch_cve_2020_10767_spectre_v2_parse_cmdline(void)
+{
+	enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
+	char arg[20];
+	int ret, i;
+
+	if (cmdline_find_option_bool(boot_command_line, "nospectre_v2") ||
+	    cpu_mitigations_off())
+		return SPECTRE_V2_CMD_NONE;
+
+	ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_CMD_AUTO;
+
+	for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
+		if (!kpatch_cve_2020_10767_match_option(arg, ret, mitigation_options[i].option))
+			continue;
+		cmd = mitigation_options[i].cmd;
+		break;
+	}
+
+	if (i >= ARRAY_SIZE(mitigation_options)) {
+		return SPECTRE_V2_CMD_AUTO;
+	}
+
+	if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
+	     cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
+	     cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
+	    !IS_ENABLED(CONFIG_RETPOLINE)) {
+		return SPECTRE_V2_CMD_AUTO;
+	}
+
+	if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD &&
+	    boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
+		return SPECTRE_V2_CMD_AUTO;
+	}
+
+	return cmd;
+}
+
+static enum spectre_v2_user_cmd
+kpatch_cve_2020_10767_spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd)
+{
+	char arg[20];
+	int ret, i;
+
+	switch (v2_cmd) {
+	case SPECTRE_V2_CMD_NONE:
+		return SPECTRE_V2_USER_CMD_NONE;
+	case SPECTRE_V2_CMD_FORCE:
+		return SPECTRE_V2_USER_CMD_FORCE;
+	default:
+		break;
+	}
+
+	ret = cmdline_find_option(boot_command_line, "spectre_v2_user",
+				  arg, sizeof(arg));
+	if (ret < 0)
+		return SPECTRE_V2_USER_CMD_AUTO;
+
+	for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) {
+		if (kpatch_cve_2020_10767_match_option(arg, ret, v2_user_options[i].option)) {
+			return v2_user_options[i].cmd;
+		}
+	}
+
+	return SPECTRE_V2_USER_CMD_AUTO;
+}
+
+static void kpatch_cve_2020_10767_spectre_v2_user_select_mitigation(void)
+{
+	enum spectre_v2_mitigation_cmd v2_cmd = kpatch_cve_2020_10767_spectre_v2_parse_cmdline();
+	enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE;
+	enum spectre_v2_user_cmd cmd;
+
+	if (!boot_cpu_has(X86_FEATURE_IBPB))
+		return;
+
+	cmd = kpatch_cve_2020_10767_spectre_v2_parse_user_cmdline(v2_cmd);
+	switch (cmd) {
+	case SPECTRE_V2_USER_CMD_NONE:
+		return;
+	case SPECTRE_V2_USER_CMD_FORCE:
+		mode = SPECTRE_V2_USER_STRICT;
+		break;
+	case SPECTRE_V2_USER_CMD_PRCTL:
+	case SPECTRE_V2_USER_CMD_PRCTL_IBPB:
+		mode = SPECTRE_V2_USER_PRCTL;
+		break;
+	case SPECTRE_V2_USER_CMD_AUTO:
+	case SPECTRE_V2_USER_CMD_SECCOMP:
+	case SPECTRE_V2_USER_CMD_SECCOMP_IBPB:
+		if (IS_ENABLED(CONFIG_SECCOMP))
+			mode = SPECTRE_V2_USER_SECCOMP;
+		else
+			mode = SPECTRE_V2_USER_PRCTL;
+		break;
+	}
+
+	spectre_v2_user_ibpb = mode;
+}
+
+#include "kpatch-macros.h"
+
+static int kpatch_cve_2020_10767_pre_patch_callback(struct klp_object *obj)
+{
+	kpatch_cve_2020_10767_spectre_v2_user_select_mitigation();
+	return 0;
+}
+KPATCH_PRE_PATCH_CALLBACK(kpatch_cve_2020_10767_pre_patch_callback);
-- 
2.21.3