From: Joe Lawrence <joe.lawrence@redhat.com>
Date: Tue,  6 Jul 2021 13:18:44 -0400
Subject: [kernel team] [EMBARGOED KPATCH 7.9] seq_file: kpatch fix for
	CVE-2021-33909

Kernels:
3.10.0-1160.el7
3.10.0-1160.2.1.el7
3.10.0-1160.2.2.el7
3.10.0-1160.6.1.el7
3.10.0-1160.11.1.el7
3.10.0-1160.15.2.el7
3.10.0-1160.21.1.el7
3.10.0-1160.24.1.el7
3.10.0-1160.25.1.el7
3.10.0-1160.31.1.el7

Changes since last build:
arches: x86_64 ppc64le
seq_file.o: changed function: seq_read
seq_file.o: changed function: single_open_size
seq_file.o: changed function: traverse
---------------------------

Kernels:
3.10.0-1160.el7
3.10.0-1160.2.1.el7
3.10.0-1160.2.2.el7
3.10.0-1160.6.1.el7
3.10.0-1160.11.1.el7
3.10.0-1160.15.2.el7
3.10.0-1160.21.1.el7
3.10.0-1160.24.1.el7
3.10.0-1160.25.1.el7
3.10.0-1160.31.1.el7

Modifications:
- inline PAGE_CACHE_MASK rather than including linux/pagemap.h and
  fighting kABI fallout (and potentially more inadvertent changes);
  see the sketch below
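
  A quick sketch (not part of the patch) of what the inlined mask yields
  on x86_64, where this kernel's fs.h defines MAX_RW_COUNT as
  (INT_MAX & PAGE_CACHE_MASK):

	/* illustration only -- x86_64, 4K pages */
	#define PAGE_CACHE_MASK	(~((1UL << 12)-1))   /* 0xfffffffffffff000 */
	/*
	 * MAX_RW_COUNT == INT_MAX & PAGE_CACHE_MASK == 0x7ffff000
	 * (just under 2 GiB), so any size that seq_buf_alloc() accepts
	 * is guaranteed to fit in an int.
	 */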

commit 1236d5dd5b9f13ccbb44979a5652a4b137b968a4
Author: Ian Kent <ikent@redhat.com>
Date:   Thu Jul 1 09:13:59 2021 +0800

    seq_file: Disallow extremely large seq buffer allocations

    Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1975251

    Brew build: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=37832573

    Testing: The patch has been tested by Qualys, who confirmed
             that it fixes the problem.

    Upstream status: RHEL only (CVE-2021-33909)

    Conflicts: include/linux/fs.h uses PAGE_CACHE_MASK in the definition
      of MAX_RW_COUNT, and that macro isn't available in fs/seq_file.c,
      while including linux/pagemap.h breaks kABI (it makes kABI aware of
      additional structs) even though no structures actually change. So
      the include needs to be added and excluded from the kABI
      calculation (see the guard sketch below).

    Author: Eric Sandeen <sandeen@redhat.com>

    seq_file: Disallow extremely large seq buffer allocations

    There is no reasonable need for a buffer larger than this
    (MAX_RW_COUNT), and it avoids int overflow pitfalls.

    Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
    Signed-off-by: Eric Sandeen <sandeen@redhat.com>

    Signed-off-by: Ian Kent <ikent@redhat.com>
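
The "added and excluded from the kABI calculation" above describes the
RHEL-only kernel commit, which presumably uses the usual __GENKSYMS__
guard; a minimal sketch of that pattern (assumed, not copied from the
actual commit):

	/* fs/seq_file.c: hide the include from genksyms so it does not
	 * influence kABI checksums; compiled code still sees pagemap.h */
	#ifndef __GENKSYMS__
	#include <linux/pagemap.h>
	#endif

The kpatch module sidesteps this entirely by inlining the mask values, as
noted under Modifications above.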

Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
Acked-by: Artem Savkov <asavkov@redhat.com>
Acked-by: Yannick Cote <ycote@redhat.com>
---

Z-MR: https://gitlab.com/redhat/prdsc/rhel/src/kernel-private/rhel-7/-/merge_requests/7

KT0 test PASS: https://beaker.engineering.redhat.com/jobs/5525685
for kpatch-patch-3_10_0-1160-1-7.el7 scratch build:
https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=37846414

 fs/seq_file.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
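
Note for reviewers: returning NULL from seq_buf_alloc() is sufficient
because every caller already treats allocation failure as -ENOMEM, which
is presumably why the static seq_buf_alloc() shows up inlined in the
changed functions seq_read, single_open_size and traverse listed above.
A paraphrased (not verbatim) sketch of the buffer-grow path in traverse()
in this kernel:

	/* Eoverflow: the record did not fit, so retry with a doubled buffer */
	kvfree(m->buf);
	m->count = 0;
	m->buf = seq_buf_alloc(m->size <<= 1);
	return !m->buf ? -ENOMEM : -EAGAIN;

With the new cap, an oversized request now fails cleanly here instead of
attempting a multi-gigabyte kvmalloc().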

diff --git a/fs/seq_file.c b/fs/seq_file.c
index bc7a9ec855aa..daef8f4bdbd0 100644
--- a/fs/seq_file.c
+++ b/fs/seq_file.c
@@ -5,6 +5,26 @@
  * initial implementation -- AV, Oct 2001.
  */
 
+/* inline linux/pagemap.h :: PAGE_CACHE_MASK and dependency values */
+
+/* arch/x86/include/asm/page_types.h */
+#ifdef __x86_64__
+# define PAGE_CACHE_MASK	(~((1UL << 12)-1))
+#endif
+
+/* arch/powerpc/include/asm/page.h */
+#ifdef __powerpc64__
+# if defined(CONFIG_PPC_256K_PAGES)
+#  define PAGE_CACHE_MASK	(~((1 << 18) - 1))
+# elif defined(CONFIG_PPC_64K_PAGES)
+#  define PAGE_CACHE_MASK	(~((1 << 16) - 1))
+# elif defined(CONFIG_PPC_16K_PAGES)
+#  define PAGE_CACHE_MASK	(~((1 << 14) - 1))
+# else
+#  define PAGE_CACHE_MASK	(~((1 << 12) - 1))
+# endif
+#endif
+
 #include <linux/fs.h>
 #include <linux/export.h>
 #include <linux/seq_file.h>
@@ -26,6 +46,9 @@ static void seq_set_overflow(struct seq_file *m)
 
 static void *seq_buf_alloc(unsigned long size)
 {
+	if (unlikely(size > MAX_RW_COUNT))
+		return NULL;
+
 	return kvmalloc(size, GFP_KERNEL);
 }
 
-- 
2.26.3