From 909e24465c59ba4df4124d7caa0730f609c33369 Mon Sep 17 00:00:00 2001
From: Joe Lawrence <joe.lawrence@redhat.com>
Date: Tue, 26 Oct 2021 18:00:19 -0400
Subject: [KPATCH CVE-2020-36385] RDMA/ucma: kpatch fixes for CVE-2020-36385
Changes since last build:
arches: x86_64 ppc64le
ucma.o: changed function: ucma_migrate_id
---------------------------
Kernels:
4.18.0-305.el8
4.18.0-305.3.1.el8_4
4.18.0-305.7.1.el8_4
4.18.0-305.10.2.el8_4
4.18.0-305.12.1.el8_4
4.18.0-305.17.1.el8_4
4.18.0-305.19.1.el8_4
Modifications:
Kpatch-MR: https://gitlab.com/redhat/prdsc/rhel/src/kpatch/rhel-8/-/merge_requests/1
Approved-by: Artem Savkov (@artem.savkov)
- Avoid the complications of reworking all the locks (and preceding
commits) and apply a minimal patch to avoid the CVE condition.
- Always inline ucma_unlock_files() to avoid a new function on x86_64
Z-MR: https://gitlab.com/redhat/rhel/src/kernel/rhel-8/-/merge_requests/961
KT0 test PASS: https://beaker.engineering.redhat.com/jobs/5944203
for kpatch-patch-4_18_0-305-1-6.el8 scratch build:
https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=40633078
commit 15152e7e191054883e2859892e77b442c531d1e1
Author: Kamal Heib <kheib@redhat.com>
Date: Sun Mar 7 11:30:04 2021 +0200
RDMA/ucma: Rework ucma_migrate_id() to avoid races with destroy
Bugzilla: https://bugzilla.redhat.com/1982040
CVE: CVE-2020-36385
Y-Commit: 20c1e291ce96ca474f5e94272d2e6511c6a38d58
O-Bugzilla: http://bugzilla.redhat.com/1931846
O-CVE: CVE-2020-36385
commit f5449e74802c1112dea984aec8af7a33c4516af1
Author: Jason Gunthorpe <jgg@nvidia.com>
Date: Mon Sep 14 08:59:56 2020 -0300
RDMA/ucma: Rework ucma_migrate_id() to avoid races with destroy
ucma_destroy_id() assumes that all things accessing the ctx will do so via
the xarray. This assumption is violated only in the case the FD is being
closed, then the ctx is reached via the ctx_list. Normally this is OK
since ucma_destroy_id() cannot run concurrently with release(); however,
when ucma_migrate_id() is involved this can be violated, as the close of the
2nd FD can run concurrently with destroy on the first:
                CPU0                      CPU1
        ucma_destroy_id(fda)
                                  ucma_migrate_id(fda -> fdb)
                                        ucma_get_ctx()
        xa_lock()
         _ucma_find_context()
         xa_erase()
        xa_unlock()
                                        xa_lock()
                                         ctx->file = new_file
                                         list_move()
                                        xa_unlock()
                                       ucma_put_ctx()
                                   ucma_close(fdb)
                                      _destroy_id()
                                      kfree(ctx)
        _destroy_id()
          wait_for_completion()
          // boom, ctx was freed
The ctx->file must be modified under the handler and xa_lock, and prior to
modification the ID must be rechecked to confirm that it is still reachable
from cur_file, i.e. there is no parallel destroy or migrate.
To make this work remove the double locking and streamline the control
flow. The double locking was obsoleted by the handler lock now directly
preventing new uevents from being created, and the ctx_list cannot be read
while holding fgets on both files. Removing the double locking also
removes the need to check for the same file.
Fixes: 88314e4dda1e ("RDMA/cma: add support for rdma_migrate_id()")
Link: https://lore.kernel.org/r/0-v1-05c5a4090305+3a872-ucma_syz_migrate_jgg@nvidia.com
Reported-and-tested-by: syzbot+cc6fc752b3819e082d0c@syzkaller.appspotmail.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kamal Heib <kheib@redhat.com>
Signed-off-by: Joe Lawrence <joe.lawrence@redhat.com>
---
drivers/infiniband/core/ucma.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
index 138e0096ca8e..2cd4277bf4a8 100644
--- a/drivers/infiniband/core/ucma.c
+++ b/drivers/infiniband/core/ucma.c
@@ -1626,7 +1626,7 @@ static void ucma_lock_files(struct ucma_file *file1, struct ucma_file *file2)
}
}
-static void ucma_unlock_files(struct ucma_file *file1, struct ucma_file *file2)
+static __always_inline void ucma_unlock_files(struct ucma_file *file1, struct ucma_file *file2)
{
if (file1 < file2) {
mutex_unlock(&file2->mut);
@@ -1689,6 +1689,14 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file,
ucma_lock_files(cur_file, new_file);
xa_lock(&ctx_table);
+ /* CVE-2020-36385 kpatch: double check the context one last time */
+ if (_ucma_find_context(cmd.id, cur_file) != ctx) {
+ xa_unlock(&ctx_table);
+ ucma_unlock_files(cur_file, new_file);
+ ret = -ENOENT;
+ goto err_unlock;
+ }
+
list_move_tail(&ctx->list, &new_file->ctx_list);
ucma_move_events(ctx, new_file);
ctx->file = new_file;
@@ -1702,6 +1710,7 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file,
&resp, sizeof(resp)))
ret = -EFAULT;
+err_unlock:
ucma_put_ctx(ctx);
file_put:
fdput(f);
--
2.31.1