[PATCH 0/5][SRU][B/C] CVE-2018-18397 - tmpfs permissions bypass


[PATCH 0/5][SRU][B/C] CVE-2018-18397 - tmpfs permissions bypass

Tyler Hicks
https://people.canonical.com/~ubuntu-security/cve/2018/CVE-2018-18397.html

 The userfaultfd implementation in the Linux kernel before 4.19.7 mishandles
 access control for certain UFFDIO_ ioctl calls, as demonstrated by allowing
 local users to write data into holes in a tmpfs file (if the user has
 read-only access to that file, and that file contains holes), related to
 fs/userfaultfd.c and mm/userfaultfd.c.

All but one of these patches are clean cherry picks to Cosmic and Bionic. The
remaining patch required a manual backport because of minor context changes
caused by upstream commit 2cf855837b89d92996cf264713f3bed2bf9b0b4f being
absent from those kernels.

I've successfully regression tested these changes by running the
tools/testing/selftests/vm/run_vmtests kernel selftests, which exercise
userfaultfd.

Tyler

--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

[PATCH 1/5] userfaultfd: use ENOENT instead of EFAULT if the atomic copy user fails

Tyler Hicks
From: Andrea Arcangeli <[hidden email]>

Patch series "userfaultfd shmem updates".

Jann found two bugs in the userfaultfd shmem MAP_SHARED backend: the
lack of the VM_MAYWRITE check and the lack of i_size checks.

Then looking into the above we also fixed the MAP_PRIVATE case.

By source review, Hugh also found a source of data loss if UFFDIO_COPY is
used on shmem MAP_SHARED PROT_READ mappings (the production usages
incidentally run with PROT_READ|PROT_WRITE, so the data loss couldn't
happen in those production usages, like with QEMU).

The whole patchset is marked for stable.

We verified that QEMU postcopy live migration with the guest running on
shmem MAP_PRIVATE runs as well after the fix of shmem MAP_PRIVATE as it
did before. Regardless of whether it's shmem or hugetlbfs, MAP_PRIVATE or
MAP_SHARED, QEMU unconditionally invokes a punch hole if the guest mapping
is file-backed, and a MADV_DONTNEED too (needed to get rid of the
MAP_PRIVATE COWs and for the anon backend).

This patch (of 5):

We internally used EFAULT to communicate with the caller; switch to
ENOENT so that EFAULT can be used as a non-internal retval.

Link: http://lkml.kernel.org/r/20181126173452.26955-2-aarcange@...
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Andrea Arcangeli <[hidden email]>
Reviewed-by: Mike Rapoport <[hidden email]>
Reviewed-by: Hugh Dickins <[hidden email]>
Cc: Mike Kravetz <[hidden email]>
Cc: Jann Horn <[hidden email]>
Cc: Peter Xu <[hidden email]>
Cc: "Dr. David Alan Gilbert" <[hidden email]>
Cc: <[hidden email]>
Cc: [hidden email]
Signed-off-by: Andrew Morton <[hidden email]>
Signed-off-by: Linus Torvalds <[hidden email]>

CVE-2018-18397

(cherry picked from commit 9e368259ad988356c4c95150fafd1a06af095d98)
Signed-off-by: Tyler Hicks <[hidden email]>
---
 mm/hugetlb.c     | 2 +-
 mm/shmem.c       | 2 +-
 mm/userfaultfd.c | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b5bcf50a7d9f..f7a5539f8ddc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4070,7 +4070,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 
  /* fallback to copy_from_user outside mmap_sem */
  if (unlikely(ret)) {
- ret = -EFAULT;
+ ret = -ENOENT;
  *pagep = page;
  /* don't free the page */
  goto out;
diff --git a/mm/shmem.c b/mm/shmem.c
index 0c6023603b25..a8853fe0c8b6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2307,7 +2307,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
  *pagep = page;
  shmem_inode_unacct_blocks(inode, 1);
  /* don't free the page */
- return -EFAULT;
+ return -ENOENT;
  }
  } else { /* mfill_zeropage_atomic */
  clear_highpage(page);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 5029f241908f..46c8949e5f8f 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -48,7 +48,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 
  /* fallback to copy_from_user outside mmap_sem */
  if (unlikely(ret)) {
- ret = -EFAULT;
+ ret = -ENOENT;
  *pagep = page;
  /* don't free the page */
  goto out;
@@ -274,7 +274,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 
  cond_resched();
 
- if (unlikely(err == -EFAULT)) {
+ if (unlikely(err == -ENOENT)) {
  up_read(&dst_mm->mmap_sem);
  BUG_ON(!page);
 
@@ -530,7 +530,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
        src_addr, &page, zeropage);
  cond_resched();
 
- if (unlikely(err == -EFAULT)) {
+ if (unlikely(err == -ENOENT)) {
  void *page_kaddr;
 
  up_read(&dst_mm->mmap_sem);
--
2.7.4



[PATCH 2/5] userfaultfd: shmem: allocate anonymous memory for MAP_PRIVATE shmem

Tyler Hicks
From: Andrea Arcangeli <[hidden email]>

Userfaultfd did not create private memory when UFFDIO_COPY was invoked
on a MAP_PRIVATE shmem mapping.  Instead it wrote to the shmem file,
even when that had not been opened for writing.  Though, fortunately,
that could only happen where there was a hole in the file.

Fix the shmem-backed implementation of UFFDIO_COPY to create private
memory for MAP_PRIVATE mappings.  The hugetlbfs-backed implementation
was already correct.

This change is visible to userland, if userfaultfd has been used in
unintended ways: so it introduces a small risk of incompatibility, but
is necessary in order to respect file permissions.

An app that uses UFFDIO_COPY for anything like postcopy live migration
won't notice the difference, and in fact it'll run faster because there
will be no copy-on-write and memory waste in the tmpfs pagecache
anymore.

Userfaults on MAP_PRIVATE shmem keep triggering only on file holes like
before.

The real zeropage can also be built on a MAP_PRIVATE shmem mapping
through UFFDIO_ZEROPAGE and that's safe because the zeropage pte is
never dirty, in turn even an mprotect upgrading the vma permission from
PROT_READ to PROT_READ|PROT_WRITE won't make the zeropage pte writable.

Link: http://lkml.kernel.org/r/20181126173452.26955-3-aarcange@...
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Andrea Arcangeli <[hidden email]>
Reported-by: Mike Rapoport <[hidden email]>
Reviewed-by: Hugh Dickins <[hidden email]>
Cc: <[hidden email]>
Cc: "Dr. David Alan Gilbert" <[hidden email]>
Cc: Jann Horn <[hidden email]>
Cc: Mike Kravetz <[hidden email]>
Cc: Peter Xu <[hidden email]>
Cc: [hidden email]
Signed-off-by: Andrew Morton <[hidden email]>
Signed-off-by: Linus Torvalds <[hidden email]>

CVE-2018-18397

(cherry picked from commit 5b51072e97d587186c2f5390c8c9c1fb7e179505)
Signed-off-by: Tyler Hicks <[hidden email]>
---
 mm/userfaultfd.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 46c8949e5f8f..471b6457f95f 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -380,7 +380,17 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 {
  ssize_t err;
 
- if (vma_is_anonymous(dst_vma)) {
+ /*
+ * The normal page fault path for a shmem will invoke the
+ * fault, fill the hole in the file and COW it right away. The
+ * result generates plain anonymous memory. So when we are
+ * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll
+ * generate anonymous memory directly without actually filling
+ * the hole. For the MAP_PRIVATE case the robustness check
+ * only happens in the pagetable (to verify it's still none)
+ * and not in the radix tree.
+ */
+ if (!(dst_vma->vm_flags & VM_SHARED)) {
  if (!zeropage)
  err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
        dst_addr, src_addr, page);
@@ -489,7 +499,8 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
  * dst_vma.
  */
  err = -ENOMEM;
- if (vma_is_anonymous(dst_vma) && unlikely(anon_vma_prepare(dst_vma)))
+ if (!(dst_vma->vm_flags & VM_SHARED) &&
+    unlikely(anon_vma_prepare(dst_vma)))
  goto out_unlock;
 
  while (src_addr < src_start + len) {
--
2.7.4



[PATCH 3/5] userfaultfd: shmem/hugetlbfs: only allow to register VM_MAYWRITE vmas

Tyler Hicks
From: Andrea Arcangeli <[hidden email]>

After the VMA to register the uffd onto is found, check that it has
VM_MAYWRITE set before allowing registration.  This way we inherit all
common code checks before allowing to fill file holes in shmem and
hugetlbfs with UFFDIO_COPY.

The userfaultfd memory model is not applicable to read-only files unless
the mapping is MAP_PRIVATE.

Link: http://lkml.kernel.org/r/20181126173452.26955-4-aarcange@...
Fixes: ff62a3421044 ("hugetlb: implement memfd sealing")
Signed-off-by: Andrea Arcangeli <[hidden email]>
Reviewed-by: Mike Rapoport <[hidden email]>
Reviewed-by: Hugh Dickins <[hidden email]>
Reported-by: Jann Horn <[hidden email]>
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Cc: <[hidden email]>
Cc: "Dr. David Alan Gilbert" <[hidden email]>
Cc: Mike Kravetz <[hidden email]>
Cc: Peter Xu <[hidden email]>
Cc: [hidden email]
Signed-off-by: Andrew Morton <[hidden email]>
Signed-off-by: Linus Torvalds <[hidden email]>

CVE-2018-18397

(cherry picked from commit 29ec90660d68bbdd69507c1c8b4e33aa299278b1)
Signed-off-by: Tyler Hicks <[hidden email]>
---
 fs/userfaultfd.c | 15 +++++++++++++++
 mm/userfaultfd.c | 15 ++++++---------
 2 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index bad9cea37f12..49b638bca32d 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1363,6 +1363,19 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
  ret = -EINVAL;
  if (!vma_can_userfault(cur))
  goto out_unlock;
+
+ /*
+ * UFFDIO_COPY will fill file holes even without
+ * PROT_WRITE. This check enforces that if this is a
+ * MAP_SHARED, the process has write permission to the backing
+ * file. If VM_MAYWRITE is set it also enforces that on a
+ * MAP_SHARED vma: there is no F_WRITE_SEAL and no further
+ * F_WRITE_SEAL can be taken until the vma is destroyed.
+ */
+ ret = -EPERM;
+ if (unlikely(!(cur->vm_flags & VM_MAYWRITE)))
+ goto out_unlock;
+
  /*
  * If this vma contains ending address, and huge pages
  * check alignment.
@@ -1408,6 +1421,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
  BUG_ON(!vma_can_userfault(vma));
  BUG_ON(vma->vm_userfaultfd_ctx.ctx &&
        vma->vm_userfaultfd_ctx.ctx != ctx);
+ WARN_ON(!(vma->vm_flags & VM_MAYWRITE));
 
  /*
  * Nothing to do: this vma is already registered into this
@@ -1554,6 +1568,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
  cond_resched();
 
  BUG_ON(!vma_can_userfault(vma));
+ WARN_ON(!(vma->vm_flags & VM_MAYWRITE));
 
  /*
  * Nothing to do: this vma is already registered into this
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 471b6457f95f..43cf314cfddd 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -205,8 +205,9 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
  if (!dst_vma || !is_vm_hugetlb_page(dst_vma))
  goto out_unlock;
  /*
- * Only allow __mcopy_atomic_hugetlb on userfaultfd
- * registered ranges.
+ * Check the vma is registered in uffd, this is
+ * required to enforce the VM_MAYWRITE check done at
+ * uffd registration time.
  */
  if (!dst_vma->vm_userfaultfd_ctx.ctx)
  goto out_unlock;
@@ -459,13 +460,9 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
  if (!dst_vma)
  goto out_unlock;
  /*
- * Be strict and only allow __mcopy_atomic on userfaultfd
- * registered ranges to prevent userland errors going
- * unnoticed. As far as the VM consistency is concerned, it
- * would be perfectly safe to remove this check, but there's
- * no useful usage for __mcopy_atomic ouside of userfaultfd
- * registered ranges. This is after all why these are ioctls
- * belonging to the userfaultfd and not syscalls.
+ * Check the vma is registered in uffd, this is required to
+ * enforce the VM_MAYWRITE check done at uffd registration
+ * time.
  */
  if (!dst_vma->vm_userfaultfd_ctx.ctx)
  goto out_unlock;
--
2.7.4



[PATCH 4/5] userfaultfd: shmem: add i_size checks

Tyler Hicks
From: Andrea Arcangeli <[hidden email]>

With MAP_SHARED: recheck the i_size after taking the PT lock, to
serialize against truncate with the PT lock.  Delete the page from the
pagecache if the i_size_read check fails.

With MAP_PRIVATE: check the i_size after the PT lock before mapping
anonymous memory or zeropages into the MAP_PRIVATE shmem mapping.

A mostly irrelevant cleanup: just as we do the delete_from_page_cache()
pagecache removal after dropping the PT lock, drop the PT lock (a
spinlock) before taking the sleepable page lock.

Link: http://lkml.kernel.org/r/20181126173452.26955-5-aarcange@...
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Signed-off-by: Andrea Arcangeli <[hidden email]>
Reviewed-by: Mike Rapoport <[hidden email]>
Reviewed-by: Hugh Dickins <[hidden email]>
Reported-by: Jann Horn <[hidden email]>
Cc: <[hidden email]>
Cc: "Dr. David Alan Gilbert" <[hidden email]>
Cc: Mike Kravetz <[hidden email]>
Cc: Peter Xu <[hidden email]>
Cc: [hidden email]
Signed-off-by: Andrew Morton <[hidden email]>
Signed-off-by: Linus Torvalds <[hidden email]>

CVE-2018-18397

(backported from commit e2a50c1f64145a04959df2442305d57307e5395a)
Signed-off-by: Tyler Hicks <[hidden email]>
---
 mm/shmem.c       | 18 ++++++++++++++++--
 mm/userfaultfd.c | 26 ++++++++++++++++++++++++--
 2 files changed, 40 insertions(+), 4 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index a8853fe0c8b6..fb19d2e2fa11 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2285,6 +2285,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
  struct page *page;
  pte_t _dst_pte, *dst_pte;
  int ret;
+ pgoff_t offset, max_off;
 
  ret = -ENOMEM;
  if (!shmem_inode_acct_block(inode, 1))
@@ -2322,6 +2323,12 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
  __SetPageSwapBacked(page);
  __SetPageUptodate(page);
 
+ ret = -EFAULT;
+ offset = linear_page_index(dst_vma, dst_addr);
+ max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+ if (unlikely(offset >= max_off))
+ goto out_release;
+
  ret = mem_cgroup_try_charge(page, dst_mm, gfp, &memcg, false);
  if (ret)
  goto out_release;
@@ -2340,8 +2347,14 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
  if (dst_vma->vm_flags & VM_WRITE)
  _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
 
- ret = -EEXIST;
  dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+ ret = -EFAULT;
+ max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+ if (unlikely(offset >= max_off))
+ goto out_release_uncharge_unlock;
+
+ ret = -EEXIST;
  if (!pte_none(*dst_pte))
  goto out_release_uncharge_unlock;
 
@@ -2359,13 +2372,14 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 
  /* No need to invalidate - it was non-present before */
  update_mmu_cache(dst_vma, dst_addr, dst_pte);
- unlock_page(page);
  pte_unmap_unlock(dst_pte, ptl);
+ unlock_page(page);
  ret = 0;
 out:
  return ret;
 out_release_uncharge_unlock:
  pte_unmap_unlock(dst_pte, ptl);
+ delete_from_page_cache(page);
 out_release_uncharge:
  mem_cgroup_cancel_charge(page, memcg, false);
 out_release:
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 43cf314cfddd..458acda96f20 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -33,6 +33,8 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
  void *page_kaddr;
  int ret;
  struct page *page;
+ pgoff_t offset, max_off;
+ struct inode *inode;
 
  if (!*pagep) {
  ret = -ENOMEM;
@@ -73,8 +75,17 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
  if (dst_vma->vm_flags & VM_WRITE)
  _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
 
- ret = -EEXIST;
  dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+ if (dst_vma->vm_file) {
+ /* the shmem MAP_PRIVATE case requires checking the i_size */
+ inode = dst_vma->vm_file->f_inode;
+ offset = linear_page_index(dst_vma, dst_addr);
+ max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+ ret = -EFAULT;
+ if (unlikely(offset >= max_off))
+ goto out_release_uncharge_unlock;
+ }
+ ret = -EEXIST;
  if (!pte_none(*dst_pte))
  goto out_release_uncharge_unlock;
 
@@ -108,11 +119,22 @@ static int mfill_zeropage_pte(struct mm_struct *dst_mm,
  pte_t _dst_pte, *dst_pte;
  spinlock_t *ptl;
  int ret;
+ pgoff_t offset, max_off;
+ struct inode *inode;
 
  _dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr),
  dst_vma->vm_page_prot));
- ret = -EEXIST;
  dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+ if (dst_vma->vm_file) {
+ /* the shmem MAP_PRIVATE case requires checking the i_size */
+ inode = dst_vma->vm_file->f_inode;
+ offset = linear_page_index(dst_vma, dst_addr);
+ max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+ ret = -EFAULT;
+ if (unlikely(offset >= max_off))
+ goto out_unlock;
+ }
+ ret = -EEXIST;
  if (!pte_none(*dst_pte))
  goto out_unlock;
  set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
--
2.7.4



[PATCH 5/5] userfaultfd: shmem: UFFDIO_COPY: set the page dirty if VM_WRITE is not set

Tyler Hicks
From: Andrea Arcangeli <[hidden email]>

Set the page dirty if VM_WRITE is not set, because in that case the pte
won't be marked dirty and the page could be reclaimed without writepage
(i.e. swapout in the shmem case).

This was found by source review.  Most apps (certainly including QEMU)
only use UFFDIO_COPY on PROT_READ|PROT_WRITE mappings, or the app couldn't
modify the memory in the first place.  This is for correctness, and it
could help the non-cooperative use case avoid unexpected data loss.

Link: http://lkml.kernel.org/r/20181126173452.26955-6-aarcange@...
Reviewed-by: Hugh Dickins <[hidden email]>
Cc: [hidden email]
Fixes: 4c27fe4c4c84 ("userfaultfd: shmem: add shmem_mcopy_atomic_pte for userfaultfd support")
Reported-by: Hugh Dickins <[hidden email]>
Signed-off-by: Andrea Arcangeli <[hidden email]>
Cc: "Dr. David Alan Gilbert" <[hidden email]>
Cc: Jann Horn <[hidden email]>
Cc: Mike Kravetz <[hidden email]>
Cc: Mike Rapoport <[hidden email]>
Cc: Peter Xu <[hidden email]>
Signed-off-by: Andrew Morton <[hidden email]>
Signed-off-by: Linus Torvalds <[hidden email]>

CVE-2018-18397

(cherry picked from commit dcf7fe9d89763a28e0f43975b422ff141fe79e43)
Signed-off-by: Tyler Hicks <[hidden email]>
---
 mm/shmem.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/shmem.c b/mm/shmem.c
index fb19d2e2fa11..67b7fa08036a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2346,6 +2346,16 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
  _dst_pte = mk_pte(page, dst_vma->vm_page_prot);
  if (dst_vma->vm_flags & VM_WRITE)
  _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
+ else {
+ /*
+ * We don't set the pte dirty if the vma has no
+ * VM_WRITE permission, so mark the page dirty or it
+ * could be freed from under us. We could do it
+ * unconditionally before unlock_page(), but doing it
+ * only if VM_WRITE is not set is faster.
+ */
+ set_page_dirty(page);
+ }
 
  dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
 
@@ -2379,6 +2389,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
  return ret;
 out_release_uncharge_unlock:
  pte_unmap_unlock(dst_pte, ptl);
+ ClearPageDirty(page);
  delete_from_page_cache(page);
 out_release_uncharge:
  mem_cgroup_cancel_charge(page, memcg, false);
--
2.7.4



ACK: [PATCH 0/5][SRU][B/C] CVE-2018-18397 - tmpfs permissions bypass

Stefan Bader
On 25.01.19 03:01, Tyler Hicks wrote:

> [...]
Cherry picks (mostly) and tested.

Acked-by: Stefan Bader <[hidden email]>




ACK: [PATCH 0/5][SRU][B/C] CVE-2018-18397 - tmpfs permissions bypass

Kleber Souza
On 1/25/19 3:01 AM, Tyler Hicks wrote:

> [...]
Acked-by: Kleber Sacilotto de Souza <[hidden email]>



APPLIED: [PATCH 0/5][SRU][B/C] CVE-2018-18397 - tmpfs permissions bypass

Khaled Elmously
On 2019-01-25 02:01:21, Tyler Hicks wrote:

> [...]
