[SRU][Xenial][PATCH 0/5] Prevent speculation on user controlled pointer (LP #1775137)


Juerg Haefliger
BugLink: https://bugs.launchpad.net/bugs/1775137

This patchset adds the missing Spectre v1 mitigation that prevents speculation on user-controlled pointers.

== SRU Justification ==
Upstream's Spectre v1 mitigation prevents speculation on a user-controlled pointer. This part of the Spectre v1 patchset was never backported to 4.4 (for unknown reasons), so Xenial/Trusty/Precise are lacking it as well. All the other stable upstream kernels include it, so add it to our older kernels.

== Fix ==
Backport the following patches:
x86/uaccess: Use __uaccess_begin_nospec() and uaccess_try_nospec
x86/usercopy: Replace open coded stac/clac with __uaccess_{begin, end}
x86: Introduce __uaccess_begin_nospec() and uaccess_try_nospec

== Regression Potential ==
Low. The patches have been upstream (and in other distro kernels) for quite a while now, and the changes only introduce a barrier on copy_from_user operations.

== Test Case ==
TBD.

Signed-off-by: Juerg Haefliger <[hidden email]>


Dan Williams (3):
  x86: Introduce __uaccess_begin_nospec() and uaccess_try_nospec
  x86/usercopy: Replace open coded stac/clac with __uaccess_{begin, end}
  x86/uaccess: Use __uaccess_begin_nospec() and uaccess_try_nospec

Linus Torvalds (2):
  x86: reorganize SMAP handling in user space accesses
  x86: fix SMAP in 32-bit environments

 arch/x86/include/asm/uaccess.h    | 64 ++++++++++++++-------
 arch/x86/include/asm/uaccess_32.h | 26 +++++++++
 arch/x86/include/asm/uaccess_64.h | 94 ++++++++++++++++++++++---------
 arch/x86/lib/usercopy_32.c        | 20 +++----
 4 files changed, 147 insertions(+), 57 deletions(-)

--
2.17.1



[SRU][Xenial][PATCH 1/5] x86: reorganize SMAP handling in user space accesses

Juerg Haefliger
From: Linus Torvalds <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1775137

This reorganizes how we do the stac/clac instructions in the user access
code.  Instead of adding the instructions directly to the same inline
asm that does the actual user level access and exception handling, add
them at a higher level.

This is mainly preparation for the next step, where we will expose an
interface to allow users to mark several accesses together as being user
space accesses, but it does already clean up some code:

 - the inlined trivial cases of copy_in_user() now do stac/clac just
   once over the accesses: they used to do one pair around the user
   space read, and another pair around the write-back.

 - the {get,put}_user_ex() macros that are used with the catch/try
   handling don't do any stac/clac at all, because that happens in the
   try/catch surrounding them.

Other than those two cleanups that happened naturally from the
re-organization, this should not make any difference. Yet.
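
To make the shape of the change concrete, here is a sketch (not an excerpt
from the diff below; tmp, src and dst stand in for the real locals) of the
before/after pattern for an inlined two-step access such as the trivial
copy_in_user() case:

    /* before: every asm template carried its own stac/clac pair */
    __get_user_asm(tmp, (u8 __user *)src, ret, "b", "b", "=q", 1);
    if (likely(!ret))
            __put_user_asm(tmp, (u8 __user *)dst, ret, "b", "b", "iq", 1);

    /* after: one explicit user-access window around the whole sequence */
    __uaccess_begin();                              /* stac() */
    __get_user_asm(tmp, (u8 __user *)src, ret, "b", "b", "=q", 1);
    if (likely(!ret))
            __put_user_asm(tmp, (u8 __user *)dst, ret, "b", "b", "iq", 1);
    __uaccess_end();                                /* clac() */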

Signed-off-by: Linus Torvalds <[hidden email]>
(cherry picked from commit 11f1a4b9755f5dbc3e822a96502ebe9b044b14d8)
Signed-off-by: Juerg Haefliger <[hidden email]>
---
 arch/x86/include/asm/uaccess.h    | 53 +++++++++++------
 arch/x86/include/asm/uaccess_64.h | 94 ++++++++++++++++++++++---------
 2 files changed, 101 insertions(+), 46 deletions(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index d788b0cdc0ad..e93a69f9a225 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -144,6 +144,9 @@ extern int __get_user_4(void);
 extern int __get_user_8(void);
 extern int __get_user_bad(void);
 
+#define __uaccess_begin() stac()
+#define __uaccess_end()   clac()
+
 /*
  * This is a type: either unsigned long, if the argument fits into
  * that type, or otherwise unsigned long long.
@@ -203,10 +206,10 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
 
 #ifdef CONFIG_X86_32
 #define __put_user_asm_u64(x, addr, err, errret) \
- asm volatile(ASM_STAC "\n" \
+ asm volatile("\n" \
      "1: movl %%eax,0(%2)\n" \
      "2: movl %%edx,4(%2)\n" \
-     "3: " ASM_CLAC "\n" \
+     "3:" \
      ".section .fixup,\"ax\"\n" \
      "4: movl %3,%0\n" \
      " jmp 3b\n" \
@@ -217,10 +220,10 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
      : "A" (x), "r" (addr), "i" (errret), "0" (err))
 
 #define __put_user_asm_ex_u64(x, addr) \
- asm volatile(ASM_STAC "\n" \
+ asm volatile("\n" \
      "1: movl %%eax,0(%1)\n" \
      "2: movl %%edx,4(%1)\n" \
-     "3: " ASM_CLAC "\n" \
+     "3:" \
      _ASM_EXTABLE_EX(1b, 2b) \
      _ASM_EXTABLE_EX(2b, 3b) \
      : : "A" (x), "r" (addr))
@@ -314,6 +317,10 @@ do { \
  } \
 } while (0)
 
+/*
+ * This doesn't do __uaccess_begin/end - the exception handling
+ * around it must do that.
+ */
 #define __put_user_size_ex(x, ptr, size) \
 do { \
  __chk_user_ptr(ptr); \
@@ -368,9 +375,9 @@ do { \
 } while (0)
 
 #define __get_user_asm(x, addr, err, itype, rtype, ltype, errret) \
- asm volatile(ASM_STAC "\n" \
+ asm volatile("\n" \
      "1: mov"itype" %2,%"rtype"1\n" \
-     "2: " ASM_CLAC "\n" \
+     "2:\n" \
      ".section .fixup,\"ax\"\n" \
      "3: mov %3,%0\n" \
      " xor"itype" %"rtype"1,%"rtype"1\n" \
@@ -380,6 +387,10 @@ do { \
      : "=r" (err), ltype(x) \
      : "m" (__m(addr)), "i" (errret), "0" (err))
 
+/*
+ * This doesn't do __uaccess_begin/end - the exception handling
+ * around it must do that.
+ */
 #define __get_user_size_ex(x, ptr, size) \
 do { \
  __chk_user_ptr(ptr); \
@@ -410,7 +421,9 @@ do { \
 #define __put_user_nocheck(x, ptr, size) \
 ({ \
  int __pu_err; \
+ __uaccess_begin(); \
  __put_user_size((x), (ptr), (size), __pu_err, -EFAULT); \
+ __uaccess_end(); \
  __builtin_expect(__pu_err, 0); \
 })
 
@@ -418,7 +431,9 @@ do { \
 ({ \
  int __gu_err; \
  unsigned long __gu_val; \
+ __uaccess_begin(); \
  __get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT); \
+ __uaccess_end(); \
  (x) = (__force __typeof__(*(ptr)))__gu_val; \
  __builtin_expect(__gu_err, 0); \
 })
@@ -433,9 +448,9 @@ struct __large_struct { unsigned long buf[100]; };
  * aliasing issues.
  */
 #define __put_user_asm(x, addr, err, itype, rtype, ltype, errret) \
- asm volatile(ASM_STAC "\n" \
+ asm volatile("\n" \
      "1: mov"itype" %"rtype"1,%2\n" \
-     "2: " ASM_CLAC "\n" \
+     "2:\n" \
      ".section .fixup,\"ax\"\n" \
      "3: mov %3,%0\n" \
      " jmp 2b\n" \
@@ -455,11 +470,11 @@ struct __large_struct { unsigned long buf[100]; };
  */
 #define uaccess_try do { \
  current_thread_info()->uaccess_err = 0; \
- stac(); \
+ __uaccess_begin(); \
  barrier();
 
 #define uaccess_catch(err) \
- clac(); \
+ __uaccess_end(); \
  (err) |= (current_thread_info()->uaccess_err ? -EFAULT : 0); \
 } while (0)
 
@@ -557,12 +572,13 @@ extern void __cmpxchg_wrong_size(void)
  __typeof__(ptr) __uval = (uval); \
  __typeof__(*(ptr)) __old = (old); \
  __typeof__(*(ptr)) __new = (new); \
+ __uaccess_begin(); \
  switch (size) { \
  case 1: \
  { \
- asm volatile("\t" ASM_STAC "\n" \
+ asm volatile("\n" \
  "1:\t" LOCK_PREFIX "cmpxchgb %4, %2\n" \
- "2:\t" ASM_CLAC "\n" \
+ "2:\n" \
  "\t.section .fixup, \"ax\"\n" \
  "3:\tmov     %3, %0\n" \
  "\tjmp     2b\n" \
@@ -576,9 +592,9 @@ extern void __cmpxchg_wrong_size(void)
  } \
  case 2: \
  { \
- asm volatile("\t" ASM_STAC "\n" \
+ asm volatile("\n" \
  "1:\t" LOCK_PREFIX "cmpxchgw %4, %2\n" \
- "2:\t" ASM_CLAC "\n" \
+ "2:\n" \
  "\t.section .fixup, \"ax\"\n" \
  "3:\tmov     %3, %0\n" \
  "\tjmp     2b\n" \
@@ -592,9 +608,9 @@ extern void __cmpxchg_wrong_size(void)
  } \
  case 4: \
  { \
- asm volatile("\t" ASM_STAC "\n" \
+ asm volatile("\n" \
  "1:\t" LOCK_PREFIX "cmpxchgl %4, %2\n" \
- "2:\t" ASM_CLAC "\n" \
+ "2:\n" \
  "\t.section .fixup, \"ax\"\n" \
  "3:\tmov     %3, %0\n" \
  "\tjmp     2b\n" \
@@ -611,9 +627,9 @@ extern void __cmpxchg_wrong_size(void)
  if (!IS_ENABLED(CONFIG_X86_64)) \
  __cmpxchg_wrong_size(); \
  \
- asm volatile("\t" ASM_STAC "\n" \
+ asm volatile("\n" \
  "1:\t" LOCK_PREFIX "cmpxchgq %4, %2\n" \
- "2:\t" ASM_CLAC "\n" \
+ "2:\n" \
  "\t.section .fixup, \"ax\"\n" \
  "3:\tmov     %3, %0\n" \
  "\tjmp     2b\n" \
@@ -628,6 +644,7 @@ extern void __cmpxchg_wrong_size(void)
  default: \
  __cmpxchg_wrong_size(); \
  } \
+ __uaccess_end(); \
  *__uval = __old; \
  __ret; \
 })
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index d83a55b95a48..307698688fa1 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -56,35 +56,49 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
  if (!__builtin_constant_p(size))
  return copy_user_generic(dst, (__force void *)src, size);
  switch (size) {
- case 1:__get_user_asm(*(u8 *)dst, (u8 __user *)src,
+ case 1:
+ __uaccess_begin();
+ __get_user_asm(*(u8 *)dst, (u8 __user *)src,
       ret, "b", "b", "=q", 1);
+ __uaccess_end();
  return ret;
- case 2:__get_user_asm(*(u16 *)dst, (u16 __user *)src,
+ case 2:
+ __uaccess_begin();
+ __get_user_asm(*(u16 *)dst, (u16 __user *)src,
       ret, "w", "w", "=r", 2);
+ __uaccess_end();
  return ret;
- case 4:__get_user_asm(*(u32 *)dst, (u32 __user *)src,
+ case 4:
+ __uaccess_begin();
+ __get_user_asm(*(u32 *)dst, (u32 __user *)src,
       ret, "l", "k", "=r", 4);
+ __uaccess_end();
  return ret;
- case 8:__get_user_asm(*(u64 *)dst, (u64 __user *)src,
+ case 8:
+ __uaccess_begin();
+ __get_user_asm(*(u64 *)dst, (u64 __user *)src,
       ret, "q", "", "=r", 8);
+ __uaccess_end();
  return ret;
  case 10:
+ __uaccess_begin();
  __get_user_asm(*(u64 *)dst, (u64 __user *)src,
        ret, "q", "", "=r", 10);
- if (unlikely(ret))
- return ret;
- __get_user_asm(*(u16 *)(8 + (char *)dst),
-       (u16 __user *)(8 + (char __user *)src),
-       ret, "w", "w", "=r", 2);
+ if (likely(!ret))
+ __get_user_asm(*(u16 *)(8 + (char *)dst),
+       (u16 __user *)(8 + (char __user *)src),
+       ret, "w", "w", "=r", 2);
+ __uaccess_end();
  return ret;
  case 16:
+ __uaccess_begin();
  __get_user_asm(*(u64 *)dst, (u64 __user *)src,
        ret, "q", "", "=r", 16);
- if (unlikely(ret))
- return ret;
- __get_user_asm(*(u64 *)(8 + (char *)dst),
-       (u64 __user *)(8 + (char __user *)src),
-       ret, "q", "", "=r", 8);
+ if (likely(!ret))
+ __get_user_asm(*(u64 *)(8 + (char *)dst),
+       (u64 __user *)(8 + (char __user *)src),
+       ret, "q", "", "=r", 8);
+ __uaccess_end();
  return ret;
  default:
  return copy_user_generic(dst, (__force void *)src, size);
@@ -106,35 +120,51 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
  if (!__builtin_constant_p(size))
  return copy_user_generic((__force void *)dst, src, size);
  switch (size) {
- case 1:__put_user_asm(*(u8 *)src, (u8 __user *)dst,
+ case 1:
+ __uaccess_begin();
+ __put_user_asm(*(u8 *)src, (u8 __user *)dst,
       ret, "b", "b", "iq", 1);
+ __uaccess_end();
  return ret;
- case 2:__put_user_asm(*(u16 *)src, (u16 __user *)dst,
+ case 2:
+ __uaccess_begin();
+ __put_user_asm(*(u16 *)src, (u16 __user *)dst,
       ret, "w", "w", "ir", 2);
+ __uaccess_end();
  return ret;
- case 4:__put_user_asm(*(u32 *)src, (u32 __user *)dst,
+ case 4:
+ __uaccess_begin();
+ __put_user_asm(*(u32 *)src, (u32 __user *)dst,
       ret, "l", "k", "ir", 4);
+ __uaccess_end();
  return ret;
- case 8:__put_user_asm(*(u64 *)src, (u64 __user *)dst,
+ case 8:
+ __uaccess_begin();
+ __put_user_asm(*(u64 *)src, (u64 __user *)dst,
       ret, "q", "", "er", 8);
+ __uaccess_end();
  return ret;
  case 10:
+ __uaccess_begin();
  __put_user_asm(*(u64 *)src, (u64 __user *)dst,
        ret, "q", "", "er", 10);
- if (unlikely(ret))
- return ret;
- asm("":::"memory");
- __put_user_asm(4[(u16 *)src], 4 + (u16 __user *)dst,
-       ret, "w", "w", "ir", 2);
+ if (likely(!ret)) {
+ asm("":::"memory");
+ __put_user_asm(4[(u16 *)src], 4 + (u16 __user *)dst,
+       ret, "w", "w", "ir", 2);
+ }
+ __uaccess_end();
  return ret;
  case 16:
+ __uaccess_begin();
  __put_user_asm(*(u64 *)src, (u64 __user *)dst,
        ret, "q", "", "er", 16);
- if (unlikely(ret))
- return ret;
- asm("":::"memory");
- __put_user_asm(1[(u64 *)src], 1 + (u64 __user *)dst,
-       ret, "q", "", "er", 8);
+ if (likely(!ret)) {
+ asm("":::"memory");
+ __put_user_asm(1[(u64 *)src], 1 + (u64 __user *)dst,
+       ret, "q", "", "er", 8);
+ }
+ __uaccess_end();
  return ret;
  default:
  return copy_user_generic((__force void *)dst, src, size);
@@ -160,39 +190,47 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
  switch (size) {
  case 1: {
  u8 tmp;
+ __uaccess_begin();
  __get_user_asm(tmp, (u8 __user *)src,
        ret, "b", "b", "=q", 1);
  if (likely(!ret))
  __put_user_asm(tmp, (u8 __user *)dst,
        ret, "b", "b", "iq", 1);
+ __uaccess_end();
  return ret;
  }
  case 2: {
  u16 tmp;
+ __uaccess_begin();
  __get_user_asm(tmp, (u16 __user *)src,
        ret, "w", "w", "=r", 2);
  if (likely(!ret))
  __put_user_asm(tmp, (u16 __user *)dst,
        ret, "w", "w", "ir", 2);
+ __uaccess_end();
  return ret;
  }
 
  case 4: {
  u32 tmp;
+ __uaccess_begin();
  __get_user_asm(tmp, (u32 __user *)src,
        ret, "l", "k", "=r", 4);
  if (likely(!ret))
  __put_user_asm(tmp, (u32 __user *)dst,
        ret, "l", "k", "ir", 4);
+ __uaccess_end();
  return ret;
  }
  case 8: {
  u64 tmp;
+ __uaccess_begin();
  __get_user_asm(tmp, (u64 __user *)src,
        ret, "q", "", "=r", 8);
  if (likely(!ret))
  __put_user_asm(tmp, (u64 __user *)dst,
        ret, "q", "", "er", 8);
+ __uaccess_end();
  return ret;
  }
  default:
--
2.17.1



[SRU][Xenial][PATCH 2/5] x86: fix SMAP in 32-bit environments

Juerg Haefliger
In reply to this post by Juerg Haefliger
From: Linus Torvalds <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1775137

In commit 11f1a4b9755f ("x86: reorganize SMAP handling in user space
accesses") I changed how the stac/clac instructions were generated
around the user space accesses, which then made it possible to do
batched accesses efficiently for user string copies etc.

However, in doing so, I completely spaced out, and didn't even think
about the 32-bit case.  And nobody really even seemed to notice, because
SMAP doesn't even exist until modern Skylake processors, and you'd have
to be crazy to run 32-bit kernels on a modern CPU.

Which brings us to Andy Lutomirski.

He actually tested the 32-bit kernel on new hardware, and noticed that
it doesn't work.  My bad.  The trivial fix is to add the required
uaccess begin/end markers around the raw accesses in <asm/uaccess_32.h>.

I feel a bit bad about this patch, just because that header file really
should be cleaned up to avoid all the duplicated code in it, and this
commit just expands on the problem.  But this just fixes the bug without
any bigger cleanup surgery.
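
As a sketch of what was broken and what this restores (the real hunks follow
below), take the constant-size case in __copy_from_user_inatomic():

    /* after 11f1a4b9755f the asm no longer toggles SMAP itself, so this
     * constant-size fast path ran with SMAP still armed and faulted: */
    case 1:
            __get_user_size(*(u8 *)to, from, 1, ret, 1);
            return ret;

    /* with this fix, the user-access window is opened and closed explicitly: */
    case 1:
            __uaccess_begin();
            __get_user_size(*(u8 *)to, from, 1, ret, 1);
            __uaccess_end();
            return ret;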

Reported-and-tested-by: Andy Lutomirski <[hidden email]>
Signed-off-by: Linus Torvalds <[hidden email]>
(cherry picked from commit de9e478b9d49f3a0214310d921450cf5bb4a21e6)
Signed-off-by: Juerg Haefliger <[hidden email]>
---
 arch/x86/include/asm/uaccess_32.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index f5dcb5204dcd..3fe0eac59462 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -48,20 +48,28 @@ __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 
  switch (n) {
  case 1:
+ __uaccess_begin();
  __put_user_size(*(u8 *)from, (u8 __user *)to,
  1, ret, 1);
+ __uaccess_end();
  return ret;
  case 2:
+ __uaccess_begin();
  __put_user_size(*(u16 *)from, (u16 __user *)to,
  2, ret, 2);
+ __uaccess_end();
  return ret;
  case 4:
+ __uaccess_begin();
  __put_user_size(*(u32 *)from, (u32 __user *)to,
  4, ret, 4);
+ __uaccess_end();
  return ret;
  case 8:
+ __uaccess_begin();
  __put_user_size(*(u64 *)from, (u64 __user *)to,
  8, ret, 8);
+ __uaccess_end();
  return ret;
  }
  }
@@ -103,13 +111,19 @@ __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 
  switch (n) {
  case 1:
+ __uaccess_begin();
  __get_user_size(*(u8 *)to, from, 1, ret, 1);
+ __uaccess_end();
  return ret;
  case 2:
+ __uaccess_begin();
  __get_user_size(*(u16 *)to, from, 2, ret, 2);
+ __uaccess_end();
  return ret;
  case 4:
+ __uaccess_begin();
  __get_user_size(*(u32 *)to, from, 4, ret, 4);
+ __uaccess_end();
  return ret;
  }
  }
@@ -148,13 +162,19 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
 
  switch (n) {
  case 1:
+ __uaccess_begin();
  __get_user_size(*(u8 *)to, from, 1, ret, 1);
+ __uaccess_end();
  return ret;
  case 2:
+ __uaccess_begin();
  __get_user_size(*(u16 *)to, from, 2, ret, 2);
+ __uaccess_end();
  return ret;
  case 4:
+ __uaccess_begin();
  __get_user_size(*(u32 *)to, from, 4, ret, 4);
+ __uaccess_end();
  return ret;
  }
  }
@@ -170,13 +190,19 @@ static __always_inline unsigned long __copy_from_user_nocache(void *to,
 
  switch (n) {
  case 1:
+ __uaccess_begin();
  __get_user_size(*(u8 *)to, from, 1, ret, 1);
+ __uaccess_end();
  return ret;
  case 2:
+ __uaccess_begin();
  __get_user_size(*(u16 *)to, from, 2, ret, 2);
+ __uaccess_end();
  return ret;
  case 4:
+ __uaccess_begin();
  __get_user_size(*(u32 *)to, from, 4, ret, 4);
+ __uaccess_end();
  return ret;
  }
  }
--
2.17.1



[SRU][Xenial][PATCH 3/5] x86: Introduce __uaccess_begin_nospec() and uaccess_try_nospec

Juerg Haefliger
In reply to this post by Juerg Haefliger
From: Dan Williams <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1775137

For __get_user() paths, do not allow the kernel to speculate on the value
of a user controlled pointer. In addition to the 'stac' instruction for
Supervisor Mode Access Protection (SMAP), a barrier_nospec() causes the
access_ok() result to resolve in the pipeline before the CPU might take any
speculative action on the pointer value. Given the cost of 'stac' the
speculation barrier is placed after 'stac' to hopefully overlap the cost of
disabling SMAP with the cost of flushing the instruction pipeline.

Since __get_user is a major kernel interface that deals with user
controlled pointers, the __uaccess_begin_nospec() mechanism will prevent
speculative execution past an access_ok() permission check. While
speculative execution past access_ok() is not enough to lead to a kernel
memory leak, it is a necessary precondition.

To be clear, __uaccess_begin_nospec() is addressing a class of potential
problems near __get_user() usages.

Note that while the barrier_nospec() in __uaccess_begin_nospec() is used
to protect __get_user(), pointer masking similar to array_index_nospec()
will be used for get_user() since it incorporates a bounds check near the
usage.

uaccess_try_nospec provides the same mechanism for get_user_try.

No functional changes.
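
Put differently, here is a hypothetical call site (not taken from this
series; uptr, val and err are made-up locals) showing why the ordering
matters: the barrier forces the access_ok() result to resolve before the CPU
can issue a dependent load through the user-supplied pointer.

    if (access_ok(VERIFY_READ, uptr, sizeof(val))) {
            __uaccess_begin_nospec();       /* stac(), then barrier_nospec() */
            __get_user_size(val, uptr, sizeof(val), err, -EFAULT);
            __uaccess_end();                /* clac() */
    }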

Suggested-by: Linus Torvalds <[hidden email]>
Suggested-by: Andi Kleen <[hidden email]>
Suggested-by: Ingo Molnar <[hidden email]>
Signed-off-by: Dan Williams <[hidden email]>
Signed-off-by: Thomas Gleixner <[hidden email]>
Cc: [hidden email]
Cc: Tom Lendacky <[hidden email]>
Cc: Kees Cook <[hidden email]>
Cc: [hidden email]
Cc: [hidden email]
Cc: Al Viro <[hidden email]>
Cc: [hidden email]
Link: https://lkml.kernel.org/r/151727415922.33451.5796614273104346583.stgit@...

(backported from commit b3bbfb3fb5d25776b8e3f361d2eedaabb0b496cd)
[juergh: Use current_thread_info().]
Signed-off-by: Juerg Haefliger <[hidden email]>
---
 arch/x86/include/asm/uaccess.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index e93a69f9a225..4b50bc52ea3e 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -146,6 +146,11 @@ extern int __get_user_bad(void);
 
 #define __uaccess_begin() stac()
 #define __uaccess_end()   clac()
+#define __uaccess_begin_nospec() \
+({ \
+ stac(); \
+ barrier_nospec(); \
+})
 
 /*
  * This is a type: either unsigned long, if the argument fits into
@@ -473,6 +478,10 @@ struct __large_struct { unsigned long buf[100]; };
  __uaccess_begin(); \
  barrier();
 
+#define uaccess_try_nospec do { \
+ current_thread_info()->uaccess_err = 0; \
+ __uaccess_begin_nospec(); \
+
 #define uaccess_catch(err) \
  __uaccess_end(); \
  (err) |= (current_thread_info()->uaccess_err ? -EFAULT : 0); \
--
2.17.1



[SRU][Xenial][PATCH 4/5] x86/usercopy: Replace open coded stac/clac with __uaccess_{begin, end}

Juerg Haefliger
In reply to this post by Juerg Haefliger
From: Dan Williams <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1775137

In preparation for converting some __uaccess_begin() instances to
__uaccess_begin_nospec(), make sure all 'from user' uaccess paths are
using the _begin(), _end() helpers rather than open-coded stac() and
clac().

No functional changes.
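
The payoff, shown as a sketch of what the converted functions below end up
looking like, is that the follow-on patch can harden a 'from user' path by
touching a single line:

    unsigned long __copy_from_user_ll(void *to, const void __user *from,
                                      unsigned long n)
    {
            __uaccess_begin();      /* was: stac(); the next patch turns this
                                     * into __uaccess_begin_nospec()          */
            if (movsl_is_ok(to, from, n))
                    __copy_user_zeroing(to, from, n);
            else
                    n = __copy_user_zeroing_intel(to, from, n);
            __uaccess_end();        /* was: clac() */
            return n;
    }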

Suggested-by: Ingo Molnar <[hidden email]>
Signed-off-by: Dan Williams <[hidden email]>
Signed-off-by: Thomas Gleixner <[hidden email]>
Cc: [hidden email]
Cc: Tom Lendacky <[hidden email]>
Cc: Kees Cook <[hidden email]>
Cc: [hidden email]
Cc: [hidden email]
Cc: Al Viro <[hidden email]>
Cc: [hidden email]
Cc: [hidden email]
Link: https://lkml.kernel.org/r/151727416438.33451.17309465232057176966.stgit@...

(backported from commit b5c4ae4f35325d520b230bab6eb3310613b72ac1)
[juergh:
 - Replaced some more clac/stac with __uaccess_begin/end.
 - Adjusted context.]
Signed-off-by: Juerg Haefliger <[hidden email]>
---
 arch/x86/lib/usercopy_32.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index 91d93b95bd86..5755942f5eb2 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -570,12 +570,12 @@ do { \
 unsigned long __copy_to_user_ll(void __user *to, const void *from,
  unsigned long n)
 {
- stac();
+ __uaccess_begin();
  if (movsl_is_ok(to, from, n))
  __copy_user(to, from, n);
  else
  n = __copy_user_intel(to, from, n);
- clac();
+ __uaccess_end();
  return n;
 }
 EXPORT_SYMBOL(__copy_to_user_ll);
@@ -583,12 +583,12 @@ EXPORT_SYMBOL(__copy_to_user_ll);
 unsigned long __copy_from_user_ll(void *to, const void __user *from,
  unsigned long n)
 {
- stac();
+ __uaccess_begin();
  if (movsl_is_ok(to, from, n))
  __copy_user_zeroing(to, from, n);
  else
  n = __copy_user_zeroing_intel(to, from, n);
- clac();
+ __uaccess_end();
  return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll);
@@ -596,13 +596,13 @@ EXPORT_SYMBOL(__copy_from_user_ll);
 unsigned long __copy_from_user_ll_nozero(void *to, const void __user *from,
  unsigned long n)
 {
- stac();
+ __uaccess_begin();
  if (movsl_is_ok(to, from, n))
  __copy_user(to, from, n);
  else
  n = __copy_user_intel((void __user *)to,
       (const void *)from, n);
- clac();
+ __uaccess_end();
  return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll_nozero);
@@ -610,7 +610,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nozero);
 unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
  unsigned long n)
 {
- stac();
+ __uaccess_begin();
 #ifdef CONFIG_X86_INTEL_USERCOPY
  if (n > 64 && cpu_has_xmm2)
  n = __copy_user_zeroing_intel_nocache(to, from, n);
@@ -619,7 +619,7 @@ unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
 #else
  __copy_user_zeroing(to, from, n);
 #endif
- clac();
+ __uaccess_end();
  return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll_nocache);
@@ -627,7 +627,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nocache);
 unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *from,
  unsigned long n)
 {
- stac();
+ __uaccess_begin();
 #ifdef CONFIG_X86_INTEL_USERCOPY
  if (n > 64 && cpu_has_xmm2)
  n = __copy_user_intel_nocache(to, from, n);
@@ -636,7 +636,7 @@ unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *fr
 #else
  __copy_user(to, from, n);
 #endif
- clac();
+ __uaccess_end();
  return n;
 }
 EXPORT_SYMBOL(__copy_from_user_ll_nocache_nozero);
--
2.17.1



[SRU][Xenial][PATCH 5/5] x86/uaccess: Use __uaccess_begin_nospec() and uaccess_try_nospec

Juerg Haefliger
In reply to this post by Juerg Haefliger
From: Dan Williams <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1775137

Quoting Linus:

    I do think that it would be a good idea to very expressly document
    the fact that it's not that the user access itself is unsafe. I do
    agree that things like "get_user()" want to be protected, but not
    because of any direct bugs or problems with get_user() and friends,
    but simply because get_user() is an excellent source of a pointer
    that is obviously controlled from a potentially attacking user
    space. So it's a prime candidate for then finding _subsequent_
    accesses that can then be used to perturb the cache.

__uaccess_begin_nospec() covers __get_user() and copy_from_iter() where the
limit check is far away from the user pointer de-reference. In those cases
a barrier_nospec() prevents speculation with a potential pointer to
privileged memory. uaccess_try_nospec covers get_user_try.
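
As a concrete illustration of the gadget class Linus describes (the function
and array names below are hypothetical, not code from this series), the
danger is not the user access itself but the dependent access that follows it:

    extern const int table[256];            /* hypothetical kernel array */

    int show_entry(int __user *uptr)        /* hypothetical helper */
    {
            int idx;

            if (!access_ok(VERIFY_READ, uptr, sizeof(idx)))
                    return -EFAULT;
            /* Without the speculation barrier, the CPU may speculate past
             * the check and load through an attacker-chosen pointer ...    */
            if (__get_user(idx, uptr))
                    return -EFAULT;
            /* ... and this dependent access leaves a cache footprint that
             * can reveal the speculatively loaded value.                   */
            return table[idx & 0xff];
    }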

Suggested-by: Linus Torvalds <[hidden email]>
Suggested-by: Andi Kleen <[hidden email]>
Signed-off-by: Dan Williams <[hidden email]>
Signed-off-by: Thomas Gleixner <[hidden email]>
Cc: [hidden email]
Cc: Kees Cook <[hidden email]>
Cc: [hidden email]
Cc: [hidden email]
Cc: Al Viro <[hidden email]>
Cc: [hidden email]
Link: https://lkml.kernel.org/r/151727416953.33451.10508284228526170604.stgit@...

(backported from commit 304ec1b050310548db33063e567123fae8fd0301)
[juergh:
 - Converted additional copy_from_user functions to use
   __uaccess_begin_nospec().
 - Don't use __uaccess_begin_nospec() in __copy_to_user_ll() in
   arch/x86/lib/usercopy_32.c.
 - Adjusted context.]
Signed-off-by: Juerg Haefliger <[hidden email]>
---
 arch/x86/include/asm/uaccess.h    |  6 +++---
 arch/x86/include/asm/uaccess_32.h | 18 +++++++++---------
 arch/x86/include/asm/uaccess_64.h | 20 ++++++++++----------
 arch/x86/lib/usercopy_32.c        |  8 ++++----
 4 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 4b50bc52ea3e..6f8eadf0681f 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -436,7 +436,7 @@ do { \
 ({ \
  int __gu_err; \
  unsigned long __gu_val; \
- __uaccess_begin(); \
+ __uaccess_begin_nospec(); \
  __get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT); \
  __uaccess_end(); \
  (x) = (__force __typeof__(*(ptr)))__gu_val; \
@@ -546,7 +546,7 @@ struct __large_struct { unsigned long buf[100]; };
  * get_user_ex(...);
  * } get_user_catch(err)
  */
-#define get_user_try uaccess_try
+#define get_user_try uaccess_try_nospec
 #define get_user_catch(err) uaccess_catch(err)
 
 #define get_user_ex(x, ptr) do { \
@@ -581,7 +581,7 @@ extern void __cmpxchg_wrong_size(void)
  __typeof__(ptr) __uval = (uval); \
  __typeof__(*(ptr)) __old = (old); \
  __typeof__(*(ptr)) __new = (new); \
- __uaccess_begin(); \
+ __uaccess_begin_nospec(); \
  switch (size) { \
  case 1: \
  { \
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 3fe0eac59462..db04b2cca8b8 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -111,17 +111,17 @@ __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 
  switch (n) {
  case 1:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u8 *)to, from, 1, ret, 1);
  __uaccess_end();
  return ret;
  case 2:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u16 *)to, from, 2, ret, 2);
  __uaccess_end();
  return ret;
  case 4:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u32 *)to, from, 4, ret, 4);
  __uaccess_end();
  return ret;
@@ -162,17 +162,17 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
 
  switch (n) {
  case 1:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u8 *)to, from, 1, ret, 1);
  __uaccess_end();
  return ret;
  case 2:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u16 *)to, from, 2, ret, 2);
  __uaccess_end();
  return ret;
  case 4:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u32 *)to, from, 4, ret, 4);
  __uaccess_end();
  return ret;
@@ -190,17 +190,17 @@ static __always_inline unsigned long __copy_from_user_nocache(void *to,
 
  switch (n) {
  case 1:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u8 *)to, from, 1, ret, 1);
  __uaccess_end();
  return ret;
  case 2:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u16 *)to, from, 2, ret, 2);
  __uaccess_end();
  return ret;
  case 4:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_size(*(u32 *)to, from, 4, ret, 4);
  __uaccess_end();
  return ret;
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 307698688fa1..dc2d00e7ced3 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -57,31 +57,31 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
  return copy_user_generic(dst, (__force void *)src, size);
  switch (size) {
  case 1:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(*(u8 *)dst, (u8 __user *)src,
       ret, "b", "b", "=q", 1);
  __uaccess_end();
  return ret;
  case 2:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(*(u16 *)dst, (u16 __user *)src,
       ret, "w", "w", "=r", 2);
  __uaccess_end();
  return ret;
  case 4:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(*(u32 *)dst, (u32 __user *)src,
       ret, "l", "k", "=r", 4);
  __uaccess_end();
  return ret;
  case 8:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(*(u64 *)dst, (u64 __user *)src,
       ret, "q", "", "=r", 8);
  __uaccess_end();
  return ret;
  case 10:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(*(u64 *)dst, (u64 __user *)src,
        ret, "q", "", "=r", 10);
  if (likely(!ret))
@@ -91,7 +91,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
  __uaccess_end();
  return ret;
  case 16:
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(*(u64 *)dst, (u64 __user *)src,
        ret, "q", "", "=r", 16);
  if (likely(!ret))
@@ -190,7 +190,7 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
  switch (size) {
  case 1: {
  u8 tmp;
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(tmp, (u8 __user *)src,
        ret, "b", "b", "=q", 1);
  if (likely(!ret))
@@ -201,7 +201,7 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
  }
  case 2: {
  u16 tmp;
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(tmp, (u16 __user *)src,
        ret, "w", "w", "=r", 2);
  if (likely(!ret))
@@ -213,7 +213,7 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
 
  case 4: {
  u32 tmp;
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(tmp, (u32 __user *)src,
        ret, "l", "k", "=r", 4);
  if (likely(!ret))
@@ -224,7 +224,7 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
  }
  case 8: {
  u64 tmp;
- __uaccess_begin();
+ __uaccess_begin_nospec();
  __get_user_asm(tmp, (u64 __user *)src,
        ret, "q", "", "=r", 8);
  if (likely(!ret))
diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index 5755942f5eb2..79e5616e3b28 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -583,7 +583,7 @@ EXPORT_SYMBOL(__copy_to_user_ll);
 unsigned long __copy_from_user_ll(void *to, const void __user *from,
  unsigned long n)
 {
- __uaccess_begin();
+ __uaccess_begin_nospec();
  if (movsl_is_ok(to, from, n))
  __copy_user_zeroing(to, from, n);
  else
@@ -596,7 +596,7 @@ EXPORT_SYMBOL(__copy_from_user_ll);
 unsigned long __copy_from_user_ll_nozero(void *to, const void __user *from,
  unsigned long n)
 {
- __uaccess_begin();
+ __uaccess_begin_nospec();
  if (movsl_is_ok(to, from, n))
  __copy_user(to, from, n);
  else
@@ -610,7 +610,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nozero);
 unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
  unsigned long n)
 {
- __uaccess_begin();
+ __uaccess_begin_nospec();
 #ifdef CONFIG_X86_INTEL_USERCOPY
  if (n > 64 && cpu_has_xmm2)
  n = __copy_user_zeroing_intel_nocache(to, from, n);
@@ -627,7 +627,7 @@ EXPORT_SYMBOL(__copy_from_user_ll_nocache);
 unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *from,
  unsigned long n)
 {
- __uaccess_begin();
+ __uaccess_begin_nospec();
 #ifdef CONFIG_X86_INTEL_USERCOPY
  if (n > 64 && cpu_has_xmm2)
  n = __copy_user_intel_nocache(to, from, n);
--
2.17.1



ACK/cmnt: [SRU][Xenial][PATCH 0/5] Prevent speculation on user controlled pointer (LP #1775137)

Stefan Bader
In reply to this post by Juerg Haefliger
Looking at the patches, they seem to match what they claim to do. I am just
wondering whether there would be a slightly better way to point out backport
decisions like the "don't use <something> in <function>" one in the last patch.
Maybe that could be a comment in the associated bug report?

But anyway,

Acked-by: Stefan Bader <[hidden email]>




ACK: [SRU][Xenial][PATCH 0/5] Prevent speculation on user controlled pointer (LP #1775137)

Kleber Sacilotto de Souza
In reply to this post by Juerg Haefliger

Acked-by: Kleber Sacilotto de Souza <[hidden email]>


Re: [SRU][Xenial][PATCH 0/5] Prevent speculation on user controlled pointer (LP #1775137)

Juerg Haefliger
In reply to this post by Juerg Haefliger
Please hold off on applying this. It needs to come after the patchset below,
which (amongst other things) introduces the barrier_nospec macro:

[SRU][Xenial][PULL] Update to upstream's implementation of Spectre v1
mitigation (LP: #1774181)
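
For reference, the macro in question is the x86 speculation barrier added by
that series; going from memory of the upstream Spectre v1 backports (so treat
this as an approximation rather than the authoritative definition), it is
along the lines of:

    /* arch/x86/include/asm/barrier.h (approximate) */
    #define barrier_nospec() alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC, \
                                               "lfence", X86_FEATURE_LFENCE_RDTSC)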

...Juerg





APPLIED: [SRU][Xenial][PATCH 0/5] Prevent speculation on user controlled pointer (LP #1775137)

Khaled Elmously
In reply to this post by Juerg Haefliger
Applied to xenial




Re: APPLIED: [SRU][Xenial][PATCH 0/5] Prevent speculation on user controlled pointer (LP #1775137)

Khaled Elmously
Juerg

I applied this patchset to Xenial, but the bug also affects Trusty.

Was this supposed to have been applied to Trusty too? Is there another patchset for Trusty coming?



On 2018-06-07 18:06:42 , Khaled Elmously wrote:

> Applied to xenial
