[SRU][Bionic][Cosmic][PATCH 0/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model


[SRU][Bionic][Cosmic][PATCH 0/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Joseph Salisbury
BugLink: http://bugs.launchpad.net/bugs/1771844

== SRU Justification ==
Livepatch has a consistency model which is a hybrid of kGraft and kpatch:
it uses kGraft's per-task consistency and syscall barrier switching
combined with kpatch's stack trace switching. The current approach is
stack checking of sleeping tasks: if no affected functions are on the
stack of a given task, the task is patched. In most cases this will patch
most or all of the tasks on the first try; otherwise, it keeps trying
periodically. This patch implements reliable stack tracing for the
consistency model, a.k.a. HAVE_RELIABLE_STACKTRACE.

This will help switch the livepatching implementation to the basic
per-task consistency model. It is the foundation that will let us enable
security patches that change function or data semantics. This is the biggest
remaining piece needed on ppc64le to make livepatch more generally useful.
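
To make this concrete, the consistency-model check can be sketched as
follows. This is illustrative only: the function and parameter names are
assumptions, loosely following the in-kernel livepatch transition code
(kernel/livepatch/transition.c), and the match is simplified to exact
addresses where the real code compares address ranges.

/* Illustrative sketch -- names are assumptions, not kernel code. */
static int klp_try_switch_task_sketch(struct task_struct *task,
				      unsigned long *patched_addrs,
				      int nr_addrs)
{
	static unsigned long entries[128];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= ARRAY_SIZE(entries),
		.entries	= entries,
		.skip		= 0,
	};
	int i, j;

	/* Non-zero means the trace could not be verified; retry later. */
	if (save_stack_trace_tsk_reliable(task, &trace))
		return -EBUSY;

	/* Any patched function still on the stack blocks the switch. */
	for (i = 0; i < trace.nr_entries; i++)
		for (j = 0; j < nr_addrs; j++)
			if (trace.entries[i] == patched_addrs[j])
				return -EBUSY;

	return 0;	/* safe to switch this task to the patched state */
}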

== Fix ==
df78d3f61480 ("powerpc/livepatch: Implement reliable stack tracing for the consistency model")

== Regression Potential ==
Low. Limited to powerpc.

== Test Case ==
A test kernel was built with this patch and tested by the original bug
reporter, who confirmed that the test kernel resolved the bug.


Torsten Duwe (1):
  powerpc/livepatch: Implement reliable stack tracing for the
    consistency model

 arch/powerpc/Kconfig             |   1 +
 arch/powerpc/kernel/stacktrace.c | 119 ++++++++++++++++++++++++++++++-
 2 files changed, 119 insertions(+), 1 deletion(-)

--
2.17.0



[SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Joseph Salisbury
From: Torsten Duwe <[hidden email]>

BugLink: http://bugs.launchpad.net/bugs/1771844

The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:

[...] There are several rules that must be adhered to in order to ensure
reliable and consistent call chain backtracing:

* Before a function calls any other function, it shall establish its
  own stack frame, whose size shall be a multiple of 16 bytes.

 – In instances where a function’s prologue creates a stack frame, the
   back-chain word of the stack frame shall be updated atomically with
   the value of the stack pointer (r1) when a back chain is implemented.
   (This must be supported as default by all ELF V2 ABI-compliant
   environments.)
[...]
 – The function shall save the link register that contains its return
   address in the LR save doubleword of its caller’s stack frame before
   calling another function.

To me this sounds like the equivalent of HAVE_RELIABLE_STACKTRACE.
This patch may be unnecessarily limited to ppc64le, but OTOH the only
user of this flag so far is livepatching, which is only implemented on
PPCs with 64-LE, a.k.a. ELF ABI v2.

Feel free to add other ppc variants, but so far only ppc64le got tested.

This change also implements save_stack_trace_tsk_reliable() for ppc64le
that checks for the above conditions, where possible.
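
For illustration (not part of the patch), the fixed frame header those
rules guarantee can be pictured as a struct. The struct name is made up,
but the offsets match the ELF V2 ABI and the kernel's STACK_FRAME_LR_SAVE
and STACK_FRAME_MIN_SIZE values for ppc64le:

/* Illustration only: fixed header of an ELF V2 ABI stack frame. */
struct elfv2_frame_header {
	unsigned long back_chain;	/* SP+0:  caller's r1, stored atomically */
	unsigned long cr_save;		/* SP+8:  CR save word (doubleword slot) */
	unsigned long lr_save;		/* SP+16: LR save doubleword, i.e.
					 *        stack[STACK_FRAME_LR_SAVE]   */
	unsigned long toc_save;		/* SP+24: TOC pointer doubleword */
};					/* 32 bytes == STACK_FRAME_MIN_SIZE */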

Signed-off-by: Torsten Duwe <[hidden email]>
Signed-off-by: Nicolai Stange <[hidden email]>
Acked-by: Josh Poimboeuf <[hidden email]>
Signed-off-by: Michael Ellerman <[hidden email]>
(cherry picked from linux-next commit df78d3f6148092d33a9a24c7a9cfac3d0220b484)
Signed-off-by: Joseph Salisbury <[hidden email]>
---
 arch/powerpc/Kconfig             |   1 +
 arch/powerpc/kernel/stacktrace.c | 119 ++++++++++++++++++++++++++++++-
 2 files changed, 119 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index c32a181a7cbb..54f1daf4f9e5 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -220,6 +220,7 @@ config PPC
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
+	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_VIRT_CPU_ACCOUNTING
 	select HAVE_IRQ_TIME_ACCOUNTING
diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
index d534ed901538..26a50603177c 100644
--- a/arch/powerpc/kernel/stacktrace.c
+++ b/arch/powerpc/kernel/stacktrace.c
@@ -2,7 +2,7 @@
  * Stack trace utility
  *
  * Copyright 2008 Christoph Hellwig, IBM Corp.
- *
+ * Copyright 2018 SUSE Linux GmbH
  *
  *      This program is free software; you can redistribute it and/or
  *      modify it under the terms of the GNU General Public License
@@ -11,11 +11,16 @@
  */
 
 #include <linux/export.h>
+#include <linux/kallsyms.h>
+#include <linux/module.h>
 #include <linux/sched.h>
 #include <linux/sched/debug.h>
+#include <linux/sched/task_stack.h>
 #include <linux/stacktrace.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
+#include <linux/ftrace.h>
+#include <asm/kprobes.h>
 
 /*
  * Save stack-backtrace addresses into a stack_trace buffer.
@@ -76,3 +81,115 @@ save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 	save_context_stack(trace, regs->gpr[1], current, 0);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+int
+save_stack_trace_tsk_reliable(struct task_struct *tsk,
+			      struct stack_trace *trace)
+{
+	unsigned long sp;
+	unsigned long stack_page = (unsigned long)task_stack_page(tsk);
+	unsigned long stack_end;
+	int graph_idx = 0;
+
+	/*
+	 * The last frame (unwinding first) may not yet have saved
+	 * its LR onto the stack.
+	 */
+	int firstframe = 1;
+
+	if (tsk == current)
+		sp = current_stack_pointer();
+	else
+		sp = tsk->thread.ksp;
+
+	stack_end = stack_page + THREAD_SIZE;
+	if (!is_idle_task(tsk)) {
+		/*
+		 * For user tasks, this is the SP value loaded on
+		 * kernel entry, see "PACAKSAVE(r13)" in _switch() and
+		 * system_call_common()/EXCEPTION_PROLOG_COMMON().
+		 *
+		 * Likewise for non-swapper kernel threads,
+		 * this also happens to be the top of the stack
+		 * as setup by copy_thread().
+		 *
+		 * Note that stack backlinks are not properly setup by
+		 * copy_thread() and thus, a forked task() will have
+		 * an unreliable stack trace until it's been
+		 * _switch()'ed to for the first time.
+		 */
+		stack_end -= STACK_FRAME_OVERHEAD + sizeof(struct pt_regs);
+	} else {
+		/*
+		 * idle tasks have a custom stack layout,
+		 * c.f. cpu_idle_thread_init().
+		 */
+		stack_end -= STACK_FRAME_OVERHEAD;
+	}
+
+	if (sp < stack_page + sizeof(struct thread_struct) ||
+	    sp > stack_end - STACK_FRAME_MIN_SIZE) {
+		return 1;
+	}
+
+	for (;;) {
+		unsigned long *stack = (unsigned long *) sp;
+		unsigned long newsp, ip;
+
+		/* sanity check: ABI requires SP to be aligned 16 bytes. */
+		if (sp & 0xF)
+			return 1;
+
+		/* Mark stacktraces with exception frames as unreliable. */
+		if (sp <= stack_end - STACK_INT_FRAME_SIZE &&
+		    stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
+			return 1;
+		}
+
+		newsp = stack[0];
+		/* Stack grows downwards; unwinder may only go up. */
+		if (newsp <= sp)
+			return 1;
+
+		if (newsp != stack_end &&
+		    newsp > stack_end - STACK_FRAME_MIN_SIZE) {
+			return 1; /* invalid backlink, too far up. */
+		}
+
+		/* Examine the saved LR: it must point into kernel code. */
+		ip = stack[STACK_FRAME_LR_SAVE];
+		if (!firstframe && !__kernel_text_address(ip))
+			return 1;
+		firstframe = 0;
+
+		/*
+		 * FIXME: IMHO these tests do not belong in
+		 * arch-dependent code, they are generic.
+		 */
+		ip = ftrace_graph_ret_addr(tsk, &graph_idx, ip, NULL);
+
+		/*
+		 * Mark stacktraces with kretprobed functions on them
+		 * as unreliable.
+		 */
+		if (ip == (unsigned long)kretprobe_trampoline)
+			return 1;
+
+		if (!trace->skip)
+			trace->entries[trace->nr_entries++] = ip;
+		else
+			trace->skip--;
+
+		if (newsp == stack_end)
+			break;
+
+		if (trace->nr_entries >= trace->max_entries)
+			return -E2BIG;
+
+		sp = newsp;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(save_stack_trace_tsk_reliable);
+#endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */
--
2.17.0
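
For reference, a minimal caller of the new interface would look like the
sketch below (illustrative only, not from this series). Note the return
convention visible in the patch: 0 on success, 1 when any frame cannot be
trusted, and -E2BIG when the entries buffer is too small.

/* Illustrative sketch -- names are assumptions, not kernel code. */
static bool task_stack_is_reliable(struct task_struct *tsk)
{
	static unsigned long entries[64];
	struct stack_trace trace = {
		.nr_entries	= 0,
		.max_entries	= ARRAY_SIZE(entries),
		.entries	= entries,
		.skip		= 0,
	};
	int ret = save_stack_trace_tsk_reliable(tsk, &trace);

	/* ret == 1: unreliable frame found; ret == -E2BIG: buffer too small. */
	return ret == 0;
}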



ACK: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Stefan Bader
On 23.05.2018 10:16, Joseph Salisbury wrote:

> From: Torsten Duwe <[hidden email]>
>
> BugLink: http://bugs.launchpad.net/bugs/1771844
>
> [...]
> Signed-off-by: Joseph Salisbury <[hidden email]>
Acked-by: Stefan Bader <[hidden email]>




Re: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Michael Ellerman
Joseph Salisbury <[hidden email]> writes:

> diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
> index d534ed901538..26a50603177c 100644
> --- a/arch/powerpc/kernel/stacktrace.c
> +++ b/arch/powerpc/kernel/stacktrace.c
> @@ -76,3 +81,115 @@ save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
>   save_context_stack(trace, regs->gpr[1], current, 0);
>  }
>  EXPORT_SYMBOL_GPL(save_stack_trace_regs);
> +
> +#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
> +int
> +save_stack_trace_tsk_reliable(struct task_struct *tsk,
> + struct stack_trace *trace)
> +{
...
> + /*
> + * Mark stacktraces with kretprobed functions on them
> + * as unreliable.
> + */
> + if (ip == (unsigned long)kretprobe_trampoline)
> + return 1;

You may also want:

5e3f0d15ae5f ("powerpc/livepatch: Fix build error with kprobes disabled.")

cheers
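
For reference, that commit compiles the kretprobe check out when kprobes
support is disabled, since kretprobe_trampoline is only defined with
CONFIG_KPROBES. A sketch of the shape of that fix (the exact hunk may
differ):

#ifdef CONFIG_KPROBES
	/*
	 * Mark stacktraces with kretprobed functions on them
	 * as unreliable.
	 */
	if (ip == (unsigned long)kretprobe_trampoline)
		return 1;
#endif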


Re: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Joseph Salisbury
On 06/04/2018 10:11 PM, Michael Ellerman wrote:

> Joseph Salisbury <[hidden email]> writes:
>> diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c
>> index d534ed901538..26a50603177c 100644
>> --- a/arch/powerpc/kernel/stacktrace.c
>> +++ b/arch/powerpc/kernel/stacktrace.c
>> @@ -76,3 +81,115 @@ save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
>>   save_context_stack(trace, regs->gpr[1], current, 0);
>>  }
>>  EXPORT_SYMBOL_GPL(save_stack_trace_regs);
>> +
>> +#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
>> +int
>> +save_stack_trace_tsk_reliable(struct task_struct *tsk,
>> + struct stack_trace *trace)
>> +{
> ...
>> + /*
>> + * Mark stacktraces with kretprobed functions on them
>> + * as unreliable.
>> + */
>> + if (ip == (unsigned long)kretprobe_trampoline)
>> + return 1;
> You may also want:
>
> 5e3f0d15ae5f ("powerpc/livepatch: Fix build error with kprobes disabled.")
>
> cheers

Thanks for the feedback, Michael.  Would it be possible for you to post
this comment to the bug?  That way we can get feedback from the original
bug reporter and IBM.


Thanks,


Joe




ACK: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Kleber Souza
On 05/23/18 10:16, Joseph Salisbury wrote:

> From: Torsten Duwe <[hidden email]>
>
> BugLink: http://bugs.launchpad.net/bugs/1771844
>
> [...]
> Signed-off-by: Joseph Salisbury <[hidden email]>

Acked-by: Kleber Sacilotto de Souza <[hidden email]>



Re: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Khaled Elmously
On 2018-06-05 11:52:15 , Joseph Salisbury wrote:

> On 06/04/2018 10:11 PM, Michael Ellerman wrote:
> > Joseph Salisbury <[hidden email]> writes:
> >> [...]
> > You may also want:
> >
> > 5e3f0d15ae5f ("powerpc/livepatch: Fix build error with kprobes disabled.")
> >
> > cheers
>
> Thanks for the feedback, Michael.  Would it be possible for you to post
> this comment to the bug?  That way we can get feedback from the original
> bug reporter and IBM.
>
>
> Thanks,
>
>
> Joe

Joe, I did not apply this patch as it is not clear to me if a V2 is on the way or if it needs more work. Please advise.

Thanks

Khalid



Re: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Joseph Salisbury
On 06/07/2018 01:37 PM, Khaled Elmously wrote:

> On 2018-06-05 11:52:15 , Joseph Salisbury wrote:
>> On 06/04/2018 10:11 PM, Michael Ellerman wrote:
>>> Joseph Salisbury <[hidden email]> writes:
>>>> [...]
>>> You may also want:
>>>
>>> 5e3f0d15ae5f ("powerpc/livepatch: Fix build error with kprobes disabled.")
>>>
>>> cheers
>> Thanks for the feedback, Michael.  Would it be possible for you to post
>> this comment to the bug?  That way we can get feedback from the original
>> bug reporter and IBM.
>>
>>
>> Thanks,
>>
>>
>> Joe
> Joe, I did not apply this patch as it is not clear to me if a V2 is on the way or if it needs more work. Please advise.
>
> Thanks
>
> Khalid
I don't plan on sending a V2.  IBM tested what is in this SRU request
and it was confirmed to fix the bug for them.  I would say apply what is
in this request.

Michael, it's probably best to open a new bug if you believe additional
commits are needed.





Re: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Khaled Elmously

On 2018-06-07 15:03:25 , Joseph Salisbury wrote:

> On 06/07/2018 01:37 PM, Khaled Elmously wrote:
> > On 2018-06-05 11:52:15 , Joseph Salisbury wrote:
> >> On 06/04/2018 10:11 PM, Michael Ellerman wrote:
> >>> Joseph Salisbury <[hidden email]> writes:
> >>>> [...]
> >>> You may also want:
> >>>
> >>> 5e3f0d15ae5f ("powerpc/livepatch: Fix build error with kprobes disabled.")
> >>>
> >>> cheers
> >> Thanks for the feedback, Michael.  Would it be possible for you to post
> >> this comment to the bug?  That way we can get feedback from the original
> >> bug reporter and IBM.
> >>
> >>
> >> Thanks,
> >>
> >>
> >> Joe
> > Joe, I did not apply this patch as it is not clear to me if a V2 is on the way or if it needs more work. Please advise.
> >
> > Thanks
> >
> > Khalid
> I don't plan on sending a V2.  IBM tested what is in this SRU request
> and it was confirmed to fix the bug for them.  I would say apply what is
> in this request.
>
> Michael, it's probably best to open a new bug if you believe additional
> commits are needed.

Cool, thanks



APPLIED: [SRU][Bionic][Cosmic][PATCH 0/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Khaled Elmously
Applied to Bionic


On 2018-05-23 13:16:25 , Joseph Salisbury wrote:

> BugLink: http://bugs.launchpad.net/bugs/1771844
> [...]


Re: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Michael Ellerman
Joseph Salisbury <[hidden email]> writes:

> On 06/07/2018 01:37 PM, Khaled Elmously wrote:
>> On 2018-06-05 11:52:15 , Joseph Salisbury wrote:
>>> On 06/04/2018 10:11 PM, Michael Ellerman wrote:
>>>> Joseph Salisbury <[hidden email]> writes:
>>>>> [...]
>>>> You may also want:
>>>>
>>>> 5e3f0d15ae5f ("powerpc/livepatch: Fix build error with kprobes disabled.")
>>>>
>>> Thanks for the feedback, Michael.  Would it be possible for you to post
>>> this comment to the bug?  That way we can get feedback from the original
>>> bug reporter and IBM.
>>>
>> Joe, I did not apply this patch as it is not clear to me if a V2 is on the way or if it needs more work. Please advise.
>
> I don't plan on sending a V2.  IBM tested what is in this SRU request
> and it was confirmed to fix the bug for them.  I would say apply what is
> in this request.
>
> Michael, it's probably best to open a new bug if you believe additional
> commits are needed.

OK, it just fixes a build break under some configs, so if you haven't
hit it then you probably don't need the fix.

cheers


APPLIED[U]: [SRU][Bionic][Cosmic][PATCH 1/1] powerpc/livepatch: Implement reliable stack tracing for the consistency model

Thadeu Lima de Souza Cascardo
Applied to unstable master branch.

Thanks.
Cascardo.

Applied-to: unstable/master
