[v2][SRU Artful/Bionic][Patch 0/1] fix false positives in W+X checking



Manoj Iyer
Please consider this patch for Bionic and Artful. On ARM64 systems from
Cavium and Qualcomm we see random false-positive warnings from the W+X
check while booting, such as:
"arm64/mm: Found insecure W+X mapping at address 0000000000a99000/0xa99000"

This patch cherry-picks cleanly onto bionic and artful. I previously had
the same patch cherry-picked from linux-next and tested it on QTI and
Cavium systems; I am resending it here as a cherry-pick from Linus' tree.

I will submit a backport of this patch for xenial shortly.



[PATCH] init: fix false positives in W+X checking

Manoj Iyer
From: Jeffrey Hugo <[hidden email]>

load_module() creates W+X mappings via __vmalloc_node_range() (from
layout_and_allocate()->move_module()->module_alloc()) by using
PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
"call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().

This is a problem because call_rcu_sched() queues work, which can be run
after debug_checkwx() is run, resulting in a race condition.  If hit,
the race results in a nasty splat about insecure W+X mappings, which
results in a poor user experience as these are not the mappings that
debug_checkwx() is intended to catch.

This issue is observed on multiple arm64 platforms, and has been
artificially triggered on an x86 platform.

Address the race by flushing the queued work before running the
arch-defined mark_rodata_ro() which then calls debug_checkwx().

Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")

BugLink: https://launchpad.net/bugs/1769696

Signed-off-by: Jeffrey Hugo <[hidden email]>
Reported-by: Timur Tabi <[hidden email]>
Reported-by: Jan Glauber <[hidden email]>
Acked-by: Kees Cook <[hidden email]>
Acked-by: Ingo Molnar <[hidden email]>
Acked-by: Will Deacon <[hidden email]>
Acked-by: Laura Abbott <[hidden email]>
Cc: Mark Rutland <[hidden email]>
Cc: Ard Biesheuvel <[hidden email]>
Cc: Catalin Marinas <[hidden email]>
Cc: Stephen Smalley <[hidden email]>
Cc: Thomas Gleixner <[hidden email]>
Cc: Peter Zijlstra <[hidden email]>
Signed-off-by: Andrew Morton <[hidden email]>
Signed-off-by: Linus Torvalds <[hidden email]>
(cherry picked from commit ae646f0b9ca135b87bc73ff606ef996c3029780a)
Signed-off-by: Manoj Iyer <[hidden email]>
---
 init/main.c     | 7 +++++++
 kernel/module.c | 5 +++++
 2 files changed, 12 insertions(+)

diff --git a/init/main.c b/init/main.c
index b8b121c17ff1..44f88af9b191 100644
--- a/init/main.c
+++ b/init/main.c
@@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
 static void mark_readonly(void)
 {
 	if (rodata_enabled) {
+		/*
+		 * load_module() results in W+X mappings, which are cleaned up
+		 * with call_rcu_sched().  Let's make sure that queued work is
+		 * flushed so that we don't hit false positives looking for
+		 * insecure pages which are W+X.
+		 */
+		rcu_barrier_sched();
 		mark_rodata_ro();
 		rodata_test();
 	} else
diff --git a/kernel/module.c b/kernel/module.c
index 2612f760df84..0da7f3468350 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module *mod)
 	 * walking this with preempt disabled.  In all the failure paths, we
 	 * call synchronize_sched(), but we don't want to slow down the success
 	 * path, so use actual RCU here.
+	 * Note that module_alloc() on most architectures creates W+X page
+	 * mappings which won't be cleaned up until do_free_init() runs.  Any
+	 * code such as mark_rodata_ro() which depends on those mappings to
+	 * be cleaned up needs to sync with the queued work - ie
+	 * rcu_barrier_sched()
 	 */
 	call_rcu_sched(&freeinit->rcu, do_free_init);
 	mutex_unlock(&module_mutex);
--
2.17.0
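
A side note on the mechanics, for anyone not steeped in RCU: the sketch
below is a minimal userspace analogy of the race and of the fix, not
kernel code. queue_cleanup(), flush_deferred_work() and check_wx() are
illustrative stand-ins for call_rcu_sched(&freeinit->rcu, do_free_init),
rcu_barrier_sched() and the debug_checkwx() walk at the end of
mark_rodata_ro(); the only point it demonstrates is that deferred
cleanup has to be flushed before the checker inspects the state that
the cleanup tears down.

/* Userspace analogy only -- none of this is kernel code. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int wx_mapping_present = 1;	/* the temporary W+X module mapping */
static pthread_t deferred;

static void *cleanup(void *unused)	/* ~ do_free_init() */
{
	(void)unused;
	usleep(10000);			/* the deferred callback runs "later" */
	wx_mapping_present = 0;		/* W+X mapping torn down */
	return NULL;
}

static void queue_cleanup(void)		/* ~ call_rcu_sched() */
{
	pthread_create(&deferred, NULL, cleanup, NULL);
}

static void flush_deferred_work(void)	/* ~ rcu_barrier_sched() */
{
	pthread_join(deferred, NULL);
}

static void check_wx(void)		/* ~ debug_checkwx() */
{
	if (wx_mapping_present)
		puts("Found insecure W+X mapping (false positive)");
	else
		puts("no W+X mappings");
}

int main(void)
{
	queue_cleanup();
	flush_deferred_work();		/* the fix: flush before checking */
	check_wx();
	return 0;
}

Built with gcc -pthread this always reports "no W+X mappings"; dropping
the flush_deferred_work() call reopens the window in which the checker
can still see the not-yet-freed mapping, which is exactly the race the
patch closes by calling rcu_barrier_sched() in mark_readonly().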


ACK: [PATCH] init: fix false positives in W+X checking

Kleber Souza
On 05/15/18 16:36, Manoj Iyer wrote:

> From: Jeffrey Hugo <[hidden email]>
>
> load_module() creates W+X mappings via __vmalloc_node_range() (from
> layout_and_allocate()->move_module()->module_alloc()) by using
> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>
> This is a problem because call_rcu_sched() queues work, which can be run
> after debug_checkwx() is run, resulting in a race condition.  If hit,
> the race results in a nasty splat about insecure W+X mappings, which
> results in a poor user experience as these are not the mappings that
> debug_checkwx() is intended to catch.
>
> This issue is observed on multiple arm64 platforms, and has been
> artificially triggered on an x86 platform.
>
> Address the race by flushing the queued work before running the
> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>
> Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
>
> BugLink: https://launchpad.net/bugs/1769696
>
> Signed-off-by: Jeffrey Hugo <[hidden email]>
> Reported-by: Timur Tabi <[hidden email]>
> Reported-by: Jan Glauber <[hidden email]>
> Acked-by: Kees Cook <[hidden email]>
> Acked-by: Ingo Molnar <[hidden email]>
> Acked-by: Will Deacon <[hidden email]>
> Acked-by: Laura Abbott <[hidden email]>
> Cc: Mark Rutland <[hidden email]>
> Cc: Ard Biesheuvel <[hidden email]>
> Cc: Catalin Marinas <[hidden email]>
> Cc: Stephen Smalley <[hidden email]>
> Cc: Thomas Gleixner <[hidden email]>
> Cc: Peter Zijlstra <[hidden email]>
> Signed-off-by: Andrew Morton <[hidden email]>
> Signed-off-by: Linus Torvalds <[hidden email]>
> (cherry picked from commit ae646f0b9ca135b87bc73ff606ef996c3029780a)
> Signed-off-by: Manoj Iyer <[hidden email]>

Clean cherry-pick from upstream:

Acked-by: Kleber Sacilotto de Souza <[hidden email]>


APPLIED/cmnt: [v2][SRU Artful/Bionic][Patch 0/1] fix false positives in W+X checking

Kleber Souza
On 05/15/18 16:36, Manoj Iyer wrote:

> Please consider this patch for Bionic and Artful. On ARM64 systems from
> Cavium and Qualcomm we see random false-positive warnings from the W+X
> check while booting, such as:
> "arm64/mm: Found insecure W+X mapping at address 0000000000a99000/0xa99000"
>
> This patch cherry-picks cleanly onto bionic and artful. I previously had
> the same patch cherry-picked from linux-next and tested it on QTI and
> Cavium systems; I am resending it here as a cherry-pick from Linus' tree.
>
> I will submit a backport of this patch for xenial shortly.

Applied to the artful/master-next and bionic/master-next branches, also
carrying over the ACK Joseph Salisbury gave on v1 (the only change in v2
is the upstream sha1 reference).

Thanks,
Kleber
