[SRU][Bionic/Artful] fix false positives in W+X checking

Manoj Iyer
Please consider this patch for Bionic and Artful, and apply it to Cosmic as
well. On ARM64 systems from Cavium and Qualcomm we see random false-positive
warning messages from the W+X check, e.g. "arm64/mm: Found insecure W+X mapping at address 0000000000a99000/0xa99000", while booting.

A kernel with the upstream patch is available in ppa:manjo/lp1769696;
the patch was cleanly cherry-picked from linux-next onto Bionic and also
applies cleanly to Artful. I tested the kernel on a QTI QDF2400 and a
Cavium ThunderX system. Note that the warning cannot be reliably and
consistently reproduced; I did not see it after repeated reboots with
stock Bionic either.




[PATCH] init: fix false positives in W+X checking

Manoj Iyer
From: Jeffrey Hugo <[hidden email]>

load_module() creates W+X mappings via __vmalloc_node_range() (from
layout_and_allocate()->move_module()->module_alloc()) by using
PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
"call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().

This is a problem because call_rcu_sched() queues work, which can be run
after debug_checkwx() is run, resulting in a race condition.  If hit, the
race results in a nasty splat about insecure W+X mappings, which results
in a poor user experience as these are not the mappings that
debug_checkwx() is intended to catch.

This issue is observed on multiple arm64 platforms, and has been
artificially triggered on an x86 platform.

Address the race by flushing the queued work before running the
arch-defined mark_rodata_ro() which then calls debug_checkwx().

BugLink: https://launchpad.net/bugs/1769696

Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
Signed-off-by: Jeffrey Hugo <[hidden email]>
Reported-by: Timur Tabi <[hidden email]>
Reported-by: Jan Glauber <[hidden email]>
Acked-by: Kees Cook <[hidden email]>
Acked-by: Ingo Molnar <[hidden email]>
Acked-by: Will Deacon <[hidden email]>
Acked-by: Laura Abbott <[hidden email]>
Cc: Mark Rutland <[hidden email]>
Cc: Ard Biesheuvel <[hidden email]>
Cc: Catalin Marinas <[hidden email]>
Cc: Stephen Smalley <[hidden email]>
Cc: Thomas Gleixner <[hidden email]>
Cc: Peter Zijlstra <[hidden email]>
Signed-off-by: Andrew Morton <[hidden email]>
Signed-off-by: Stephen Rothwell <[hidden email]>
(cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
linux-next)
Signed-off-by: Manoj Iyer <[hidden email]>
---
 init/main.c     | 7 +++++++
 kernel/module.c | 5 +++++
 2 files changed, 12 insertions(+)

diff --git a/init/main.c b/init/main.c
index b8b121c17ff1..44f88af9b191 100644
--- a/init/main.c
+++ b/init/main.c
@@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
 static void mark_readonly(void)
 {
  if (rodata_enabled) {
+ /*
+ * load_module() results in W+X mappings, which are cleaned up
+ * with call_rcu_sched().  Let's make sure that queued work is
+ * flushed so that we don't hit false positives looking for
+ * insecure pages which are W+X.
+ */
+ rcu_barrier_sched();
  mark_rodata_ro();
  rodata_test();
  } else
diff --git a/kernel/module.c b/kernel/module.c
index 2612f760df84..0da7f3468350 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module *mod)
  * walking this with preempt disabled.  In all the failure paths, we
  * call synchronize_sched(), but we don't want to slow down the success
  * path, so use actual RCU here.
+ * Note that module_alloc() on most architectures creates W+X page
+ * mappings which won't be cleaned up until do_free_init() runs.  Any
+ * code such as mark_rodata_ro() which depends on those mappings to
+ * be cleaned up needs to sync with the queued work - ie
+ * rcu_barrier_sched()
  */
  call_rcu_sched(&freeinit->rcu, do_free_init);
  mutex_unlock(&module_mutex);
--
2.17.0
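
To make the race easier to follow, here is a rough ordering sketch. It is an
illustration only, not part of the patch; all function names are taken from
the commit message and diff above.

    Module load during boot:
      load_module()
        layout_and_allocate() -> move_module() -> module_alloc()
                                 /* W+X mapping created (PAGE_KERNEL_EXEC) */
      do_init_module()
        call_rcu_sched(&freeinit->rcu, do_free_init)
                                 /* cleanup of that mapping is only queued */

    Later in boot, without the fix:
      mark_readonly()
        mark_rodata_ro()
          debug_checkwx()        /* can run before do_free_init(), still sees
                                    the W+X mapping -> false-positive splat */

    With the fix:
      mark_readonly()
        rcu_barrier_sched()      /* waits for the queued do_free_init()
                                    callbacks to complete */
        mark_rodata_ro()
          debug_checkwx()        /* module init mappings already torn down */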



Re: [SRU][Bionic/Artful] fix false positives in W+X checking

Paolo Pisati
In reply to this post by Manoj Iyer
On Tue, May 8, 2018 at 6:24 PM, Manoj Iyer <[hidden email]> wrote:

> Please consider this patch to Bionic, Artful and apply to Cosmic. On
> ARM64 system from Cavium and Qualcomm we see random false positive
> warning messages wrt W+X checking.  "arm64/mm: Found insecure W+X mapping at address 0000000000a99000/0xa99000" while booting.
>
> A kernel with the upstream patch is avaliable in ppa:manjo/lp1769696,
> the patch was cleanly cherry-picked from linux-next on to bionic and
> also cleanly applies to Artful. I tested the kernel on a QTI QDF2400 and
> Cavium ThunderX system. Since we cannot reliably and consistently
> reproduce the warning, I did not see the warning after doing repeated
> reboots with stock bionic.

Which patch?
There's no patch attached to this message; please resend it following the SRU
guidelines:

https://wiki.ubuntu.com/Kernel/Dev/StablePatchFormat

--
bye,
p.


Re: [SRU][Bionic/Artful] fix false positives in W+X checking

Manoj Iyer
On Wed, 9 May 2018, Paolo Pisati wrote:

> On Tue, May 8, 2018 at 6:24 PM, Manoj Iyer <[hidden email]> wrote:
>> Please consider this patch to Bionic, Artful and apply to Cosmic. On
>> ARM64 system from Cavium and Qualcomm we see random false positive
>> warning messages wrt W+X checking.  "arm64/mm: Found insecure W+X mapping at address 0000000000a99000/0xa99000" while booting.
>>
>> A kernel with the upstream patch is avaliable in ppa:manjo/lp1769696,
>> the patch was cleanly cherry-picked from linux-next on to bionic and
>> also cleanly applies to Artful. I tested the kernel on a QTI QDF2400 and
>> Cavium ThunderX system. Since we cannot reliably and consistently
>> reproduce the warning, I did not see the warning after doing repeated
>> reboots with stock bionic.
>
> Which patch?
> There's no patch attached to this message, resend following the SRU guideline:

Here it is on the mailing list... did it not show up in your inbox?

[SRU][Bionic/Artful] fix false positives in W+X checking   Manoj Iyer
[PATCH] init: fix false positives in W+X checking   Manoj Iyer
[SRU][Bionic/Artful] fix false positives in W+X checking   Paolo Pisati

>
> https://wiki.ubuntu.com/Kernel/Dev/StablePatchFormat
>
> --
> bye,
> p.
>
>

--
============================
Manoj Iyer
Ubuntu/Canonical
ARM Servers - Cloud
============================


ACK/cmnt: [PATCH] init: fix false positives in W+X checking

Joseph Salisbury
In reply to this post by Manoj Iyer
On 05/08/2018 12:24 PM, Manoj Iyer wrote:

> From: Jeffrey Hugo <[hidden email]>
>
> load_module() creates W+X mappings via __vmalloc_node_range() (from
> layout_and_allocate()->move_module()->module_alloc()) by using
> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>
> This is a problem because call_rcu_sched() queues work, which can be run
> after debug_checkwx() is run, resulting in a race condition.  If hit, the
> race results in a nasty splat about insecure W+X mappings, which results
> in a poor user experience as these are not the mappings that
> debug_checkwx() is intended to catch.
>
> This issue is observed on multiple arm64 platforms, and has been
> artificially triggered on an x86 platform.
>
> Address the race by flushing the queued work before running the
> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>
> BugLink: https://launchpad.net/bugs/1769696
>
> Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
> Signed-off-by: Jeffrey Hugo <[hidden email]>
> Reported-by: Timur Tabi <[hidden email]>
> Reported-by: Jan Glauber <[hidden email]>
> Acked-by: Kees Cook <[hidden email]>
> Acked-by: Ingo Molnar <[hidden email]>
> Acked-by: Will Deacon <[hidden email]>
> Acked-by: Laura Abbott <[hidden email]>
> Cc: Mark Rutland <[hidden email]>
> Cc: Ard Biesheuvel <[hidden email]>
> Cc: Catalin Marinas <[hidden email]>
> Cc: Stephen Smalley <[hidden email]>
> Cc: Thomas Gleixner <[hidden email]>
> Cc: Peter Zijlstra <[hidden email]>
> Signed-off-by: Andrew Morton <[hidden email]>
> Signed-off-by: Stephen Rothwell <[hidden email]>
> (cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
> linux-next)
> Signed-off-by: Manoj Iyer <[hidden email]>
> ---
>  init/main.c     | 7 +++++++
>  kernel/module.c | 5 +++++
>  2 files changed, 12 insertions(+)
>
> diff --git a/init/main.c b/init/main.c
> index b8b121c17ff1..44f88af9b191 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
>  static void mark_readonly(void)
>  {
>   if (rodata_enabled) {
> + /*
> + * load_module() results in W+X mappings, which are cleaned up
> + * with call_rcu_sched().  Let's make sure that queued work is
> + * flushed so that we don't hit false positives looking for
> + * insecure pages which are W+X.
> + */
> + rcu_barrier_sched();
>   mark_rodata_ro();
>   rodata_test();
>   } else
> diff --git a/kernel/module.c b/kernel/module.c
> index 2612f760df84..0da7f3468350 100644
> --- a/kernel/module.c
> +++ b/kernel/module.c
> @@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module *mod)
>   * walking this with preempt disabled.  In all the failure paths, we
>   * call synchronize_sched(), but we don't want to slow down the success
>   * path, so use actual RCU here.
> + * Note that module_alloc() on most architectures creates W+X page
> + * mappings which won't be cleaned up until do_free_init() runs.  Any
> + * code such as mark_rodata_ro() which depends on those mappings to
> + * be cleaned up needs to sync with the queued work - ie
> + * rcu_barrier_sched()
>   */
>   call_rcu_sched(&freeinit->rcu, do_free_init);
>   mutex_unlock(&module_mutex);
Hi Manoj,

The patch says that it fixes e1a58320a38d.  This commit is in mainline
as of v4.4-rc1.  However, this SRU request is only for Artful and
Bionic.  You may also want to investigate to see if it's needed in
Xenial.  If it is, the patch you submitted does not apply to Xenial and
you would need to submit a separate patch/SRU request that is specific
to Xenial.

For A and B, this patch applies and builds cleanly.  It fixes a specific
bug, so:

Acked-by: Joseph Salisbury <[hidden email]>




Re: ACK/cmnt: [PATCH] init: fix false positives in W+X checking

Manoj Iyer
On Wed, 9 May 2018, Joseph Salisbury wrote:

> On 05/08/2018 12:24 PM, Manoj Iyer wrote:
>> From: Jeffrey Hugo <[hidden email]>
>>
>> load_module() creates W+X mappings via __vmalloc_node_range() (from
>> layout_and_allocate()->move_module()->module_alloc()) by using
>> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
>> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>>
>> This is a problem because call_rcu_sched() queues work, which can be run
>> after debug_checkwx() is run, resulting in a race condition.  If hit, the
>> race results in a nasty splat about insecure W+X mappings, which results
>> in a poor user experience as these are not the mappings that
>> debug_checkwx() is intended to catch.
>>
>> This issue is observed on multiple arm64 platforms, and has been
>> artificially triggered on an x86 platform.
>>
>> Address the race by flushing the queued work before running the
>> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>>
>> BugLink: https://launchpad.net/bugs/1769696
>>
>> Link: http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
>> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
>> Signed-off-by: Jeffrey Hugo <[hidden email]>
>> Reported-by: Timur Tabi <[hidden email]>
>> Reported-by: Jan Glauber <[hidden email]>
>> Acked-by: Kees Cook <[hidden email]>
>> Acked-by: Ingo Molnar <[hidden email]>
>> Acked-by: Will Deacon <[hidden email]>
>> Acked-by: Laura Abbott <[hidden email]>
>> Cc: Mark Rutland <[hidden email]>
>> Cc: Ard Biesheuvel <[hidden email]>
>> Cc: Catalin Marinas <[hidden email]>
>> Cc: Stephen Smalley <[hidden email]>
>> Cc: Thomas Gleixner <[hidden email]>
>> Cc: Peter Zijlstra <[hidden email]>
>> Signed-off-by: Andrew Morton <[hidden email]>
>> Signed-off-by: Stephen Rothwell <[hidden email]>
>> (cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
>> linux-next)
>> Signed-off-by: Manoj Iyer <[hidden email]>
>> ---
>>  init/main.c     | 7 +++++++
>>  kernel/module.c | 5 +++++
>>  2 files changed, 12 insertions(+)
>>
>> diff --git a/init/main.c b/init/main.c
>> index b8b121c17ff1..44f88af9b191 100644
>> --- a/init/main.c
>> +++ b/init/main.c
>> @@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
>>  static void mark_readonly(void)
>>  {
>>   if (rodata_enabled) {
>> + /*
>> + * load_module() results in W+X mappings, which are cleaned up
>> + * with call_rcu_sched().  Let's make sure that queued work is
>> + * flushed so that we don't hit false positives looking for
>> + * insecure pages which are W+X.
>> + */
>> + rcu_barrier_sched();
>>   mark_rodata_ro();
>>   rodata_test();
>>   } else
>> diff --git a/kernel/module.c b/kernel/module.c
>> index 2612f760df84..0da7f3468350 100644
>> --- a/kernel/module.c
>> +++ b/kernel/module.c
>> @@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module *mod)
>>   * walking this with preempt disabled.  In all the failure paths, we
>>   * call synchronize_sched(), but we don't want to slow down the success
>>   * path, so use actual RCU here.
>> + * Note that module_alloc() on most architectures creates W+X page
>> + * mappings which won't be cleaned up until do_free_init() runs.  Any
>> + * code such as mark_rodata_ro() which depends on those mappings to
>> + * be cleaned up needs to sync with the queued work - ie
>> + * rcu_barrier_sched()
>>   */
>>   call_rcu_sched(&freeinit->rcu, do_free_init);
>>   mutex_unlock(&module_mutex);
> Hi Manoj,
>
> The patch says that it fixes e1a58320a38d.  This commit is in mainline
> as of v4.4-rc1.  However, this SRU request is only for Artful and
> Bionic.  You may also want to investigate to see if it's needed in
> Xenial.  If it is, the patch you submitted does not apply to Xenial and
> you would need to submit a separate patch/SRU request that is specific
> to Xenial.
That is correct. At this time we are only interested in fixing this in
Artful (Xenial linux-hwe) and Bionic, and in applying it to Cosmic (if
applicable). The platforms we are interested in were certified with Xenial
and linux-hwe.

>
> For A and B, this patch applies and builds cleanly.  It fixes a specific
> bug, so:
>
> Acked-by: Joseph Salisbury <[hidden email]>
>
>
>

--
============================
Manoj Iyer
Ubuntu/Canonical
ARM Servers - Cloud
============================

Re: ACK/cmnt: [PATCH] init: fix false positives in W+X checking

dann frazier
On Wed, May 9, 2018 at 2:32 PM, Manoj Iyer <[hidden email]> wrote:

> On Wed, 9 May 2018, Joseph Salisbury wrote:
>
>> On 05/08/2018 12:24 PM, Manoj Iyer wrote:
>>>
>>> From: Jeffrey Hugo <[hidden email]>
>>>
>>> load_module() creates W+X mappings via __vmalloc_node_range() (from
>>> layout_and_allocate()->move_module()->module_alloc()) by using
>>> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
>>> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>>>
>>> This is a problem because call_rcu_sched() queues work, which can be run
>>> after debug_checkwx() is run, resulting in a race condition.  If hit, the
>>> race results in a nasty splat about insecure W+X mappings, which results
>>> in a poor user experience as these are not the mappings that
>>> debug_checkwx() is intended to catch.
>>>
>>> This issue is observed on multiple arm64 platforms, and has been
>>> artificially triggered on an x86 platform.
>>>
>>> Address the race by flushing the queued work before running the
>>> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>>>
>>> BugLink: https://launchpad.net/bugs/1769696
>>>
>>> Link:
>>> http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
>>> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
>>> Signed-off-by: Jeffrey Hugo <[hidden email]>
>>> Reported-by: Timur Tabi <[hidden email]>
>>> Reported-by: Jan Glauber <[hidden email]>
>>> Acked-by: Kees Cook <[hidden email]>
>>> Acked-by: Ingo Molnar <[hidden email]>
>>> Acked-by: Will Deacon <[hidden email]>
>>> Acked-by: Laura Abbott <[hidden email]>
>>> Cc: Mark Rutland <[hidden email]>
>>> Cc: Ard Biesheuvel <[hidden email]>
>>> Cc: Catalin Marinas <[hidden email]>
>>> Cc: Stephen Smalley <[hidden email]>
>>> Cc: Thomas Gleixner <[hidden email]>
>>> Cc: Peter Zijlstra <[hidden email]>
>>> Signed-off-by: Andrew Morton <[hidden email]>
>>> Signed-off-by: Stephen Rothwell <[hidden email]>
>>> (cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
>>> linux-next)
>>> Signed-off-by: Manoj Iyer <[hidden email]>
>>> ---
>>>  init/main.c     | 7 +++++++
>>>  kernel/module.c | 5 +++++
>>>  2 files changed, 12 insertions(+)
>>>
>>> diff --git a/init/main.c b/init/main.c
>>> index b8b121c17ff1..44f88af9b191 100644
>>> --- a/init/main.c
>>> +++ b/init/main.c
>>> @@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
>>>  static void mark_readonly(void)
>>>  {
>>>         if (rodata_enabled) {
>>> +               /*
>>> +                * load_module() results in W+X mappings, which are
>>> cleaned up
>>> +                * with call_rcu_sched().  Let's make sure that queued
>>> work is
>>> +                * flushed so that we don't hit false positives looking
>>> for
>>> +                * insecure pages which are W+X.
>>> +                */
>>> +               rcu_barrier_sched();
>>>                 mark_rodata_ro();
>>>                 rodata_test();
>>>         } else
>>> diff --git a/kernel/module.c b/kernel/module.c
>>> index 2612f760df84..0da7f3468350 100644
>>> --- a/kernel/module.c
>>> +++ b/kernel/module.c
>>> @@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module
>>> *mod)
>>>          * walking this with preempt disabled.  In all the failure paths,
>>> we
>>>          * call synchronize_sched(), but we don't want to slow down the
>>> success
>>>          * path, so use actual RCU here.
>>> +        * Note that module_alloc() on most architectures creates W+X
>>> page
>>> +        * mappings which won't be cleaned up until do_free_init() runs.
>>> Any
>>> +        * code such as mark_rodata_ro() which depends on those mappings
>>> to
>>> +        * be cleaned up needs to sync with the queued work - ie
>>> +        * rcu_barrier_sched()
>>>          */
>>>         call_rcu_sched(&freeinit->rcu, do_free_init);
>>>         mutex_unlock(&module_mutex);
>>
>> Hi Manoj,
>>
>> The patch says that it fixes e1a58320a38d.  This commit is in mainline
>> as of v4.4-rc1.  However, this SRU request is only for Artful and
>> Bionic.  You may also want to investigate to see if it's needed in
>> Xenial.  If it is, the patch you submitted does not apply to Xenial and
>> you would need to submit a separate patch/SRU request that is specific
>> to Xenial.
>
>
> That is correct. At this time we are only interested in fixing it in Artful
> (Xenial linux-hwe) and Bionic, and apply to Cosmic (if applicable). The
> platforms we are interested in were certified with Xenial and linux-hwe.

Manoj,

ThunderX CRBs were certified with xenial GA - we need to fix 4.4 too,
if applicable.
(Good catch Joseph).

  -dann

>>
>> For A and B, this patch applies and builds cleanly.  It fixes a specific
>> bug, so:
>>
>> Acked-by: Joseph Salisbury <[hidden email]>
>>
>>
>>
>
> --
> ============================
> Manoj Iyer
> Ubuntu/Canonical
> ARM Servers - Cloud
> ============================
> --
> kernel-team mailing list
> [hidden email]
> https://lists.ubuntu.com/mailman/listinfo/kernel-team
>


Re: ACK/cmnt: [PATCH] init: fix false positives in W+X checking

Manoj Iyer
On Thu, 10 May 2018, dann frazier wrote:

> On Wed, May 9, 2018 at 2:32 PM, Manoj Iyer <[hidden email]> wrote:
>> On Wed, 9 May 2018, Joseph Salisbury wrote:
>>
>>> On 05/08/2018 12:24 PM, Manoj Iyer wrote:
>>>>
>>>> From: Jeffrey Hugo <[hidden email]>
>>>>
>>>> load_module() creates W+X mappings via __vmalloc_node_range() (from
>>>> layout_and_allocate()->move_module()->module_alloc()) by using
>>>> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
>>>> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>>>>
>>>> This is a problem because call_rcu_sched() queues work, which can be run
>>>> after debug_checkwx() is run, resulting in a race condition.  If hit, the
>>>> race results in a nasty splat about insecure W+X mappings, which results
>>>> in a poor user experience as these are not the mappings that
>>>> debug_checkwx() is intended to catch.
>>>>
>>>> This issue is observed on multiple arm64 platforms, and has been
>>>> artificially triggered on an x86 platform.
>>>>
>>>> Address the race by flushing the queued work before running the
>>>> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>>>>
>>>> BugLink: https://launchpad.net/bugs/1769696
>>>>
>>>> Link:
>>>> http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
>>>> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
>>>> Signed-off-by: Jeffrey Hugo <[hidden email]>
>>>> Reported-by: Timur Tabi <[hidden email]>
>>>> Reported-by: Jan Glauber <[hidden email]>
>>>> Acked-by: Kees Cook <[hidden email]>
>>>> Acked-by: Ingo Molnar <[hidden email]>
>>>> Acked-by: Will Deacon <[hidden email]>
>>>> Acked-by: Laura Abbott <[hidden email]>
>>>> Cc: Mark Rutland <[hidden email]>
>>>> Cc: Ard Biesheuvel <[hidden email]>
>>>> Cc: Catalin Marinas <[hidden email]>
>>>> Cc: Stephen Smalley <[hidden email]>
>>>> Cc: Thomas Gleixner <[hidden email]>
>>>> Cc: Peter Zijlstra <[hidden email]>
>>>> Signed-off-by: Andrew Morton <[hidden email]>
>>>> Signed-off-by: Stephen Rothwell <[hidden email]>
>>>> (cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
>>>> linux-next)
>>>> Signed-off-by: Manoj Iyer <[hidden email]>
>>>> ---
>>>>  init/main.c     | 7 +++++++
>>>>  kernel/module.c | 5 +++++
>>>>  2 files changed, 12 insertions(+)
>>>>
>>>> diff --git a/init/main.c b/init/main.c
>>>> index b8b121c17ff1..44f88af9b191 100644
>>>> --- a/init/main.c
>>>> +++ b/init/main.c
>>>> @@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
>>>>  static void mark_readonly(void)
>>>>  {
>>>>         if (rodata_enabled) {
>>>> +               /*
>>>> +                * load_module() results in W+X mappings, which are
>>>> cleaned up
>>>> +                * with call_rcu_sched().  Let's make sure that queued
>>>> work is
>>>> +                * flushed so that we don't hit false positives looking
>>>> for
>>>> +                * insecure pages which are W+X.
>>>> +                */
>>>> +               rcu_barrier_sched();
>>>>                 mark_rodata_ro();
>>>>                 rodata_test();
>>>>         } else
>>>> diff --git a/kernel/module.c b/kernel/module.c
>>>> index 2612f760df84..0da7f3468350 100644
>>>> --- a/kernel/module.c
>>>> +++ b/kernel/module.c
>>>> @@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module
>>>> *mod)
>>>>          * walking this with preempt disabled.  In all the failure paths,
>>>> we
>>>>          * call synchronize_sched(), but we don't want to slow down the
>>>> success
>>>>          * path, so use actual RCU here.
>>>> +        * Note that module_alloc() on most architectures creates W+X
>>>> page
>>>> +        * mappings which won't be cleaned up until do_free_init() runs.
>>>> Any
>>>> +        * code such as mark_rodata_ro() which depends on those mappings
>>>> to
>>>> +        * be cleaned up needs to sync with the queued work - ie
>>>> +        * rcu_barrier_sched()
>>>>          */
>>>>         call_rcu_sched(&freeinit->rcu, do_free_init);
>>>>         mutex_unlock(&module_mutex);
>>>
>>> Hi Manoj,
>>>
>>> The patch says that it fixes e1a58320a38d.  This commit is in mainline
>>> as of v4.4-rc1.  However, this SRU request is only for Artful and
>>> Bionic.  You may also want to investigate to see if it's needed in
>>> Xenial.  If it is, the patch you submitted does not apply to Xenial and
>>> you would need to submit a separate patch/SRU request that is specific
>>> to Xenial.
>>
>>
>> That is correct. At this time we are only interested in fixing it in Artful
>> (Xenial linux-hwe) and Bionic, and apply to Cosmic (if applicable). The
>> platforms we are interested in were certified with Xenial and linux-hwe.
>
> Manoj,
>
> ThunderX CRBs were certified with xenial GA - we need to fix 4.4 too,
> if applicable.
> (Good catch Joseph).
>

Ouch. Joe, I will send you a backport to Xenial soon.

>  -dann
>
>>>
>>> For A and B, this patch applies and builds cleanly.  It fixes a specific
>>> bug, so:
>>>
>>> Acked-by: Joseph Salisbury <[hidden email]>
>>>
>>>
>>>
>>
>> --
>> ============================
>> Manoj Iyer
>> Ubuntu/Canonical
>> ARM Servers - Cloud
>> ============================
>> --
>> kernel-team mailing list
>> [hidden email]
>> https://lists.ubuntu.com/mailman/listinfo/kernel-team
>>
>
>

--
============================
Manoj Iyer
Ubuntu/Canonical
ARM Servers - Cloud
============================


Re: ACK/cmnt: [PATCH] init: fix false positives in W+X checking

dann frazier
On Fri, May 11, 2018 at 9:08 AM, Manoj Iyer <[hidden email]> wrote:

> On Thu, 10 May 2018, dann frazier wrote:
>
>> On Wed, May 9, 2018 at 2:32 PM, Manoj Iyer <[hidden email]>
>> wrote:
>>>
>>> On Wed, 9 May 2018, Joseph Salisbury wrote:
>>>
>>>> On 05/08/2018 12:24 PM, Manoj Iyer wrote:
>>>>>
>>>>>
>>>>> From: Jeffrey Hugo <[hidden email]>
>>>>>
>>>>> load_module() creates W+X mappings via __vmalloc_node_range() (from
>>>>> layout_and_allocate()->move_module()->module_alloc()) by using
>>>>> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
>>>>> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>>>>>
>>>>> This is a problem because call_rcu_sched() queues work, which can be
>>>>> run
>>>>> after debug_checkwx() is run, resulting in a race condition.  If hit,
>>>>> the
>>>>> race results in a nasty splat about insecure W+X mappings, which
>>>>> results
>>>>> in a poor user experience as these are not the mappings that
>>>>> debug_checkwx() is intended to catch.
>>>>>
>>>>> This issue is observed on multiple arm64 platforms, and has been
>>>>> artificially triggered on an x86 platform.
>>>>>
>>>>> Address the race by flushing the queued work before running the
>>>>> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>>>>>
>>>>> BugLink: https://launchpad.net/bugs/1769696
>>>>>
>>>>> Link:
>>>>>
>>>>> http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
>>>>> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
>>>>> Signed-off-by: Jeffrey Hugo <[hidden email]>
>>>>> Reported-by: Timur Tabi <[hidden email]>
>>>>> Reported-by: Jan Glauber <[hidden email]>
>>>>> Acked-by: Kees Cook <[hidden email]>
>>>>> Acked-by: Ingo Molnar <[hidden email]>
>>>>> Acked-by: Will Deacon <[hidden email]>
>>>>> Acked-by: Laura Abbott <[hidden email]>
>>>>> Cc: Mark Rutland <[hidden email]>
>>>>> Cc: Ard Biesheuvel <[hidden email]>
>>>>> Cc: Catalin Marinas <[hidden email]>
>>>>> Cc: Stephen Smalley <[hidden email]>
>>>>> Cc: Thomas Gleixner <[hidden email]>
>>>>> Cc: Peter Zijlstra <[hidden email]>
>>>>> Signed-off-by: Andrew Morton <[hidden email]>
>>>>> Signed-off-by: Stephen Rothwell <[hidden email]>
>>>>> (cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
>>>>> linux-next)
>>>>> Signed-off-by: Manoj Iyer <[hidden email]>
>>>>> ---
>>>>>  init/main.c     | 7 +++++++
>>>>>  kernel/module.c | 5 +++++
>>>>>  2 files changed, 12 insertions(+)
>>>>>
>>>>> diff --git a/init/main.c b/init/main.c
>>>>> index b8b121c17ff1..44f88af9b191 100644
>>>>> --- a/init/main.c
>>>>> +++ b/init/main.c
>>>>> @@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
>>>>>  static void mark_readonly(void)
>>>>>  {
>>>>>         if (rodata_enabled) {
>>>>> +               /*
>>>>> +                * load_module() results in W+X mappings, which are
>>>>> cleaned up
>>>>> +                * with call_rcu_sched().  Let's make sure that queued
>>>>> work is
>>>>> +                * flushed so that we don't hit false positives looking
>>>>> for
>>>>> +                * insecure pages which are W+X.
>>>>> +                */
>>>>> +               rcu_barrier_sched();
>>>>>                 mark_rodata_ro();
>>>>>                 rodata_test();
>>>>>         } else
>>>>> diff --git a/kernel/module.c b/kernel/module.c
>>>>> index 2612f760df84..0da7f3468350 100644
>>>>> --- a/kernel/module.c
>>>>> +++ b/kernel/module.c
>>>>> @@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module
>>>>> *mod)
>>>>>          * walking this with preempt disabled.  In all the failure
>>>>> paths,
>>>>> we
>>>>>          * call synchronize_sched(), but we don't want to slow down the
>>>>> success
>>>>>          * path, so use actual RCU here.
>>>>> +        * Note that module_alloc() on most architectures creates W+X
>>>>> page
>>>>> +        * mappings which won't be cleaned up until do_free_init()
>>>>> runs.
>>>>> Any
>>>>> +        * code such as mark_rodata_ro() which depends on those
>>>>> mappings
>>>>> to
>>>>> +        * be cleaned up needs to sync with the queued work - ie
>>>>> +        * rcu_barrier_sched()
>>>>>          */
>>>>>         call_rcu_sched(&freeinit->rcu, do_free_init);
>>>>>         mutex_unlock(&module_mutex);
>>>>
>>>>
>>>> Hi Manoj,
>>>>
>>>> The patch says that it fixes e1a58320a38d.  This commit is in mainline
>>>> as of v4.4-rc1.  However, this SRU request is only for Artful and
>>>> Bionic.  You may also want to investigate to see if it's needed in
>>>> Xenial.  If it is, the patch you submitted does not apply to Xenial and
>>>> you would need to submit a separate patch/SRU request that is specific
>>>> to Xenial.
>>>
>>>
>>>
>>> That is correct. At this time we are only interested in fixing it in
>>> Artful
>>> (Xenial linux-hwe) and Bionic, and apply to Cosmic (if applicable). The
>>> platforms we are interested in were certified with Xenial and linux-hwe.
>>
>>
>> Manoj,
>>
>> ThunderX CRBs were certified with xenial GA - we need to fix 4.4 too,
>> if applicable.
>> (Good catch Joseph).
>>
>
> ouch.. Joe, will send you a backport to xenial soon.

btw, this is now in Linus' tree:

commit ae646f0b9ca135b87bc73ff606ef996c3029780a
Author: Jeffrey Hugo <[hidden email]>
Date:   Fri May 11 16:01:42 2018 -0700

    init: fix false positives in W+X checking

I suggest cherry-picking that one instead for the various releases so
that we're referencing the upstream commit hash.

  -dann

>
>>  -dann
>>
>>>>
>>>> For A and B, this patch applies and builds cleanly.  It fixes a specific
>>>> bug, so:
>>>>
>>>> Acked-by: Joseph Salisbury <[hidden email]>
>>>>
>>>>
>>>>
>>>
>>> --
>>> ============================
>>> Manoj Iyer
>>> Ubuntu/Canonical
>>> ARM Servers - Cloud
>>> ============================
>>> --
>>> kernel-team mailing list
>>> [hidden email]
>>> https://lists.ubuntu.com/mailman/listinfo/kernel-team
>>>
>>
>>
>
> --
> ============================
> Manoj Iyer
> Ubuntu/Canonical
> ARM Servers - Cloud
> ============================


Re: ACK/cmnt: [PATCH] init: fix false positives in W+X checking

Manoj Iyer
On Tue, 15 May 2018, dann frazier wrote:

> On Fri, May 11, 2018 at 9:08 AM, Manoj Iyer <[hidden email]> wrote:
>> On Thu, 10 May 2018, dann frazier wrote:
>>
>>> On Wed, May 9, 2018 at 2:32 PM, Manoj Iyer <[hidden email]>
>>> wrote:
>>>>
>>>> On Wed, 9 May 2018, Joseph Salisbury wrote:
>>>>
>>>>> On 05/08/2018 12:24 PM, Manoj Iyer wrote:
>>>>>>
>>>>>>
>>>>>> From: Jeffrey Hugo <[hidden email]>
>>>>>>
>>>>>> load_module() creates W+X mappings via __vmalloc_node_range() (from
>>>>>> layout_and_allocate()->move_module()->module_alloc()) by using
>>>>>> PAGE_KERNEL_EXEC.  These mappings are later cleaned up via
>>>>>> "call_rcu_sched(&freeinit->rcu, do_free_init)" from do_init_module().
>>>>>>
>>>>>> This is a problem because call_rcu_sched() queues work, which can be
>>>>>> run
>>>>>> after debug_checkwx() is run, resulting in a race condition.  If hit,
>>>>>> the
>>>>>> race results in a nasty splat about insecure W+X mappings, which
>>>>>> results
>>>>>> in a poor user experience as these are not the mappings that
>>>>>> debug_checkwx() is intended to catch.
>>>>>>
>>>>>> This issue is observed on multiple arm64 platforms, and has been
>>>>>> artificially triggered on an x86 platform.
>>>>>>
>>>>>> Address the race by flushing the queued work before running the
>>>>>> arch-defined mark_rodata_ro() which then calls debug_checkwx().
>>>>>>
>>>>>> BugLink: https://launchpad.net/bugs/1769696
>>>>>>
>>>>>> Link:
>>>>>>
>>>>>> http://lkml.kernel.org/r/1525103946-29526-1-git-send-email-jhugo@...
>>>>>> Fixes: e1a58320a38d ("x86/mm: Warn on W^X mappings")
>>>>>> Signed-off-by: Jeffrey Hugo <[hidden email]>
>>>>>> Reported-by: Timur Tabi <[hidden email]>
>>>>>> Reported-by: Jan Glauber <[hidden email]>
>>>>>> Acked-by: Kees Cook <[hidden email]>
>>>>>> Acked-by: Ingo Molnar <[hidden email]>
>>>>>> Acked-by: Will Deacon <[hidden email]>
>>>>>> Acked-by: Laura Abbott <[hidden email]>
>>>>>> Cc: Mark Rutland <[hidden email]>
>>>>>> Cc: Ard Biesheuvel <[hidden email]>
>>>>>> Cc: Catalin Marinas <[hidden email]>
>>>>>> Cc: Stephen Smalley <[hidden email]>
>>>>>> Cc: Thomas Gleixner <[hidden email]>
>>>>>> Cc: Peter Zijlstra <[hidden email]>
>>>>>> Signed-off-by: Andrew Morton <[hidden email]>
>>>>>> Signed-off-by: Stephen Rothwell <[hidden email]>
>>>>>> (cherry picked from commit 65d313ee1a7d41611b8ee6063db53bc976db5ba2
>>>>>> linux-next)
>>>>>> Signed-off-by: Manoj Iyer <[hidden email]>
>>>>>> ---
>>>>>>  init/main.c     | 7 +++++++
>>>>>>  kernel/module.c | 5 +++++
>>>>>>  2 files changed, 12 insertions(+)
>>>>>>
>>>>>> diff --git a/init/main.c b/init/main.c
>>>>>> index b8b121c17ff1..44f88af9b191 100644
>>>>>> --- a/init/main.c
>>>>>> +++ b/init/main.c
>>>>>> @@ -980,6 +980,13 @@ __setup("rodata=", set_debug_rodata);
>>>>>>  static void mark_readonly(void)
>>>>>>  {
>>>>>>         if (rodata_enabled) {
>>>>>> +               /*
>>>>>> +                * load_module() results in W+X mappings, which are
>>>>>> cleaned up
>>>>>> +                * with call_rcu_sched().  Let's make sure that queued
>>>>>> work is
>>>>>> +                * flushed so that we don't hit false positives looking
>>>>>> for
>>>>>> +                * insecure pages which are W+X.
>>>>>> +                */
>>>>>> +               rcu_barrier_sched();
>>>>>>                 mark_rodata_ro();
>>>>>>                 rodata_test();
>>>>>>         } else
>>>>>> diff --git a/kernel/module.c b/kernel/module.c
>>>>>> index 2612f760df84..0da7f3468350 100644
>>>>>> --- a/kernel/module.c
>>>>>> +++ b/kernel/module.c
>>>>>> @@ -3517,6 +3517,11 @@ static noinline int do_init_module(struct module
>>>>>> *mod)
>>>>>>          * walking this with preempt disabled.  In all the failure
>>>>>> paths,
>>>>>> we
>>>>>>          * call synchronize_sched(), but we don't want to slow down the
>>>>>> success
>>>>>>          * path, so use actual RCU here.
>>>>>> +        * Note that module_alloc() on most architectures creates W+X
>>>>>> page
>>>>>> +        * mappings which won't be cleaned up until do_free_init()
>>>>>> runs.
>>>>>> Any
>>>>>> +        * code such as mark_rodata_ro() which depends on those
>>>>>> mappings
>>>>>> to
>>>>>> +        * be cleaned up needs to sync with the queued work - ie
>>>>>> +        * rcu_barrier_sched()
>>>>>>          */
>>>>>>         call_rcu_sched(&freeinit->rcu, do_free_init);
>>>>>>         mutex_unlock(&module_mutex);
>>>>>
>>>>>
>>>>> Hi Manoj,
>>>>>
>>>>> The patch says that it fixes e1a58320a38d.  This commit is in mainline
>>>>> as of v4.4-rc1.  However, this SRU request is only for Artful and
>>>>> Bionic.  You may also want to investigate to see if it's needed in
>>>>> Xenial.  If it is, the patch you submitted does not apply to Xenial and
>>>>> you would need to submit a separate patch/SRU request that is specific
>>>>> to Xenial.
>>>>
>>>>
>>>>
>>>> That is correct. At this time we are only interested in fixing it in
>>>> Artful
>>>> (Xenial linux-hwe) and Bionic, and apply to Cosmic (if applicable). The
>>>> platforms we are interested in were certified with Xenial and linux-hwe.
>>>
>>>
>>> Manoj,
>>>
>>> ThunderX CRBs were certified with xenial GA - we need to fix 4.4 too,
>>> if applicable.
>>> (Good catch Joseph).
>>>
>>
>> ouch.. Joe, will send you a backport to xenial soon.
>
> btw, this is now in Linus' tree:
>
> commit ae646f0b9ca135b87bc73ff606ef996c3029780a
> Author: Jeffrey Hugo <[hidden email]>
> Date:   Fri May 11 16:01:42 2018 -0700
>
>    init: fix false positives in W+X checking
>
> I suggest cherry-picking that one instead for the various releases so
> that we're referencing the upstream commit hash.
>
>  -dann


I am working on the cherry-picks for Xenial, Artful and Bionic, and will
submit those patches ASAP.

>
>>
>>>  -dann
>>>
>>>>>
>>>>> For A and B, this patch applies and builds cleanly.  It fixes a specific
>>>>> bug, so:
>>>>>
>>>>> Acked-by: Joseph Salisbury <[hidden email]>
>>>>>
>>>>>
>>>>>
>>>>
>>>> --
>>>> ============================
>>>> Manoj Iyer
>>>> Ubuntu/Canonical
>>>> ARM Servers - Cloud
>>>> ============================
>>>> --
>>>> kernel-team mailing list
>>>> [hidden email]
>>>> https://lists.ubuntu.com/mailman/listinfo/kernel-team
>>>>
>>>
>>>
>>
>> --
>> ============================
>> Manoj Iyer
>> Ubuntu/Canonical
>> ARM Servers - Cloud
>> ============================
>
>

--
============================
Manoj Iyer
Ubuntu/Canonical
ARM Servers - Cloud
============================


NACK: [SRU][Bionic/Artful] fix false positives in W+X checking

Kleber Souza
In reply to this post by Manoj Iyer
On 05/08/18 18:24, Manoj Iyer wrote:

> Please consider this patch to Bionic, Artful and apply to Cosmic. On
> ARM64 system from Cavium and Qualcomm we see random false positive
> warning messages wrt W+X checking.  "arm64/mm: Found insecure W+X mapping at address 0000000000a99000/0xa99000" while booting.
>
> A kernel with the upstream patch is avaliable in ppa:manjo/lp1769696,
> the patch was cleanly cherry-picked from linux-next on to bionic and
> also cleanly applies to Artful. I tested the kernel on a QTI QDF2400 and
> Cavium ThunderX system. Since we cannot reliably and consistently
> reproduce the warning, I did not see the warning after doing repeated
> reboots with stock bionic.
>
>
>

A v2 of this patch has been sent.


Thanks,
Kleber
