[PATCH][X/Z/A] CVE fixes for eBPF


[PATCH][X/Z/A] CVE fixes for eBPF

Seth Forshee
The following patches fix problems with eBPF in xenial, zesty, and
artful; the issues have been assigned the following CVE numbers:

 CVE-2017-16995
 CVE-2017-17862
 CVE-2017-17863
 CVE-2017-17864

These are taken from a larger set of vulnerabilities, several of which
were introduced only in 4.14. Only the first two fixes are applicable to
xenial, with some extra backporting needed to make them apply. The
patches for CVE-2017-17863 and CVE-2017-17864 are based on the fixes
from 4.9 stable and Debian respectively, as the code has diverged
significantly upstream.

Thanks,
Seth


[PATCH 1/2][X] bpf: fix incorrect sign extension in check_alu_op()

Seth Forshee
From: Jann Horn <[hidden email]>

[ Upstream commit 95a762e2c8c942780948091f8f2a4f32fce1ac6f ]

Distinguish between
BPF_ALU64|BPF_MOV|BPF_K (load 32-bit immediate, sign-extended to 64-bit)
and BPF_ALU|BPF_MOV|BPF_K (load 32-bit immediate, zero-padded to 64-bit);
only perform sign extension in the first case.
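
As a plain C illustration of the difference (a sketch for this changelog,
not part of the kernel change itself), loading the 32-bit immediate -1
gives different 64-bit results depending on whether it is sign-extended
or zero-padded:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          int32_t imm = -1;                 /* 32-bit immediate 0xffffffff */
          int64_t sext = imm;               /* BPF_ALU64|BPF_MOV|BPF_K: sign-extend */
          uint64_t zext = (uint32_t)imm;    /* BPF_ALU|BPF_MOV|BPF_K: zero-pad */

          printf("sign-extended: %lld\n", (long long)sext);          /* -1 */
          printf("zero-padded:   %llu\n", (unsigned long long)zext); /* 4294967295 */
          return 0;
  }

The verifier has to track the same distinction, otherwise the value it
assumes for the register differs from the one present at run time.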

Starting with v4.14, this is exploitable by unprivileged users as long as
the unprivileged_bpf_disabled sysctl isn't set.

Debian assigned CVE-2017-16995 for this issue.

v3:
 - add CVE number (Ben Hutchings)

Fixes: 484611357c19 ("bpf: allow access into map value arrays")
Signed-off-by: Jann Horn <[hidden email]>
Acked-by: Edward Cree <[hidden email]>
Signed-off-by: Alexei Starovoitov <[hidden email]>
Signed-off-by: Daniel Borkmann <[hidden email]>
CVE-2017-16995
[ saf: Backport to 4.4. Include partial backports of 4923ec0b10d9 ("bpf:
  simplify verifier register state assignments") and 969bf05eb3ce ("bpf:
  direct packet access") to extend reg_state.imm to 64-bit. ]
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index eb759f5008b8..74ff7218caa0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -142,7 +142,7 @@ struct reg_state {
  enum bpf_reg_type type;
  union {
  /* valid when type == CONST_IMM | PTR_TO_STACK */
- int imm;
+ s64 imm;
 
  /* valid when type == CONST_PTR_TO_MAP | PTR_TO_MAP_VALUE |
  *   PTR_TO_MAP_VALUE_OR_NULL
@@ -250,7 +250,7 @@ static void print_verifier_state(struct verifier_env *env)
  continue;
  verbose(" R%d=%s", i, reg_type_str[t]);
  if (t == CONST_IMM || t == PTR_TO_STACK)
- verbose("%d", env->cur_state.regs[i].imm);
+ verbose("%lld", env->cur_state.regs[i].imm);
  else if (t == CONST_PTR_TO_MAP || t == PTR_TO_MAP_VALUE ||
  t == PTR_TO_MAP_VALUE_OR_NULL)
  verbose("(ks=%d,vs=%d)",
@@ -478,7 +478,6 @@ static void init_reg_state(struct reg_state *regs)
  for (i = 0; i < MAX_BPF_REG; i++) {
  regs[i].type = NOT_INIT;
  regs[i].imm = 0;
- regs[i].map_ptr = NULL;
  }
 
  /* frame pointer */
@@ -493,7 +492,6 @@ static void mark_reg_unknown_value(struct reg_state *regs, u32 regno)
  BUG_ON(regno >= MAX_BPF_REG);
  regs[regno].type = UNKNOWN_VALUE;
  regs[regno].imm = 0;
- regs[regno].map_ptr = NULL;
 }
 
 enum reg_arg_type {
@@ -741,6 +739,15 @@ static int check_mem_access(struct verifier_env *env, u32 regno, int off,
  regno, reg_type_str[state->regs[regno].type]);
  return -EACCES;
  }
+
+ if (!err && size <= 2 && value_regno >= 0 && env->allow_ptr_leaks &&
+    state->regs[value_regno].type == UNKNOWN_VALUE) {
+ /* 1 or 2 byte load zero-extends, determine the number of
+ * zero upper bits. Not doing it for 4 byte load, since
+ * such values cannot be added to ptr_to_packet anyway.
+ */
+ state->regs[value_regno].imm = 64 - size * 8;
+ }
  return err;
 }
 
@@ -1110,8 +1117,15 @@ static int check_alu_op(struct verifier_env *env, struct bpf_insn *insn)
  /* case: R = imm
  * remember the value we stored into this reg
  */
+ u64 imm;
+
+ if (BPF_CLASS(insn->code) == BPF_ALU64)
+ imm = insn->imm;
+ else
+ imm = (u32)insn->imm;
+
  regs[insn->dst_reg].type = CONST_IMM;
- regs[insn->dst_reg].imm = insn->imm;
+ regs[insn->dst_reg].imm = imm;
  }
 
  } else if (opcode > BPF_END) {
--
2.14.1



[PATCH 2/2][X] bpf: fix branch pruning logic

Seth Forshee
From: Alexei Starovoitov <[hidden email]>

When the verifier detects that a register contains a runtime constant
and it's compared with another constant, it will prune exploration
of the branch that is guaranteed not to be taken at runtime.
This is all correct, but a malicious program may be constructed
in such a way that it always has a constant comparison and
the other branch is never taken under any conditions.
In this case such a path through the program will not be explored
by the verifier. It won't be taken at run-time either, but since
all instructions are JITed the malicious program may cause JITs
to complain about using reserved fields, etc.
To fix the issue we have to track the instructions explored by
the verifier and sanitize instructions that are dead at run time
with NOPs. We cannot reject such dead code, since llvm generates
it for valid C code; llvm doesn't do as much data flow
analysis as the verifier does.
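
As an illustration of the problem shape (a sketch built with the insn
macros from include/linux/filter.h, not taken from the patch or the
selftests), the second-to-last instruction below is reachable in the
CFG but dead at run time; the verifier never explores it, yet before
this change it is still handed to the JIT, and after this change it is
overwritten with a NOP instead:

  struct bpf_insn prog[] = {
          BPF_MOV64_IMM(BPF_REG_0, 0),
          /* r0 is a known constant, so the verifier only follows the
           * fall-through edge of this comparison
           */
          BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
          BPF_JMP_IMM(BPF_JA, 0, 0, 1),
          /* dead at run time and never verified */
          BPF_RAW_INSN(0xff, 0, 0, 0, 0),
          BPF_EXIT_INSN(),
  };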

Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Alexei Starovoitov <[hidden email]>
Acked-by: Daniel Borkmann <[hidden email]>
Signed-off-by: Daniel Borkmann <[hidden email]>
(backported from commit c131187db2d3fa2f8bf32fdf4e9a4ef805168467)
CVE-2017-17862
[ saf: Add partial backport of 3df126f35f88d ("bpf: don't (ab)use
  instructions to store state") to add bpf_insn_aux_data state to
  verifier_env ]
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 74ff7218caa0..25c39cab78c0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -186,6 +186,10 @@ struct verifier_stack_elem {
  struct verifier_stack_elem *next;
 };
 
+struct bpf_insn_aux_data {
+ bool seen; /* this insn was processed by the verifier */
+};
+
 #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
 
 /* single container for all structs
@@ -200,6 +204,7 @@ struct verifier_env {
  struct bpf_map *used_maps[MAX_USED_MAPS]; /* array of map's used by eBPF program */
  u32 used_map_cnt; /* number of used maps */
  bool allow_ptr_leaks;
+ struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
 };
 
 /* verbose verifier prints what it's seeing
@@ -1792,6 +1797,7 @@ static int do_check(struct verifier_env *env)
  print_bpf_insn(env, insn);
  }
 
+ env->insn_aux_data[insn_idx].seen = true;
  if (class == BPF_ALU || class == BPF_ALU64) {
  err = check_alu_op(env, insn);
  if (err)
@@ -1824,6 +1830,7 @@ static int do_check(struct verifier_env *env)
 
  if (BPF_SIZE(insn->code) != BPF_W) {
  insn_idx++;
+ env->insn_aux_data[insn_idx].seen = true;
  continue;
  }
 
@@ -2132,6 +2139,25 @@ static void adjust_branches(struct bpf_prog *prog, int pos, int delta)
  }
 }
 
+/* The verifier does more data flow analysis than llvm and will not explore
+ * branches that are dead at run time. Malicious programs can have dead code
+ * too. Therefore replace all dead at-run-time code with nops.
+ */
+static void sanitize_dead_code(struct verifier_env *env)
+{
+ struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+ struct bpf_insn nop = BPF_MOV64_REG(BPF_REG_0, BPF_REG_0);
+ struct bpf_insn *insn = env->prog->insnsi;
+ const int insn_cnt = env->prog->len;
+ int i;
+
+ for (i = 0; i < insn_cnt; i++) {
+ if (aux_data[i].seen)
+ continue;
+ memcpy(insn + i, &nop, sizeof(nop));
+ }
+}
+
 /* convert load instructions that access fields of 'struct __sk_buff'
  * into sequence of instructions that access fields of 'struct sk_buff'
  */
@@ -2241,6 +2267,11 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
  if (!env)
  return -ENOMEM;
 
+ env->insn_aux_data = vzalloc(sizeof(struct bpf_insn_aux_data) *
+     (*prog)->len);
+ ret = -ENOMEM;
+ if (!env->insn_aux_data)
+ goto err_free_env;
  env->prog = *prog;
 
  /* grab the mutex to protect few globals used by verifier */
@@ -2259,12 +2290,12 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
  /* log_* values have to be sane */
  if (log_size < 128 || log_size > UINT_MAX >> 8 ||
     log_level == 0 || log_ubuf == NULL)
- goto free_env;
+ goto err_unlock;
 
  ret = -ENOMEM;
  log_buf = vmalloc(log_size);
  if (!log_buf)
- goto free_env;
+ goto err_unlock;
  } else {
  log_level = 0;
  }
@@ -2292,6 +2323,9 @@ skip_full_check:
  while (pop_stack(env, NULL) >= 0);
  free_states(env);
 
+ if (ret == 0)
+ sanitize_dead_code(env);
+
  if (ret == 0)
  /* program is valid, convert *(u32*)(ctx + off) accesses */
  ret = convert_ctx_accesses(env);
@@ -2333,14 +2367,16 @@ skip_full_check:
 free_log_buf:
  if (log_level)
  vfree(log_buf);
-free_env:
  if (!env->prog->aux->used_maps)
  /* if we didn't copy map pointers into bpf_prog_info, release
  * them now. Otherwise free_bpf_prog_info() will release them.
  */
  release_maps(env);
  *prog = env->prog;
- kfree(env);
+err_unlock:
  mutex_unlock(&bpf_verifier_lock);
+ vfree(env->insn_aux_data);
+err_free_env:
+ kfree(env);
  return ret;
 }
--
2.14.1



[PATCH 1/4][Z] UBUNTU: SAUCE: bpf: reject out-of-bounds stack pointer calculation

Seth Forshee
From: Jann Horn <[hidden email]>

Reject programs that compute wildly out-of-bounds stack pointers.
Otherwise, pointers can be computed with an offset that doesn't fit into an
`int`, causing security issues in the stack memory access check (as well as
signed integer overflow during offset addition).
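
As a rough sketch of the rejected pattern (illustrative only, written
with the insn macros from include/linux/filter.h), a program can copy
the frame pointer and then push its tracked offset far outside the
512-byte stack; without this check the resulting 64-bit offset no
longer fits in an int by the time the stack access is validated, and
with it the first addition already fails with -EACCES:

  BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),           /* r1 = frame pointer */
  BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),  /* offset leaves the stack range */
  BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0x7fffffff),  /* and no longer fits in an int */
  BPF_ST_MEM(BPF_DW, BPF_REG_1, -8, 0),           /* store checked against a bogus offset */
  BPF_MOV64_IMM(BPF_REG_0, 0),
  BPF_EXIT_INSN(),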

This is a fix specifically for the v4.9 stable tree because the mainline
code looks very different at this point.

Fixes: 7bca0a9702edf ("bpf: enhance verifier to understand stack pointer arithmetic")
Signed-off-by: Jann Horn <[hidden email]>
Acked-by: Daniel Borkmann <[hidden email]>
CVE-2017-17863
Link: https://www.spinics.net/lists/stable/msg206985.html
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b54585d67c0c..b741b616e935 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1794,10 +1794,28 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
    ((BPF_SRC(insn->code) == BPF_X &&
      regs[insn->src_reg].type == CONST_IMM) ||
     BPF_SRC(insn->code) == BPF_K)) {
- if (BPF_SRC(insn->code) == BPF_X)
+ if (BPF_SRC(insn->code) == BPF_X) {
+ /* check in case the register contains a big
+ * 64-bit value
+ */
+ if (regs[insn->src_reg].imm < -MAX_BPF_STACK ||
+    regs[insn->src_reg].imm > MAX_BPF_STACK) {
+ verbose("R%d value too big in R%d pointer arithmetic\n",
+ insn->src_reg, insn->dst_reg);
+ return -EACCES;
+ }
  dst_reg->imm += regs[insn->src_reg].imm;
- else
+ } else {
+ /* safe against overflow: addition of 32-bit
+ * numbers in 64-bit representation
+ */
  dst_reg->imm += insn->imm;
+ }
+ if (dst_reg->imm > 0 || dst_reg->imm < -MAX_BPF_STACK) {
+ verbose("R%d out-of-bounds pointer arithmetic\n",
+ insn->dst_reg);
+ return -EACCES;
+ }
  return 0;
  } else if (opcode == BPF_ADD &&
    BPF_CLASS(insn->code) == BPF_ALU64 &&
--
2.14.1



[PATCH 2/4][Z] bpf: fix incorrect sign extension in check_alu_op()

Seth Forshee
From: Jann Horn <[hidden email]>

[ Upstream commit 95a762e2c8c942780948091f8f2a4f32fce1ac6f ]

Distinguish between
BPF_ALU64|BPF_MOV|BPF_K (load 32-bit immediate, sign-extended to 64-bit)
and BPF_ALU|BPF_MOV|BPF_K (load 32-bit immediate, zero-padded to 64-bit);
only perform sign extension in the first case.

Starting with v4.14, this is exploitable by unprivileged users as long as
the unprivileged_bpf_disabled sysctl isn't set.

Debian assigned CVE-2017-16995 for this issue.

v3:
 - add CVE number (Ben Hutchings)

Fixes: 484611357c19 ("bpf: allow access into map value arrays")
Signed-off-by: Jann Horn <[hidden email]>
Acked-by: Edward Cree <[hidden email]>
Signed-off-by: Alexei Starovoitov <[hidden email]>
Signed-off-by: Daniel Borkmann <[hidden email]>
CVE-2017-16995
[ saf: Backport to 4.10 ]
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b741b616e935..edc885b3c157 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1723,10 +1723,17 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
  /* case: R = imm
  * remember the value we stored into this reg
  */
+ u64 imm;
+
+ if (BPF_CLASS(insn->code) == BPF_ALU64)
+ imm = insn->imm;
+ else
+ imm = (u32)insn->imm;
+
  regs[insn->dst_reg].type = CONST_IMM;
- regs[insn->dst_reg].imm = insn->imm;
- regs[insn->dst_reg].max_value = insn->imm;
- regs[insn->dst_reg].min_value = insn->imm;
+ regs[insn->dst_reg].imm = imm;
+ regs[insn->dst_reg].max_value = imm;
+ regs[insn->dst_reg].min_value = imm;
  }
 
  } else if (opcode > BPF_END) {
--
2.14.1



[PATCH 3/4][Z] UBUNTU: SAUCE: bpf/verifier: Fix states_equal() comparison of pointer and UNKNOWN

Seth Forshee
From: Ben Hutchings <[hidden email]>

An UNKNOWN_VALUE is not supposed to be derived from a pointer, unless
pointer leaks are allowed.  Therefore, states_equal() must not treat
a state with a pointer in a register as "equal" to a state with an
UNKNOWN_VALUE in that register.

This was fixed differently upstream, but the code around here was
largely rewritten in 4.14 by commit f1174f77b50c "bpf/verifier: rework
value tracking".  The bug can be detected by the bpf/verifier sub-test
"pointer/scalar confusion in state equality check (way 1)".

Signed-off-by: Ben Hutchings <[hidden email]>
Cc: Edward Cree <[hidden email]>
Cc: Jann Horn <[hidden email]>
Cc: Alexei Starovoitov <[hidden email]>
CVE-2017-17864
Link: https://anonscm.debian.org/cgit/kernel/linux.git/tree/debian/patches/bugfix/all/bpf-verifier-fix-states_equal-comparison-of-pointer-and-unknown.patch?h=stretch-security
[ saf: port to zesty ]
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index edc885b3c157..fed5b346cd18 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2591,11 +2591,12 @@ static bool states_equal(struct bpf_verifier_env *env,
 
  /* If we didn't map access then again we don't care about the
  * mismatched range values and it's ok if our old type was
- * UNKNOWN and we didn't go to a NOT_INIT'ed reg.
+ * UNKNOWN and we didn't go to a NOT_INIT'ed or pointer reg.
  */
  if (rold->type == NOT_INIT ||
     (!varlen_map_access && rold->type == UNKNOWN_VALUE &&
-     rcur->type != NOT_INIT))
+     rcur->type != NOT_INIT &&
+     !is_pointer_value(env, i)))
  continue;
 
  if (rold->type == PTR_TO_PACKET && rcur->type == PTR_TO_PACKET &&
--
2.14.1



[PATCH 4/4][Z] bpf: fix branch pruning logic

Seth Forshee
From: Alexei Starovoitov <[hidden email]>

When the verifier detects that a register contains a runtime constant
and it's compared with another constant, it will prune exploration
of the branch that is guaranteed not to be taken at runtime.
This is all correct, but a malicious program may be constructed
in such a way that it always has a constant comparison and
the other branch is never taken under any conditions.
In this case such a path through the program will not be explored
by the verifier. It won't be taken at run-time either, but since
all instructions are JITed the malicious program may cause JITs
to complain about using reserved fields, etc.
To fix the issue we have to track the instructions explored by
the verifier and sanitize instructions that are dead at run time
with NOPs. We cannot reject such dead code, since llvm generates
it for valid C code; llvm doesn't do as much data flow
analysis as the verifier does.

Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Alexei Starovoitov <[hidden email]>
Acked-by: Daniel Borkmann <[hidden email]>
Signed-off-by: Daniel Borkmann <[hidden email]>
(backported from commit c131187db2d3fa2f8bf32fdf4e9a4ef805168467)
CVE-2017-17862
Signed-off-by: Seth Forshee <[hidden email]>
---
 include/linux/bpf_verifier.h |  1 +
 kernel/bpf/verifier.c        | 24 ++++++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index a13b031dc6b8..aeca014c52fe 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -67,6 +67,7 @@ struct bpf_verifier_state_list {
 
 struct bpf_insn_aux_data {
  enum bpf_reg_type ptr_type; /* pointer type for load/store insns */
+ bool seen; /* this insn was processed by the verifier */
 };
 
 #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index fed5b346cd18..d34a572b3388 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2748,6 +2748,7 @@ static int do_check(struct bpf_verifier_env *env)
  if (err)
  return err;
 
+ env->insn_aux_data[insn_idx].seen = true;
  if (class == BPF_ALU || class == BPF_ALU64) {
  err = check_alu_op(env, insn);
  if (err)
@@ -2945,6 +2946,7 @@ static int do_check(struct bpf_verifier_env *env)
  return err;
 
  insn_idx++;
+ env->insn_aux_data[insn_idx].seen = true;
  } else {
  verbose("invalid BPF_LD mode\n");
  return -EINVAL;
@@ -3100,6 +3102,25 @@ static void convert_pseudo_ld_imm64(struct bpf_verifier_env *env)
  insn->src_reg = 0;
 }
 
+/* The verifier does more data flow analysis than llvm and will not explore
+ * branches that are dead at run time. Malicious programs can have dead code
+ * too. Therefore replace all dead at-run-time code with nops.
+ */
+static void sanitize_dead_code(struct bpf_verifier_env *env)
+{
+ struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+ struct bpf_insn nop = BPF_MOV64_REG(BPF_REG_0, BPF_REG_0);
+ struct bpf_insn *insn = env->prog->insnsi;
+ const int insn_cnt = env->prog->len;
+ int i;
+
+ for (i = 0; i < insn_cnt; i++) {
+ if (aux_data[i].seen)
+ continue;
+ memcpy(insn + i, &nop, sizeof(nop));
+ }
+}
+
 /* convert load instructions that access fields of 'struct __sk_buff'
  * into sequence of instructions that access fields of 'struct sk_buff'
  */
@@ -3259,6 +3280,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
  while (pop_stack(env, NULL) >= 0);
  free_states(env);
 
+ if (ret == 0)
+ sanitize_dead_code(env);
+
  if (ret == 0)
  /* program is valid, convert *(u32*)(ctx + off) accesses */
  ret = convert_ctx_accesses(env);
--
2.14.1



[PATCH 1/4][A] UBUNTU: SAUCE: bpf: reject out-of-bounds stack pointer calculation

Seth Forshee
From: Jann Horn <[hidden email]>

Reject programs that compute wildly out-of-bounds stack pointers.
Otherwise, pointers can be computed with an offset that doesn't fit into an
`int`, causing security issues in the stack memory access check (as well as
signed integer overflow during offset addition).

This is a fix specifically for the v4.9 stable tree because the mainline
code looks very different at this point.

Fixes: 7bca0a9702edf ("bpf: enhance verifier to understand stack pointer arithmetic")
Signed-off-by: Jann Horn <[hidden email]>
Acked-by: Daniel Borkmann <[hidden email]>
CVE-2017-17863
Link: https://www.spinics.net/lists/stable/msg206985.html
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3940019b9740..4321625fe32a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2122,10 +2122,28 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
    ((BPF_SRC(insn->code) == BPF_X &&
      regs[insn->src_reg].type == CONST_IMM) ||
     BPF_SRC(insn->code) == BPF_K)) {
- if (BPF_SRC(insn->code) == BPF_X)
+ if (BPF_SRC(insn->code) == BPF_X) {
+ /* check in case the register contains a big
+ * 64-bit value
+ */
+ if (regs[insn->src_reg].imm < -MAX_BPF_STACK ||
+    regs[insn->src_reg].imm > MAX_BPF_STACK) {
+ verbose("R%d value too big in R%d pointer arithmetic\n",
+ insn->src_reg, insn->dst_reg);
+ return -EACCES;
+ }
  dst_reg->imm += regs[insn->src_reg].imm;
- else
+ } else {
+ /* safe against overflow: addition of 32-bit
+ * numbers in 64-bit representation
+ */
  dst_reg->imm += insn->imm;
+ }
+ if (dst_reg->imm > 0 || dst_reg->imm < -MAX_BPF_STACK) {
+ verbose("R%d out-of-bounds pointer arithmetic\n",
+ insn->dst_reg);
+ return -EACCES;
+ }
  return 0;
  } else if (opcode == BPF_ADD &&
    BPF_CLASS(insn->code) == BPF_ALU64 &&
--
2.14.1



[PATCH 2/4][A] bpf: fix incorrect sign extension in check_alu_op()

Seth Forshee
From: Jann Horn <[hidden email]>

[ Upstream commit 95a762e2c8c942780948091f8f2a4f32fce1ac6f ]

Distinguish between
BPF_ALU64|BPF_MOV|BPF_K (load 32-bit immediate, sign-extended to 64-bit)
and BPF_ALU|BPF_MOV|BPF_K (load 32-bit immediate, zero-padded to 64-bit);
only perform sign extension in the first case.

Starting with v4.14, this is exploitable by unprivileged users as long as
the unprivileged_bpf_disabled sysctl isn't set.

Debian assigned CVE-2017-16995 for this issue.

v3:
 - add CVE number (Ben Hutchings)

Fixes: 484611357c19 ("bpf: allow access into map value arrays")
Signed-off-by: Jann Horn <[hidden email]>
Acked-by: Edward Cree <[hidden email]>
Signed-off-by: Alexei Starovoitov <[hidden email]>
Signed-off-by: Daniel Borkmann <[hidden email]>
CVE-2017-16995
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4321625fe32a..cdfa07a4ef27 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2048,12 +2048,19 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
  /* case: R = imm
  * remember the value we stored into this reg
  */
+ u64 imm;
+
+ if (BPF_CLASS(insn->code) == BPF_ALU64)
+ imm = insn->imm;
+ else
+ imm = (u32)insn->imm;
+
  regs[insn->dst_reg].type = CONST_IMM;
- regs[insn->dst_reg].imm = insn->imm;
+ regs[insn->dst_reg].imm = imm;
  regs[insn->dst_reg].id = 0;
- regs[insn->dst_reg].max_value = insn->imm;
- regs[insn->dst_reg].min_value = insn->imm;
- regs[insn->dst_reg].min_align = calc_align(insn->imm);
+ regs[insn->dst_reg].max_value = imm;
+ regs[insn->dst_reg].min_value = imm;
+ regs[insn->dst_reg].min_align = calc_align(imm);
  regs[insn->dst_reg].value_from_signed = false;
  }
 
--
2.14.1



[PATCH 3/4][A] UBUNTU: SAUCE: bpf/verifier: Fix states_equal() comparison of pointer and UNKNOWN

Seth Forshee
From: Ben Hutchings <[hidden email]>

An UNKNOWN_VALUE is not supposed to be derived from a pointer, unless
pointer leaks are allowed.  Therefore, states_equal() must not treat
a state with a pointer in a register as "equal" to a state with an
UNKNOWN_VALUE in that register.

This was fixed differently upstream, but the code around here was
largely rewritten in 4.14 by commit f1174f77b50c "bpf/verifier: rework
value tracking".  The bug can be detected by the bpf/verifier sub-test
"pointer/scalar confusion in state equality check (way 1)".

Signed-off-by: Ben Hutchings <[hidden email]>
Cc: Edward Cree <[hidden email]>
Cc: Jann Horn <[hidden email]>
Cc: Alexei Starovoitov <[hidden email]>
CVE-2017-17864
Link: https://anonscm.debian.org/cgit/kernel/linux.git/tree/debian/patches/bugfix/all/bpf-verifier-fix-states_equal-comparison-of-pointer-and-unknown.patch?h=stretch-security
Signed-off-by: Seth Forshee <[hidden email]>
---
 kernel/bpf/verifier.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cdfa07a4ef27..4ecb2e10c5e0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2980,11 +2980,12 @@ static bool states_equal(struct bpf_verifier_env *env,
 
  /* If we didn't map access then again we don't care about the
  * mismatched range values and it's ok if our old type was
- * UNKNOWN and we didn't go to a NOT_INIT'ed reg.
+ * UNKNOWN and we didn't go to a NOT_INIT'ed or pointer reg.
  */
  if (rold->type == NOT_INIT ||
     (!varlen_map_access && rold->type == UNKNOWN_VALUE &&
-     rcur->type != NOT_INIT))
+     rcur->type != NOT_INIT &&
+     !__is_pointer_value(env->allow_ptr_leaks, rcur)))
  continue;
 
  /* Don't care about the reg->id in this case. */
--
2.14.1



[PATCH 4/4][A] bpf: fix branch pruning logic

Seth Forshee
From: Alexei Starovoitov <[hidden email]>

When the verifier detects that a register contains a runtime constant
and it's compared with another constant, it will prune exploration
of the branch that is guaranteed not to be taken at runtime.
This is all correct, but a malicious program may be constructed
in such a way that it always has a constant comparison and
the other branch is never taken under any conditions.
In this case such a path through the program will not be explored
by the verifier. It won't be taken at run-time either, but since
all instructions are JITed the malicious program may cause JITs
to complain about using reserved fields, etc.
To fix the issue we have to track the instructions explored by
the verifier and sanitize instructions that are dead at run time
with NOPs. We cannot reject such dead code, since llvm generates
it for valid C code; llvm doesn't do as much data flow
analysis as the verifier does.

Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
Signed-off-by: Alexei Starovoitov <[hidden email]>
Acked-by: Daniel Borkmann <[hidden email]>
Signed-off-by: Daniel Borkmann <[hidden email]>
(cherry picked from commit c131187db2d3fa2f8bf32fdf4e9a4ef805168467)
CVE-2017-17862
Signed-off-by: Seth Forshee <[hidden email]>
---
 include/linux/bpf_verifier.h |  2 +-
 kernel/bpf/verifier.c        | 27 +++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 8e5d31f6faef..effeaa64257d 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -75,7 +75,7 @@ struct bpf_insn_aux_data {
  struct bpf_map *map_ptr; /* pointer for call insn into lookup_elem */
  };
  int ctx_field_size; /* the ctx field size for load insn, maybe 0 */
- int converted_op_size; /* the valid value width after perceived conversion */
+ bool seen; /* this insn was processed by the verifier */
 };
 
 #define MAX_USED_MAPS 64 /* max number of maps accessed by one eBPF program */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4ecb2e10c5e0..dab5ba668b97 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3152,6 +3152,7 @@ static int do_check(struct bpf_verifier_env *env)
  if (err)
  return err;
 
+ env->insn_aux_data[insn_idx].seen = true;
  if (class == BPF_ALU || class == BPF_ALU64) {
  err = check_alu_op(env, insn);
  if (err)
@@ -3342,6 +3343,7 @@ static int do_check(struct bpf_verifier_env *env)
  return err;
 
  insn_idx++;
+ env->insn_aux_data[insn_idx].seen = true;
  } else {
  verbose("invalid BPF_LD mode\n");
  return -EINVAL;
@@ -3523,6 +3525,7 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env, u32 prog_len,
  u32 off, u32 cnt)
 {
  struct bpf_insn_aux_data *new_data, *old_data = env->insn_aux_data;
+ int i;
 
  if (cnt == 1)
  return 0;
@@ -3532,6 +3535,8 @@ static int adjust_insn_aux_data(struct bpf_verifier_env *env, u32 prog_len,
  memcpy(new_data, old_data, sizeof(struct bpf_insn_aux_data) * off);
  memcpy(new_data + off + cnt - 1, old_data + off,
        sizeof(struct bpf_insn_aux_data) * (prog_len - off - cnt + 1));
+ for (i = off; i < off + cnt - 1; i++)
+ new_data[i].seen = true;
  env->insn_aux_data = new_data;
  vfree(old_data);
  return 0;
@@ -3550,6 +3555,25 @@ static struct bpf_prog *bpf_patch_insn_data(struct bpf_verifier_env *env, u32 of
  return new_prog;
 }
 
+/* The verifier does more data flow analysis than llvm and will not explore
+ * branches that are dead at run time. Malicious programs can have dead code
+ * too. Therefore replace all dead at-run-time code with nops.
+ */
+static void sanitize_dead_code(struct bpf_verifier_env *env)
+{
+ struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
+ struct bpf_insn nop = BPF_MOV64_REG(BPF_REG_0, BPF_REG_0);
+ struct bpf_insn *insn = env->prog->insnsi;
+ const int insn_cnt = env->prog->len;
+ int i;
+
+ for (i = 0; i < insn_cnt; i++) {
+ if (aux_data[i].seen)
+ continue;
+ memcpy(insn + i, &nop, sizeof(nop));
+ }
+}
+
 /* convert load instructions that access fields of 'struct __sk_buff'
  * into sequence of instructions that access fields of 'struct sk_buff'
  */
@@ -3841,6 +3865,9 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr)
  while (pop_stack(env, NULL) >= 0);
  free_states(env);
 
+ if (ret == 0)
+ sanitize_dead_code(env);
+
  if (ret == 0)
  /* program is valid, convert *(u32*)(ctx + off) accesses */
  ret = convert_ctx_accesses(env);
--
2.14.1



Re: [PATCH][X/Z/A] CVE fixes for eBPF

Seth Forshee
On Thu, Jan 04, 2018 at 08:01:14AM -0600, Seth Forshee wrote:

> The following patches fix problems with eBPF in xenial, zesty, and
> artful; the issues have been assigned the following CVE numbers:
>
>  CVE-2017-16995
>  CVE-2017-17862
>  CVE-2017-17863
>  CVE-2017-17864
>
> These are taken from a larger set of vulnerabilities, several of which
> were introduced only in 4.14. Only the first two fixes are applicable to
> xenial, with some extra backporting needed to make them apply. The
> patches for CVE-2017-17863 and CVE-2017-17864 are based on the fixes
> from 4.9 stable and Debian respectively, as the code has diverged
> significantly upstream.

I should also have mentioned: I've tested these against all of the
proof-of-concept exploits and regression tested them using the kernel's
bpf selftests.
