[F][PATCH 0/4] paes self test (LP: 1854948)


[F][PATCH 0/4] paes self test (LP: 1854948)

frank.heimes
Buglink: https://bugs.launchpad.net/bugs/1854948

This adds a self-test to the paes cipher in the paes_s390 module.
The self-test allows the paes cipher to be loaded and used when the kernel FIPS flag is switched on.

Harald Freudenberger (3):
  s390/pkey: Add support for key blob with clear key value
  s390/crypto: Rework on paes implementation
  s390/crypto: enable clear key values for paes ciphers

Markus Elfring (1):
  s390/pkey: use memdup_user() to simplify code

 arch/s390/crypto/paes_s390.c         | 241 +++++++++++++++++++--------
 drivers/s390/crypto/pkey_api.c       |  88 ++++++----
 drivers/s390/crypto/zcrypt_ccamisc.h |   1 +
 3 files changed, 234 insertions(+), 96 deletions(-)

--
2.25.0


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

[F][PATCH 1/4] s390/pkey: use memdup_user() to simplify code

frank.heimes
From: Markus Elfring <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1854948

Generated by: scripts/coccinelle/api/memdup_user.cocci

Link: http://lkml.kernel.org/r/aca044e8-e4b2-eda8-d724-b08772a44ed9@...
[[hidden email]: use ==0 instead of <=0 for a size_t variable]
[[hidden email]: split bugfix into separate patch; shorten changelog]
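For context, memdup_user() folds the open-coded allocate/copy/cleanup sequence into a single call. A minimal userspace sketch of the pattern (names and error handling simplified; this is not the kernel implementation, where copy_from_user() replaces memcpy() and failures are reported via ERR_PTR()):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Userspace sketch of the memdup_user() pattern: allocate a buffer,
 * copy 'len' bytes from the source, return NULL on failure. The kernel
 * helper does the same with kmalloc() + copy_from_user() and returns
 * ERR_PTR(-ENOMEM) / ERR_PTR(-EFAULT) on error. */
static void *memdup_sketch(const void *src, size_t len)
{
	void *p = malloc(len);

	if (!p)
		return NULL;
	memcpy(p, src, len);
	return p;
}
```

The patch below replaces two such open-coded copies in pkey_api.c with the real helper.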
Signed-off-by: Markus Elfring <[hidden email]>
Signed-off-by: Christian Borntraeger <[hidden email]>
Signed-off-by: Heiko Carstens <[hidden email]>
Signed-off-by: Vasily Gorbik <[hidden email]>
(cherry picked from commit 8b57e7c852fc58a62e668a83c0fa8d9246131803)
Signed-off-by: Frank Heimes <[hidden email]>
---
 drivers/s390/crypto/pkey_api.c | 28 ++++------------------------
 1 file changed, 4 insertions(+), 24 deletions(-)

diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
index e17fac20127e..d78d77686d7b 100644
--- a/drivers/s390/crypto/pkey_api.c
+++ b/drivers/s390/crypto/pkey_api.c
@@ -715,38 +715,18 @@ static int pkey_apqns4keytype(enum pkey_key_type ktype,
 
 static void *_copy_key_from_user(void __user *ukey, size_t keylen)
 {
- void *kkey;
-
  if (!ukey || keylen < MINKEYBLOBSIZE || keylen > KEYBLOBBUFSIZE)
  return ERR_PTR(-EINVAL);
- kkey = kmalloc(keylen, GFP_KERNEL);
- if (!kkey)
- return ERR_PTR(-ENOMEM);
- if (copy_from_user(kkey, ukey, keylen)) {
- kfree(kkey);
- return ERR_PTR(-EFAULT);
- }
 
- return kkey;
+ return memdup_user(ukey, keylen);
 }
 
 static void *_copy_apqns_from_user(void __user *uapqns, size_t nr_apqns)
 {
- void *kapqns = NULL;
- size_t nbytes;
-
- if (uapqns && nr_apqns > 0) {
- nbytes = nr_apqns * sizeof(struct pkey_apqn);
- kapqns = kmalloc(nbytes, GFP_KERNEL);
- if (!kapqns)
- return ERR_PTR(-ENOMEM);
- if (copy_from_user(kapqns, uapqns, nbytes)) {
- kfree(kapqns);
- return ERR_PTR(-EFAULT);
- }
- }
+ if (!uapqns || nr_apqns == 0)
+ return NULL;
 
- return kapqns;
+ return memdup_user(uapqns, nr_apqns * sizeof(struct pkey_apqn));
 }
 
 static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd,
--
2.25.0



[F][PATCH 2/4] s390/pkey: Add support for key blob with clear key value

frank.heimes
In reply to this post by frank.heimes
From: Harald Freudenberger <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1854948

This patch adds support for a new key blob format to the
pkey kernel module. The new key blob comprises a clear
key value together with key type information.

The implementation tries to derive a protected key
from the clear key value inside the blob in two ways:
1) via the PCKMO instruction. This may fail because the
   LPAR profile may disable this path.
2) by generating a CCA AES secure data key with exactly
   the clear key value. This requires a working crypto
   card in CCA coprocessor mode. A protected key is then
   derived from the CCA AES secure key, again with the
   help of a working crypto card in CCA mode.
If both ways fail, the transformation of the clear key
blob into a protected key fails. For the PAES cipher
this results in a failure at setkey() invocation.

A clear key value exposed in main memory is a security
risk. The intention of this new 'clear key blob' support
for pkey is to provide self-tests for the PAES cipher key
implementation. These known answer tests obviously need
to run with well-known key values. With the clear key
blob format there is a way to provide known answer tests
together with a pkey clear key blob for the in-kernel
self-tests done at cipher registration.
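The layout of this clear key blob (struct clearaeskeytoken in the patch below) can be mirrored and size-checked in userspace; the field names are taken from the diff, while the packed-struct checks are my own:

```c
#include <stddef.h>
#include <stdint.h>

/* Mirror of the patch's clear key token (TOKTYPE_NON_CCA type 0x00,
 * version 0x02 = TOKVER_CLEAR_KEY): a 16-byte header followed by the
 * clear key bytes themselves. */
struct clearaeskeytoken {
	uint8_t  type;       /* 0x00 for PAES specific key tokens */
	uint8_t  res0[3];
	uint8_t  version;    /* 0x02 for clear AES key token */
	uint8_t  res1[3];
	uint32_t keytype;    /* one of the PKEY_KEYTYPE values */
	uint32_t len;        /* bytes actually stored in clearkey[] */
	uint8_t  clearkey[]; /* clear key value follows the header */
} __attribute__((packed));

/* Total blob size for a given clear key length. */
static size_t clearkeytoken_size(uint32_t keylen)
{
	return sizeof(struct clearaeskeytoken) + keylen;
}
```

This matches the keylen check in the patch: a valid blob is exactly the header plus t->len key bytes.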

Signed-off-by: Harald Freudenberger <[hidden email]>
Signed-off-by: Vasily Gorbik <[hidden email]>
(cherry picked from commit 888edbc48857c7189592fb0be3ab09994247199c)
Signed-off-by: Frank Heimes <[hidden email]>
---
 drivers/s390/crypto/pkey_api.c       | 60 +++++++++++++++++++++++++---
 drivers/s390/crypto/zcrypt_ccamisc.h |  1 +
 2 files changed, 55 insertions(+), 6 deletions(-)

diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c
index d78d77686d7b..3ecb6a255a0c 100644
--- a/drivers/s390/crypto/pkey_api.c
+++ b/drivers/s390/crypto/pkey_api.c
@@ -71,6 +71,17 @@ struct protaeskeytoken {
  u8  protkey[MAXPROTKEYSIZE]; /* the protected key blob */
 } __packed;
 
+/* inside view of a clear key token (type 0x00 version 0x02) */
+struct clearaeskeytoken {
+ u8  type; /* 0x00 for PAES specific key tokens */
+ u8  res0[3];
+ u8  version; /* 0x02 for clear AES key token */
+ u8  res1[3];
+ u32 keytype; /* key type, one of the PKEY_KEYTYPE values */
+ u32 len; /* bytes actually stored in clearkey[] */
+ u8  clearkey[0]; /* clear key value */
+} __packed;
+
 /*
  * Create a protected key from a clear key value.
  */
@@ -305,26 +316,63 @@ static int pkey_verifyprotkey(const struct pkey_protkey *protkey)
 static int pkey_nonccatok2pkey(const u8 *key, u32 keylen,
        struct pkey_protkey *protkey)
 {
+ int rc = -EINVAL;
  struct keytoken_header *hdr = (struct keytoken_header *)key;
- struct protaeskeytoken *t;
 
  switch (hdr->version) {
- case TOKVER_PROTECTED_KEY:
- if (keylen != sizeof(struct protaeskeytoken))
- return -EINVAL;
+ case TOKVER_PROTECTED_KEY: {
+ struct protaeskeytoken *t;
 
+ if (keylen != sizeof(struct protaeskeytoken))
+ goto out;
  t = (struct protaeskeytoken *)key;
  protkey->len = t->len;
  protkey->type = t->keytype;
  memcpy(protkey->protkey, t->protkey,
        sizeof(protkey->protkey));
+ rc = pkey_verifyprotkey(protkey);
+ break;
+ }
+ case TOKVER_CLEAR_KEY: {
+ struct clearaeskeytoken *t;
+ struct pkey_clrkey ckey;
+ struct pkey_seckey skey;
 
- return pkey_verifyprotkey(protkey);
+ if (keylen < sizeof(struct clearaeskeytoken))
+ goto out;
+ t = (struct clearaeskeytoken *)key;
+ if (keylen != sizeof(*t) + t->len)
+ goto out;
+ if ((t->keytype == PKEY_KEYTYPE_AES_128 && t->len == 16)
+    || (t->keytype == PKEY_KEYTYPE_AES_192 && t->len == 24)
+    || (t->keytype == PKEY_KEYTYPE_AES_256 && t->len == 32))
+ memcpy(ckey.clrkey, t->clearkey, t->len);
+ else
+ goto out;
+ /* try direct way with the PCKMO instruction */
+ rc = pkey_clr2protkey(t->keytype, &ckey, protkey);
+ if (rc == 0)
+ break;
+ /* PCKMO failed, so try the CCA secure key way */
+ rc = cca_clr2seckey(0xFFFF, 0xFFFF, t->keytype,
+    ckey.clrkey, skey.seckey);
+ if (rc == 0)
+ rc = pkey_skey2pkey(skey.seckey, protkey);
+ /* now we should really have an protected key */
+ if (rc == 0)
+ break;
+ DEBUG_ERR("%s unable to build protected key from clear",
+  __func__);
+ break;
+ }
  default:
  DEBUG_ERR("%s unknown/unsupported non-CCA token version %d\n",
   __func__, hdr->version);
- return -EINVAL;
+ rc = -EINVAL;
  }
+
+out:
+ return rc;
 }
 
 /*
diff --git a/drivers/s390/crypto/zcrypt_ccamisc.h b/drivers/s390/crypto/zcrypt_ccamisc.h
index 77b6cc7b8f82..3a9876d5ab0e 100644
--- a/drivers/s390/crypto/zcrypt_ccamisc.h
+++ b/drivers/s390/crypto/zcrypt_ccamisc.h
@@ -19,6 +19,7 @@
 
 /* For TOKTYPE_NON_CCA: */
 #define TOKVER_PROTECTED_KEY 0x01 /* Protected key token */
+#define TOKVER_CLEAR_KEY 0x02 /* Clear key token */
 
 /* For TOKTYPE_CCA_INTERNAL: */
 #define TOKVER_CCA_AES 0x04 /* CCA AES key token */
--
2.25.0



[F][PATCH 3/4] (upstream) s390/crypto: Rework on paes implementation

frank.heimes
In reply to this post by frank.heimes
From: Harald Freudenberger <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1854948

A very minor finding within paes ctr: when the cpacf instruction
returns with only part of the data en/decrypted, walk_done() was
mistakenly called with the all-data counter. Please note this can only
happen when kmctr returns because the protected key became invalid in
the middle of the operation, and that only occurs with suspend and
resume on a system with a different effective wrapping key.

Eric Biggers mentioned that the context struct within the tfm struct
may be shared among multiple kernel threads. So this rework uses a
spinlock per context to protect the read and write of the protected
key blob value. The en/decrypt functions copy the protected key(s)
into a param struct at the beginning and no longer work with the
protected key within the context. If the protected key in the param
struct becomes invalid, the key material is converted to protected
key(s) again and the context is updated under the spinlock. Race
conditions are still possible and may result in writing the very same
protected key value more than once, so the spinlock only needs to
make sure the protected key(s) within the context are updated
consistently.

The ctr page is now locked by a mutex instead of a spinlock. A similar
patch went into the aes_s390 code as a result of a complaint about
"sleeping function called from invalid context at ...algapi.h". See
commit 1c2c7029c008 ("s390/crypto: fix possible sleep during spinlock
aquired") for more.

During testing with instrumented code, another issue revealed itself
in the xts en/decrypt function: the retry cleared the running iv value
and thus led to wrongly en/decrypted data.

Tested and verified with additional testcases via AF_ALG interface and
additional selftests within the kernel (which will be made available
as soon as possible).
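The copy-then-use pattern described above can be sketched in userspace C. This is a simplified model, not the patch itself: a pthread mutex stands in for the kernel's spin_lock_bh(), and the struct and key size are placeholders.

```c
#include <assert.h>
#include <pthread.h>
#include <string.h>

#define MAXPROTKEYSIZE 64	/* stand-in; the real constant comes from pkey.h */

/* Sketch of the per-context locking the patch introduces: the cached
 * protected key is only read or written under the lock, and each crypto
 * operation works on a private snapshot in its param block. */
struct paes_ctx_sketch {
	unsigned char pk[MAXPROTKEYSIZE];
	pthread_mutex_t pk_lock;
};

/* Take a consistent copy of the cached key into the caller's param block. */
static void snapshot_protkey(struct paes_ctx_sketch *ctx,
			     unsigned char param_key[MAXPROTKEYSIZE])
{
	pthread_mutex_lock(&ctx->pk_lock);
	memcpy(param_key, ctx->pk, MAXPROTKEYSIZE);
	pthread_mutex_unlock(&ctx->pk_lock);
}

/* Reconvert path: writing the same key value twice is harmless; the lock
 * only has to keep the cached value internally consistent. */
static void update_protkey(struct paes_ctx_sketch *ctx,
			   const unsigned char *newpk)
{
	pthread_mutex_lock(&ctx->pk_lock);
	memcpy(ctx->pk, newpk, MAXPROTKEYSIZE);
	pthread_mutex_unlock(&ctx->pk_lock);
}
```

When the CPACF instruction signals an invalid key mid-operation, the real code reconverts the key blob, calls the equivalent of update_protkey(), and refreshes its local snapshot before retrying.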

Reported-by: Eric Biggers <[hidden email]>
Signed-off-by: Harald Freudenberger <[hidden email]>
Signed-off-by: Vasily Gorbik <[hidden email]>
OriginalAuthor: Harald Freudenberger <[hidden email]>
(backported from commit 6f3196b74d64fe4b0a51cefa6f2f80f7f55bcf49)
Signed-off-by: Frank Heimes <[hidden email]>
---
 arch/s390/crypto/paes_s390.c | 170 ++++++++++++++++++++++++++---------
 1 file changed, 126 insertions(+), 44 deletions(-)

diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 6184dceed340..151347c26995 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -5,7 +5,7 @@
  * s390 implementation of the AES Cipher Algorithm with protected keys.
  *
  * s390 Version:
- *   Copyright IBM Corp. 2017,2019
+ *   Copyright IBM Corp. 2017,2020
  *   Author(s): Martin Schwidefsky <[hidden email]>
  * Harald Freudenberger <[hidden email]>
  */
@@ -20,6 +20,7 @@
 #include <linux/module.h>
 #include <linux/cpufeature.h>
 #include <linux/init.h>
+#include <linux/mutex.h>
 #include <linux/spinlock.h>
 #include <crypto/xts.h>
 #include <asm/cpacf.h>
@@ -35,7 +36,7 @@
 #define PAES_MAX_KEYSIZE 256
 
 static u8 *ctrblk;
-static DEFINE_SPINLOCK(ctrblk_lock);
+static DEFINE_MUTEX(ctrblk_lock);
 
 static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
 
@@ -81,16 +82,18 @@ static inline void _free_kb_keybuf(struct key_blob *kb)
 struct s390_paes_ctx {
  struct key_blob kb;
  struct pkey_protkey pk;
+ spinlock_t pk_lock;
  unsigned long fc;
 };
 
 struct s390_pxts_ctx {
  struct key_blob kb[2];
  struct pkey_protkey pk[2];
+ spinlock_t pk_lock;
  unsigned long fc;
 };
 
-static inline int __paes_convert_key(struct key_blob *kb,
+static inline int __paes_keyblob2pkey(struct key_blob *kb,
      struct pkey_protkey *pk)
 {
  int i, ret;
@@ -105,22 +108,18 @@ static inline int __paes_convert_key(struct key_blob *kb,
  return ret;
 }
 
-static int __paes_set_key(struct s390_paes_ctx *ctx)
+static inline int __paes_convert_key(struct s390_paes_ctx *ctx)
 {
- unsigned long fc;
+ struct pkey_protkey pkey;
 
- if (__paes_convert_key(&ctx->kb, &ctx->pk))
+ if (__paes_keyblob2pkey(&ctx->kb, &pkey))
  return -EINVAL;
 
- /* Pick the correct function code based on the protected key type */
- fc = (ctx->pk.type == PKEY_KEYTYPE_AES_128) ? CPACF_KM_PAES_128 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_192) ? CPACF_KM_PAES_192 :
- (ctx->pk.type == PKEY_KEYTYPE_AES_256) ? CPACF_KM_PAES_256 : 0;
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(&ctx->pk, &pkey, sizeof(pkey));
+ spin_unlock_bh(&ctx->pk_lock);
 
- /* Check if the function code is available */
- ctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
-
- return ctx->fc ? 0 : -EINVAL;
+ return 0;
 }
 
 static int ecb_paes_init(struct crypto_tfm *tfm)
@@ -128,6 +127,7 @@ static int ecb_paes_init(struct crypto_tfm *tfm)
  struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
 
  ctx->kb.key = NULL;
+ spin_lock_init(&ctx->pk_lock);
 
  return 0;
 }
@@ -139,6 +139,24 @@ static void ecb_paes_exit(struct crypto_tfm *tfm)
  _free_kb_keybuf(&ctx->kb);
 }
 
+static inline int __ecb_paes_set_key(struct s390_paes_ctx *ctx)
+{
+ unsigned long fc;
+
+ if (__paes_convert_key(ctx))
+ return -EINVAL;
+
+ /* Pick the correct function code based on the protected key type */
+ fc = (ctx->pk.type == PKEY_KEYTYPE_AES_128) ? CPACF_KM_PAES_128 :
+ (ctx->pk.type == PKEY_KEYTYPE_AES_192) ? CPACF_KM_PAES_192 :
+ (ctx->pk.type == PKEY_KEYTYPE_AES_256) ? CPACF_KM_PAES_256 : 0;
+
+ /* Check if the function code is available */
+ ctx->fc = (fc && cpacf_test_func(&km_functions, fc)) ? fc : 0;
+
+ return ctx->fc ? 0 : -EINVAL;
+}
+
 static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
     unsigned int key_len)
 {
@@ -150,9 +168,10 @@ static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
  if (rc)
  return rc;
 
- if (__paes_set_key(ctx)) {
+ if (__ecb_paes_set_key(ctx)) {
  tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
  return -EINVAL;
+
  }
  return 0;
 }
@@ -164,18 +183,31 @@ static int ecb_paes_crypt(struct blkcipher_desc *desc,
  struct s390_paes_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
  unsigned int nbytes, n, k;
  int ret;
+ struct {
+ u8 key[MAXPROTKEYSIZE];
+ } param;
 
  ret = blkcipher_walk_virt(desc, walk);
+ if (ret)
+ return ret;
+
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+
  while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
  /* only use complete blocks */
  n = nbytes & ~(AES_BLOCK_SIZE - 1);
- k = cpacf_km(ctx->fc | modifier, ctx->pk.protkey,
+ k = cpacf_km(ctx->fc | modifier, &param,
      walk->dst.virt.addr, walk->src.virt.addr, n);
  if (k)
  ret = blkcipher_walk_done(desc, walk, nbytes - k);
  if (k < n) {
- if (__paes_set_key(ctx) != 0)
+ if (__paes_convert_key(ctx))
  return blkcipher_walk_done(desc, walk, -EIO);
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+ spin_unlock_bh(&ctx->pk_lock);
  }
  }
  return ret;
@@ -229,6 +261,7 @@ static int cbc_paes_init(struct crypto_tfm *tfm)
  struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
 
  ctx->kb.key = NULL;
+ spin_lock_init(&ctx->pk_lock);
 
  return 0;
 }
@@ -240,11 +273,11 @@ static void cbc_paes_exit(struct crypto_tfm *tfm)
  _free_kb_keybuf(&ctx->kb);
 }
 
-static int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
+static inline int __cbc_paes_set_key(struct s390_paes_ctx *ctx)
 {
  unsigned long fc;
 
- if (__paes_convert_key(&ctx->kb, &ctx->pk))
+ if (__paes_convert_key(ctx))
  return -EINVAL;
 
  /* Pick the correct function code based on the protected key type */
@@ -288,22 +321,31 @@ static int cbc_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
  } param;
 
  ret = blkcipher_walk_virt(desc, walk);
+ if (ret)
+ return ret;
+
  memcpy(param.iv, walk->iv, AES_BLOCK_SIZE);
+ spin_lock_bh(&ctx->pk_lock);
  memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+
  while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
  /* only use complete blocks */
  n = nbytes & ~(AES_BLOCK_SIZE - 1);
  k = cpacf_kmc(ctx->fc | modifier, &param,
       walk->dst.virt.addr, walk->src.virt.addr, n);
- if (k)
+ if (k) {
+ memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
  ret = blkcipher_walk_done(desc, walk, nbytes - k);
+ }
  if (k < n) {
- if (__cbc_paes_set_key(ctx) != 0)
+ if (__paes_convert_key(ctx))
  return blkcipher_walk_done(desc, walk, -EIO);
+ spin_lock_bh(&ctx->pk_lock);
  memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+ spin_unlock_bh(&ctx->pk_lock);
  }
  }
- memcpy(walk->iv, param.iv, AES_BLOCK_SIZE);
  return ret;
 }
 
@@ -357,6 +399,7 @@ static int xts_paes_init(struct crypto_tfm *tfm)
 
  ctx->kb[0].key = NULL;
  ctx->kb[1].key = NULL;
+ spin_lock_init(&ctx->pk_lock);
 
  return 0;
 }
@@ -369,12 +412,27 @@ static void xts_paes_exit(struct crypto_tfm *tfm)
  _free_kb_keybuf(&ctx->kb[1]);
 }
 
-static int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
+static inline int __xts_paes_convert_key(struct s390_pxts_ctx *ctx)
+{
+ struct pkey_protkey pkey0, pkey1;
+
+ if (__paes_keyblob2pkey(&ctx->kb[0], &pkey0) ||
+    __paes_keyblob2pkey(&ctx->kb[1], &pkey1))
+ return -EINVAL;
+
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(&ctx->pk[0], &pkey0, sizeof(pkey0));
+ memcpy(&ctx->pk[1], &pkey1, sizeof(pkey1));
+ spin_unlock_bh(&ctx->pk_lock);
+
+ return 0;
+}
+
+static inline int __xts_paes_set_key(struct s390_pxts_ctx *ctx)
 {
  unsigned long fc;
 
- if (__paes_convert_key(&ctx->kb[0], &ctx->pk[0]) ||
-    __paes_convert_key(&ctx->kb[1], &ctx->pk[1]))
+ if (__xts_paes_convert_key(ctx))
  return -EINVAL;
 
  if (ctx->pk[0].type != ctx->pk[1].type)
@@ -449,15 +507,19 @@ static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
  } xts_param;
 
  ret = blkcipher_walk_virt(desc, walk);
+ if (ret)
+ return ret;
+
  keylen = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 48 : 64;
  offset = (ctx->pk[0].type == PKEY_KEYTYPE_AES_128) ? 16 : 0;
-retry:
+
  memset(&pcc_param, 0, sizeof(pcc_param));
  memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
+ spin_lock_bh(&ctx->pk_lock);
  memcpy(pcc_param.key + offset, ctx->pk[1].protkey, keylen);
- cpacf_pcc(ctx->fc, pcc_param.key + offset);
-
  memcpy(xts_param.key + offset, ctx->pk[0].protkey, keylen);
+ spin_unlock_bh(&ctx->pk_lock);
+ cpacf_pcc(ctx->fc, pcc_param.key + offset);
  memcpy(xts_param.init, pcc_param.xts, 16);
 
  while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
@@ -468,11 +530,15 @@ static int xts_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
  if (k)
  ret = blkcipher_walk_done(desc, walk, nbytes - k);
  if (k < n) {
- if (__xts_paes_set_key(ctx) != 0)
+ if (__xts_paes_convert_key(ctx))
  return blkcipher_walk_done(desc, walk, -EIO);
- goto retry;
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(xts_param.key + offset,
+       ctx->pk[0].protkey, keylen);
+ spin_unlock_bh(&ctx->pk_lock);
  }
  }
+
  return ret;
 }
 
@@ -525,6 +591,7 @@ static int ctr_paes_init(struct crypto_tfm *tfm)
  struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
 
  ctx->kb.key = NULL;
+ spin_lock_init(&ctx->pk_lock);
 
  return 0;
 }
@@ -536,11 +603,11 @@ static void ctr_paes_exit(struct crypto_tfm *tfm)
  _free_kb_keybuf(&ctx->kb);
 }
 
-static int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
+static inline int __ctr_paes_set_key(struct s390_paes_ctx *ctx)
 {
  unsigned long fc;
 
- if (__paes_convert_key(&ctx->kb, &ctx->pk))
+ if (__paes_convert_key(ctx))
  return -EINVAL;
 
  /* Pick the correct function code based on the protected key type */
@@ -595,47 +662,62 @@ static int ctr_paes_crypt(struct blkcipher_desc *desc, unsigned long modifier,
  u8 buf[AES_BLOCK_SIZE], *ctrptr;
  unsigned int nbytes, n, k;
  int ret, locked;
-
- locked = spin_trylock(&ctrblk_lock);
+ struct {
+ u8 key[MAXPROTKEYSIZE];
+ } param;
 
  ret = blkcipher_walk_virt_block(desc, walk, AES_BLOCK_SIZE);
+ if (ret)
+ return ret;
+
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+ spin_unlock_bh(&ctx->pk_lock);
+
+ locked = mutex_trylock(&ctrblk_lock);
+
+
  while ((nbytes = walk->nbytes) >= AES_BLOCK_SIZE) {
  n = AES_BLOCK_SIZE;
  if (nbytes >= 2*AES_BLOCK_SIZE && locked)
  n = __ctrblk_init(ctrblk, walk->iv, nbytes);
  ctrptr = (n > AES_BLOCK_SIZE) ? ctrblk : walk->iv;
- k = cpacf_kmctr(ctx->fc | modifier, ctx->pk.protkey,
- walk->dst.virt.addr, walk->src.virt.addr,
- n, ctrptr);
+ k = cpacf_kmctr(ctx->fc | modifier, &param, walk->dst.virt.addr,
+ walk->src.virt.addr, n, ctrptr);
  if (k) {
  if (ctrptr == ctrblk)
  memcpy(walk->iv, ctrptr + k - AES_BLOCK_SIZE,
        AES_BLOCK_SIZE);
  crypto_inc(walk->iv, AES_BLOCK_SIZE);
- ret = blkcipher_walk_done(desc, walk, nbytes - n);
+ ret = blkcipher_walk_done(desc, walk, nbytes - k);
  }
  if (k < n) {
- if (__ctr_paes_set_key(ctx) != 0) {
+ if (__paes_convert_key(ctx)) {
  if (locked)
- spin_unlock(&ctrblk_lock);
+ mutex_unlock(&ctrblk_lock);
  return blkcipher_walk_done(desc, walk, -EIO);
  }
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+ spin_unlock_bh(&ctx->pk_lock);
  }
  }
  if (locked)
- spin_unlock(&ctrblk_lock);
+ mutex_unlock(&ctrblk_lock);
  /*
  * final block may be < AES_BLOCK_SIZE, copy only nbytes
  */
  if (nbytes) {
  while (1) {
- if (cpacf_kmctr(ctx->fc | modifier,
- ctx->pk.protkey, buf,
+ if (cpacf_kmctr(ctx->fc | modifier, &param, buf,
  walk->src.virt.addr, AES_BLOCK_SIZE,
  walk->iv) == AES_BLOCK_SIZE)
  break;
- if (__ctr_paes_set_key(ctx) != 0)
+ if (__paes_convert_key(ctx))
  return blkcipher_walk_done(desc, walk, -EIO);
+ spin_lock_bh(&ctx->pk_lock);
+ memcpy(param.key, ctx->pk.protkey, MAXPROTKEYSIZE);
+ spin_unlock_bh(&ctx->pk_lock);
  }
  memcpy(walk->dst.virt.addr, buf, nbytes);
  crypto_inc(walk->iv, AES_BLOCK_SIZE);
--
2.25.0



[F][PATCH 4/4] (upstream) s390/crypto: enable clear key values for paes ciphers

frank.heimes
In reply to this post by frank.heimes
From: Harald Freudenberger <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1854948

With this patch the paes ciphers also accept AES clear key values of
16, 24 or 32 bytes. The key value is internally rearranged into a
paes clear key token so that the pkey kernel module recognizes and
handles this key material as a source for protected keys.

Using clear key material as a source for protected keys is a security
risk, as the raw key material is kept in memory. However, it allows
the AES selftests provided with the crypto testmgr to run during
registration of the paes ciphers.
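The rearrangement maps the clear key length directly to a PKEY key type: the `(keylen - 8) >> 3` expression used in the patch below yields 1, 2 and 3 for 16-, 24- and 32-byte keys, matching the PKEY_KEYTYPE_AES_* constants (values reproduced here from the s390 pkey uapi header):

```c
#include <assert.h>
#include <stdint.h>

/* PKEY key type constants as defined in arch/s390/include/uapi/asm/pkey.h. */
enum pkey_key_type_sketch {
	PKEY_KEYTYPE_AES_128 = 1,
	PKEY_KEYTYPE_AES_192 = 2,
	PKEY_KEYTYPE_AES_256 = 3,
};

/* The patch derives the token's key type from the clear key length:
 * 16 -> AES_128, 24 -> AES_192, 32 -> AES_256. */
static uint32_t clearkey_keytype(unsigned int keylen)
{
	return (keylen - 8) >> 3;
}
```

Any other key length falls through to the existing path, where the raw material is handed to the pkey module unchanged.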

Signed-off-by: Harald Freudenberger <[hidden email]>
Signed-off-by: Vasily Gorbik <[hidden email]>
OriginalAuthor: Harald Freudenberger <[hidden email]>
(backported from commit 7f820d053948ca82bd8221b1df3d676b9c93a494)
Signed-off-by: Frank Heimes <[hidden email]>
---
 arch/s390/crypto/paes_s390.c | 71 +++++++++++++++++++++++++-----------
 1 file changed, 49 insertions(+), 22 deletions(-)

diff --git a/arch/s390/crypto/paes_s390.c b/arch/s390/crypto/paes_s390.c
index 151347c26995..4662c4dcef90 100644
--- a/arch/s390/crypto/paes_s390.c
+++ b/arch/s390/crypto/paes_s390.c
@@ -32,7 +32,7 @@
  * is called. As paes can handle different kinds of key blobs
  * and padding is also possible, the limits need to be generous.
  */
-#define PAES_MIN_KEYSIZE 64
+#define PAES_MIN_KEYSIZE 16
 #define PAES_MAX_KEYSIZE 256
 
 static u8 *ctrblk;
@@ -53,19 +53,46 @@ struct key_blob {
  unsigned int keylen;
 };
 
-static inline int _copy_key_to_kb(struct key_blob *kb,
-  const u8 *key,
-  unsigned int keylen)
-{
- if (keylen <= sizeof(kb->keybuf))
+static inline int _key_to_kb(struct key_blob *kb,
+     const u8 *key,
+     unsigned int keylen)
+{
+ struct clearkey_header {
+ u8  type;
+ u8  res0[3];
+ u8  version;
+ u8  res1[3];
+ u32 keytype;
+ u32 len;
+ } __packed * h;
+
+ switch (keylen) {
+ case 16:
+ case 24:
+ case 32:
+ /* clear key value, prepare pkey clear key token in keybuf */
+ memset(kb->keybuf, 0, sizeof(kb->keybuf));
+ h = (struct clearkey_header *) kb->keybuf;
+ h->version = 0x02; /* TOKVER_CLEAR_KEY */
+ h->keytype = (keylen - 8) >> 3;
+ h->len = keylen;
+ memcpy(kb->keybuf + sizeof(*h), key, keylen);
+ kb->keylen = sizeof(*h) + keylen;
  kb->key = kb->keybuf;
- else {
- kb->key = kmalloc(keylen, GFP_KERNEL);
- if (!kb->key)
- return -ENOMEM;
+ break;
+ default:
+ /* other key material, let pkey handle this */
+ if (keylen <= sizeof(kb->keybuf))
+ kb->key = kb->keybuf;
+ else {
+ kb->key = kmalloc(keylen, GFP_KERNEL);
+ if (!kb->key)
+ return -ENOMEM;
+ }
+ memcpy(kb->key, key, keylen);
+ kb->keylen = keylen;
+ break;
  }
- memcpy(kb->key, key, keylen);
- kb->keylen = keylen;
 
  return 0;
 }
@@ -164,7 +191,7 @@ static int ecb_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
  struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
 
  _free_kb_keybuf(&ctx->kb);
- rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
+ rc = _key_to_kb(&ctx->kb, in_key, key_len);
  if (rc)
  return rc;
 
@@ -298,7 +325,7 @@ static int cbc_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
  struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
 
  _free_kb_keybuf(&ctx->kb);
- rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
+ rc = _key_to_kb(&ctx->kb, in_key, key_len);
  if (rc)
  return rc;
 
@@ -464,10 +491,10 @@ static int xts_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 
  _free_kb_keybuf(&ctx->kb[0]);
  _free_kb_keybuf(&ctx->kb[1]);
- rc = _copy_key_to_kb(&ctx->kb[0], in_key, key_len);
+ rc = _key_to_kb(&ctx->kb[0], in_key, key_len);
  if (rc)
  return rc;
- rc = _copy_key_to_kb(&ctx->kb[1], in_key + key_len, key_len);
+ rc = _key_to_kb(&ctx->kb[1], in_key + key_len, key_len);
  if (rc)
  return rc;
 
@@ -629,7 +656,7 @@ static int ctr_paes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
  struct s390_paes_ctx *ctx = crypto_tfm_ctx(tfm);
 
  _free_kb_keybuf(&ctx->kb);
- rc = _copy_key_to_kb(&ctx->kb, in_key, key_len);
+ rc = _key_to_kb(&ctx->kb, in_key, key_len);
  if (rc)
  return rc;
 
@@ -779,12 +806,12 @@ static inline void __crypto_unregister_alg(struct crypto_alg *alg)
 
 static void paes_s390_fini(void)
 {
- if (ctrblk)
- free_page((unsigned long) ctrblk);
  __crypto_unregister_alg(&ctr_paes_alg);
  __crypto_unregister_alg(&xts_paes_alg);
  __crypto_unregister_alg(&cbc_paes_alg);
  __crypto_unregister_alg(&ecb_paes_alg);
+ if (ctrblk)
+ free_page((unsigned long) ctrblk);
 }
 
 static int __init paes_s390_init(void)
@@ -822,14 +849,14 @@ static int __init paes_s390_init(void)
  if (cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_128) ||
     cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_192) ||
     cpacf_test_func(&kmctr_functions, CPACF_KMCTR_PAES_256)) {
- ret = crypto_register_alg(&ctr_paes_alg);
- if (ret)
- goto out_err;
  ctrblk = (u8 *) __get_free_page(GFP_KERNEL);
  if (!ctrblk) {
  ret = -ENOMEM;
  goto out_err;
  }
+ ret = crypto_register_alg(&ctr_paes_alg);
+ if (ret)
+ goto out_err;
  }
 
  return 0;
--
2.25.0



NACK [F][PATCH 0/4] paes self test (LP: 1854948)

frank.heimes
In reply to this post by frank.heimes
This version of the "paes self test" patch submission is missing one patch/commit, and this was known.
At the time I worked on this submission, the missing patch was intentionally left out, since it was not upstream yet, and it was expected that upstream acceptance might still take some time.

But the missing patch landed surprisingly early in the (torvalds) upstream git tree after Herbert Xu acked it:
c7ff8573ad21 "crypto/testmgr: enable selftests for paes-s390 ciphers"
--> next-20200214
--> next-20200217
Since this patch submission and request has not been processed yet, I'm hereby revoking it and will send a v2 that includes the missing patch.

Sorry for any inconvenience ...

Bye, Frank
