[SRU][Bionic][PATCH 0/1] s390/dasd: reduce the default queue depth and nr of hardware queues (LP: 1852257)

frank.heimes
BugLink: https://bugs.launchpad.net/bugs/1852257

SRU Justification:

[Impact]

* On s390x systems with a small memory footprint but a large number of DASD disks, memory can get depleted (even during installation), eventually triggering the OOM killer (followed by further problems).

* Starting with kernel 4.18, the patch below saves roughly 90% of the memory consumed per active DASD device.

* The backport below is needed to fix this and to bring the improvement into Bionic's 4.15 kernel.
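* For scale, the arithmetic behind the savings can be sketched as follows. This is an order-of-magnitude illustration only: the measured figure (roughly 40 MB down to 1.5 MB per device) also depends on per-request size and block-layer overhead, so the raw request-count ratio overstates the memory ratio:

```shell
# blk-mq preallocates queue_depth requests for each hardware queue, so
# each DASD device gets nr_hw_queues * queue_depth preallocated requests.

# old defaults (dasd_int.h): 64 hw queues,
# depth = DASD_MAX_LCU_DEV (256) * DASD_REQ_PER_DEV (4)
old=$((64 * 256 * 4))
# new defaults from the patch: 4 hw queues, depth 32
new=$((4 * 32))

echo "old: ${old} preallocated requests per device"
echo "new: ${new} preallocated requests per device"
echo "reduction factor: $((old / new))x"
```

This prints 65536 vs. 128 requests per device, a 512x reduction in preallocated requests.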

[Fix]

* commit 3284da34a87ab7a527a593f89bbdaf6debe9e713 "s390/dasd: reduce the default queue depth and nr of hardware queues"

* Backport: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1852257/+attachment/5304830/+files/0001-s390-dasd-reduce-the-default-queue-depth-and-nr-of-h.patch
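* Since the patch turns the former compile-time constants into module parameters, the defaults can still be overridden at load time if a workload needs deeper queues. The fragment below is a hypothetical override (the values are examples, and the parameter prefix assumes the DASD base driver's usual module name, dasd_mod); because the parameters are registered with mode 0444, they are readable but not changeable at runtime:

```shell
# Hypothetical override of the new defaults (values are examples only).
# The parameters are 0444, i.e. read-only once the module is loaded,
# so they must be set at module load time.

# /etc/modprobe.d/dasd.conf:
#   options dasd_mod queue_depth=64 nr_hw_queues=8

# If dasd_mod is built into the kernel, the equivalent can be passed
# on the kernel command line instead:
#   dasd_mod.queue_depth=64 dasd_mod.nr_hw_queues=8
```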

[Test Case]

* Configure an s390x system (z/VM guest or LPAR) with little RAM but lots of DASD devices.

* Gradually enable more and more DASDs and monitor the memory usage (be sure to exclude shared memory and cache).

* Comparing the current stock 4.15 kernel with a patched 4.15 kernel, one can notice a difference in memory usage of about 10:1 per activated DASD.

* With less than about 1 GB of memory and 40+ DASD devices, one may run into an OOM situation without the patch.
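* The enable-and-monitor loop above can be sketched as a small script. The DASD bus IDs are placeholders that must match the test system, and chccwdev comes from s390-tools; on machines where it is absent the script simply skips:

```shell
#!/bin/sh
# Sketch of the test loop; the DASD bus IDs below are placeholders.
# chccwdev is part of s390-tools; skip gracefully where it is missing.
command -v chccwdev >/dev/null 2>&1 || { echo "chccwdev not found, skipping"; exit 0; }

# MemAvailable excludes reclaimable cache, matching the advice above
# to not count shared memory and cache.
mem_avail_kb() {
    awk '/^MemAvailable:/ {print $2}' /proc/meminfo
}

before=$(mem_avail_kb)
for id in 0.0.0100 0.0.0101 0.0.0102; do   # placeholder bus IDs
    chccwdev -e "$id" || continue          # set the DASD online
    sleep 2                                # let allocations settle
    echo "$id online: $(( before - $(mem_avail_kb) )) kB consumed so far"
done
```

Running this once on a stock and once on a patched kernel should show roughly the 10:1 per-device difference described above.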

[Regression Potential]

* The regression potential can be considered moderate, since:

* the change is purely s390x-specific,

* it only affects DASD disk storage (not zFCP/SCSI disk storage),

* and it mainly matters on smaller systems (so more likely z/VM guests than LPARs).

[Other Info]

* A cherry-pick to 4.15 was not clean (a conflict in one line), hence the backport, which applied, compiled, and worked fine.
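* To check quickly whether a booted kernel carries the backport, the new module parameters should be visible under sysfs. A small sketch (the path assumes the dasd_mod module name; on an unpatched 4.15 kernel the files simply do not exist):

```shell
#!/bin/sh
# Check whether the running kernel exposes the new DASD defaults.
p=/sys/module/dasd_mod/parameters
if [ -r "$p/queue_depth" ] && [ -r "$p/nr_hw_queues" ]; then
    echo "patched: queue_depth=$(cat "$p/queue_depth") nr_hw_queues=$(cat "$p/nr_hw_queues")"
else
    echo "parameters not present (unpatched kernel or dasd_mod not loaded)"
fi
```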

Stefan Haberland (1):
  s390/dasd: reduce the default queue depth and nr of hardware queues

 drivers/s390/block/dasd.c     | 13 +++++++++++--
 drivers/s390/block/dasd_int.h |  8 --------
 2 files changed, 11 insertions(+), 10 deletions(-)

--
2.7.4


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

[SRU][Bionic][PATCH 1/1] s390/dasd: reduce the default queue depth and nr of hardware queues

frank.heimes
From: Stefan Haberland <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1852257

Reduce the default values for the number of hardware queues and queue depth
to significantly reduce the memory footprint of a DASD device.
The memory consumption per DASD device reduces from approximately 40MB to
approximately 1.5MB.

This is necessary to build systems with a large number of DASD devices and
a reasonable amount of memory.
Performance measurements showed that good performance results are possible
with the new default values even on systems with lots of CPUs and lots of
alias devices.

Fixes: e443343e509a ("s390/dasd: blk-mq conversion")
Reviewed-by: Jan Hoeppner <[hidden email]>
Reviewed-by: Peter Oberparleiter <[hidden email]>
Signed-off-by: Stefan Haberland <[hidden email]>
Signed-off-by: Martin Schwidefsky <[hidden email]>
(backported from commit 3284da34a87ab7a527a593f89bbdaf6debe9e713)
Signed-off-by: Frank Heimes <[hidden email]>
---
 drivers/s390/block/dasd.c     | 13 +++++++++++--
 drivers/s390/block/dasd_int.h |  8 --------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/s390/block/dasd.c b/drivers/s390/block/dasd.c
index 0c4b730..d61f17f 100644
--- a/drivers/s390/block/dasd.c
+++ b/drivers/s390/block/dasd.c
@@ -41,6 +41,15 @@
 
 #define DASD_DIAG_MOD "dasd_diag_mod"
 
+static unsigned int queue_depth = 32;
+static unsigned int nr_hw_queues = 4;
+
+module_param(queue_depth, uint, 0444);
+MODULE_PARM_DESC(queue_depth, "Default queue depth for new DASD devices");
+
+module_param(nr_hw_queues, uint, 0444);
+MODULE_PARM_DESC(nr_hw_queues, "Default number of hardware queues for new DASD devices");
+
 /*
  * SECTION: exported variables of dasd.c
  */
@@ -3184,8 +3193,8 @@ static int dasd_alloc_queue(struct dasd_block *block)
 
  block->tag_set.ops = &dasd_mq_ops;
  block->tag_set.cmd_size = sizeof(struct dasd_ccw_req *);
- block->tag_set.nr_hw_queues = DASD_NR_HW_QUEUES;
- block->tag_set.queue_depth = DASD_MAX_LCU_DEV * DASD_REQ_PER_DEV;
+ block->tag_set.nr_hw_queues = nr_hw_queues;
+ block->tag_set.queue_depth = queue_depth;
  block->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
  block->tag_set.numa_node = NUMA_NO_NODE;
 
diff --git a/drivers/s390/block/dasd_int.h b/drivers/s390/block/dasd_int.h
index 96709b1..d518a1e 100644
--- a/drivers/s390/block/dasd_int.h
+++ b/drivers/s390/block/dasd_int.h
@@ -235,14 +235,6 @@ struct dasd_ccw_req {
 #define DASD_CQR_SUPPRESS_IL 6 /* Suppress 'Incorrect Length' error */
 #define DASD_CQR_SUPPRESS_CR 7 /* Suppress 'Command Reject' error */
 
-/*
- * There is no reliable way to determine the number of available CPUs on
- * LPAR but there is no big performance difference between 1 and the
- * maximum CPU number.
- * 64 is a good trade off performance wise.
- */
-#define DASD_NR_HW_QUEUES 64
-#define DASD_MAX_LCU_DEV 256
 #define DASD_REQ_PER_DEV 4
 
 /* Signature for error recovery functions. */
--
2.7.4



ACK: [SRU][Bionic][PATCH 1/1] s390/dasd: reduce the default queue depth and nr of hardware queues

Stefan Bader
On 14.11.19 12:46, [hidden email] wrote:

> From: Stefan Haberland <[hidden email]>
>
> BugLink: https://bugs.launchpad.net/bugs/1852257
>
> Reduce the default values for the number of hardware queues and queue depth
> to significantly reduce the memory footprint of a DASD device.
> The memory consumption per DASD device reduces from approximately 40MB to
> approximately 1.5MB.
>
> This is necessary to build systems with a large number of DASD devices and
> a reasonable amount of memory.
> Performance measurements showed that good performance results are possible
> with the new default values even on systems with lots of CPUs and lots of
> alias devices.
>
> Fixes: e443343e509a ("s390/dasd: blk-mq conversion")
> Reviewed-by: Jan Hoeppner <[hidden email]>
> Reviewed-by: Peter Oberparleiter <[hidden email]>
> Signed-off-by: Stefan Haberland <[hidden email]>
> Signed-off-by: Martin Schwidefsky <[hidden email]>
> (backported from commit 3284da34a87ab7a527a593f89bbdaf6debe9e713)
> Signed-off-by: Frank Heimes <[hidden email]>
Acked-by: Stefan Bader <[hidden email]>

> [...]



ACK: [SRU][Bionic][PATCH 1/1] s390/dasd: reduce the default queue depth and nr of hardware queues

Kleber Souza
In reply to this post by frank.heimes
On 14.11.19 12:46, [hidden email] wrote:

> From: Stefan Haberland <[hidden email]>
>
> BugLink: https://bugs.launchpad.net/bugs/1852257
>
> Reduce the default values for the number of hardware queues and queue depth
> to significantly reduce the memory footprint of a DASD device.
> The memory consumption per DASD device reduces from approximately 40MB to
> approximately 1.5MB.
>
> This is necessary to build systems with a large number of DASD devices and
> a reasonable amount of memory.
> Performance measurements showed that good performance results are possible
> with the new default values even on systems with lots of CPUs and lots of
> alias devices.
>
> Fixes: e443343e509a ("s390/dasd: blk-mq conversion")
> Reviewed-by: Jan Hoeppner <[hidden email]>
> Reviewed-by: Peter Oberparleiter <[hidden email]>
> Signed-off-by: Stefan Haberland <[hidden email]>
> Signed-off-by: Martin Schwidefsky <[hidden email]>
> (backported from commit 3284da34a87ab7a527a593f89bbdaf6debe9e713)
> Signed-off-by: Frank Heimes <[hidden email]>

Acked-by: Kleber Sacilotto de Souza <[hidden email]>

> [...]



APPLIED: [SRU][Bionic][PATCH 0/1] s390/dasd: reduce the default queue depth and nr of hardware queues (LP: 1852257)

Khaled Elmously
In reply to this post by frank.heimes
On 2019-11-14 12:46:21 , [hidden email] wrote:

> Buglink: https://bugs.launchpad.net/bugs/1852257
>
> SRU Justification:
>
> [...]
