[Bionic][PATCH 0/3] BFQ fixes


[Bionic][PATCH 0/3] BFQ fixes

Wen-chien Jesse Sung
BugLink: https://launchpad.net/bugs/1780066

Disk I/O can hang for several seconds after boot when BFQ is used as
the I/O scheduler. This was also reported on some public mailing
lists [1][2]. The fixes are already in mainline, but are not
available in 4.15.

[1] https://lkml.org/lkml/2017/12/1/80
[2] https://groups.google.com/forum/#!topic/bfq-iosched/nB2cVfDMSOU

Chiara Bruschi (1):
  block, bfq: fix occurrences of request finish method's old name

Paolo Valente (2):
  block, bfq: remove batches of confusing ifdefs
  block, bfq: add requeue-request hook

 block/bfq-iosched.c | 226 +++++++++++++++++++++++++++++---------------
 1 file changed, 150 insertions(+), 76 deletions(-)

--
2.17.1


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

[Bionic][PATCH 1/3] block, bfq: fix occurrences of request finish method's old name

Wen-chien Jesse Sung
From: Chiara Bruschi <[hidden email]>

BugLink: https://launchpad.net/bugs/1780066

Commit 7b9e93616399 ("blk-mq-sched: unify request finished methods")
renamed the method now called bfq_finish_request, but the old name
survived elsewhere in the code: in related comments, and as part of
the function name bfq_put_rq_priv_body.

This commit replaces all remaining occurrences of the old name with
the current one.

Fixes: 7b9e93616399 ("blk-mq-sched: unify request finished methods")
Reviewed-by: Paolo Valente <[hidden email]>
Signed-off-by: Federico Motta <[hidden email]>
Signed-off-by: Chiara Bruschi <[hidden email]>
Signed-off-by: Jens Axboe <[hidden email]>
(cherry picked from commit 8993d445df388e3541f48920a2353cfc904b220a)
Signed-off-by: Wen-chien Jesse Sung <[hidden email]>
---
 block/bfq-iosched.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index bcb6d21baf12..6da7f7131b95 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -3630,8 +3630,8 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
  }
 
  /*
- * We exploit the put_rq_private hook to decrement
- * rq_in_driver, but put_rq_private will not be
+ * We exploit the bfq_finish_request hook to decrement
+ * rq_in_driver, but bfq_finish_request will not be
  * invoked on this request. So, to avoid unbalance,
  * just start this request, without incrementing
  * rq_in_driver. As a negative consequence,
@@ -3640,14 +3640,14 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
  * bfq_schedule_dispatch to be invoked uselessly.
  *
  * As for implementing an exact solution, the
- * put_request hook, if defined, is probably invoked
- * also on this request. So, by exploiting this hook,
- * we could 1) increment rq_in_driver here, and 2)
- * decrement it in put_request. Such a solution would
- * let the value of the counter be always accurate,
- * but it would entail using an extra interface
- * function. This cost seems higher than the benefit,
- * being the frequency of non-elevator-private
+ * bfq_finish_request hook, if defined, is probably
+ * invoked also on this request. So, by exploiting
+ * this hook, we could 1) increment rq_in_driver here,
+ * and 2) decrement it in bfq_finish_request. Such a
+ * solution would let the value of the counter be
+ * always accurate, but it would entail using an extra
+ * interface function. This cost seems higher than the
+ * benefit, being the frequency of non-elevator-private
  * requests very low.
  */
  goto start_rq;
@@ -4482,7 +4482,7 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
  bfq_schedule_dispatch(bfqd);
 }
 
-static void bfq_put_rq_priv_body(struct bfq_queue *bfqq)
+static void bfq_finish_request_body(struct bfq_queue *bfqq)
 {
  bfqq->allocated--;
 
@@ -4512,7 +4512,7 @@ static void bfq_finish_request(struct request *rq)
  spin_lock_irqsave(&bfqd->lock, flags);
 
  bfq_completed_request(bfqq, bfqd);
- bfq_put_rq_priv_body(bfqq);
+ bfq_finish_request_body(bfqq);
 
  spin_unlock_irqrestore(&bfqd->lock, flags);
  } else {
@@ -4533,7 +4533,7 @@ static void bfq_finish_request(struct request *rq)
  bfqg_stats_update_io_remove(bfqq_group(bfqq),
     rq->cmd_flags);
  }
- bfq_put_rq_priv_body(bfqq);
+ bfq_finish_request_body(bfqq);
  }
 
  rq->elv.priv[0] = NULL;
--
2.17.1



[Bionic][PATCH 2/3] block, bfq: remove batches of confusing ifdefs

Wen-chien Jesse Sung
From: Paolo Valente <[hidden email]>

BugLink: https://launchpad.net/bugs/1780066

Commit a33801e8b473 ("block, bfq: move debug blkio stats behind
CONFIG_DEBUG_BLK_CGROUP") introduced two batches of confusing ifdefs:
one reported in [1], plus a similar one in another function. This
commit removes both batches, in the way suggested in [1].

[1] https://www.spinics.net/lists/linux-block/msg20043.html

Fixes: a33801e8b473 ("block, bfq: move debug blkio stats behind CONFIG_DEBUG_BLK_CGROUP")
Reported-by: Linus Torvalds <[hidden email]>
Tested-by: Luca Miccio <[hidden email]>
Signed-off-by: Paolo Valente <[hidden email]>
Signed-off-by: Jens Axboe <[hidden email]>
(cherry picked from commit 9b25bd0368d562d1929059e8eb9de4102567b923)
Signed-off-by: Wen-chien Jesse Sung <[hidden email]>
---
 block/bfq-iosched.c | 127 +++++++++++++++++++++++++-------------------
 1 file changed, 72 insertions(+), 55 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 6da7f7131b95..eda5cc0843be 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -3689,35 +3689,16 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
  return rq;
 }
 
-static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
-{
- struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
- struct request *rq;
 #if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
- struct bfq_queue *in_serv_queue, *bfqq;
- bool waiting_rq, idle_timer_disabled;
-#endif
-
- spin_lock_irq(&bfqd->lock);
-
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
- in_serv_queue = bfqd->in_service_queue;
- waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
-
- rq = __bfq_dispatch_request(hctx);
-
- idle_timer_disabled =
- waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
-
-#else
- rq = __bfq_dispatch_request(hctx);
-#endif
- spin_unlock_irq(&bfqd->lock);
+static void bfq_update_dispatch_stats(struct request_queue *q,
+      struct request *rq,
+      struct bfq_queue *in_serv_queue,
+      bool idle_timer_disabled)
+{
+ struct bfq_queue *bfqq = rq ? RQ_BFQQ(rq) : NULL;
 
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
- bfqq = rq ? RQ_BFQQ(rq) : NULL;
  if (!idle_timer_disabled && !bfqq)
- return rq;
+ return;
 
  /*
  * rq and bfqq are guaranteed to exist until this function
@@ -3732,7 +3713,7 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
  * In addition, the following queue lock guarantees that
  * bfqq_group(bfqq) exists as well.
  */
- spin_lock_irq(hctx->queue->queue_lock);
+ spin_lock_irq(q->queue_lock);
  if (idle_timer_disabled)
  /*
  * Since the idle timer has been disabled,
@@ -3751,9 +3732,37 @@ static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
  bfqg_stats_set_start_empty_time(bfqg);
  bfqg_stats_update_io_remove(bfqg, rq->cmd_flags);
  }
- spin_unlock_irq(hctx->queue->queue_lock);
+ spin_unlock_irq(q->queue_lock);
+}
+#else
+static inline void bfq_update_dispatch_stats(struct request_queue *q,
+     struct request *rq,
+     struct bfq_queue *in_serv_queue,
+     bool idle_timer_disabled) {}
 #endif
 
+static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
+{
+ struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;
+ struct request *rq;
+ struct bfq_queue *in_serv_queue;
+ bool waiting_rq, idle_timer_disabled;
+
+ spin_lock_irq(&bfqd->lock);
+
+ in_serv_queue = bfqd->in_service_queue;
+ waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue);
+
+ rq = __bfq_dispatch_request(hctx);
+
+ idle_timer_disabled =
+ waiting_rq && !bfq_bfqq_wait_request(in_serv_queue);
+
+ spin_unlock_irq(&bfqd->lock);
+
+ bfq_update_dispatch_stats(hctx->queue, rq, in_serv_queue,
+  idle_timer_disabled);
+
  return rq;
 }
 
@@ -4276,16 +4285,46 @@ static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq)
  return idle_timer_disabled;
 }
 
+#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
+static void bfq_update_insert_stats(struct request_queue *q,
+    struct bfq_queue *bfqq,
+    bool idle_timer_disabled,
+    unsigned int cmd_flags)
+{
+ if (!bfqq)
+ return;
+
+ /*
+ * bfqq still exists, because it can disappear only after
+ * either it is merged with another queue, or the process it
+ * is associated with exits. But both actions must be taken by
+ * the same process currently executing this flow of
+ * instructions.
+ *
+ * In addition, the following queue lock guarantees that
+ * bfqq_group(bfqq) exists as well.
+ */
+ spin_lock_irq(q->queue_lock);
+ bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags);
+ if (idle_timer_disabled)
+ bfqg_stats_update_idle_time(bfqq_group(bfqq));
+ spin_unlock_irq(q->queue_lock);
+}
+#else
+static inline void bfq_update_insert_stats(struct request_queue *q,
+   struct bfq_queue *bfqq,
+   bool idle_timer_disabled,
+   unsigned int cmd_flags) {}
+#endif
+
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
        bool at_head)
 {
  struct request_queue *q = hctx->queue;
  struct bfq_data *bfqd = q->elevator->elevator_data;
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
  struct bfq_queue *bfqq = RQ_BFQQ(rq);
  bool idle_timer_disabled = false;
  unsigned int cmd_flags;
-#endif
 
  spin_lock_irq(&bfqd->lock);
  if (blk_mq_sched_try_insert_merge(q, rq)) {
@@ -4304,7 +4343,6 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
  else
  list_add_tail(&rq->queuelist, &bfqd->dispatch);
  } else {
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
  idle_timer_disabled = __bfq_insert_request(bfqd, rq);
  /*
  * Update bfqq, because, if a queue merge has occurred
@@ -4312,9 +4350,6 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
  * redirected into a new queue.
  */
  bfqq = RQ_BFQQ(rq);
-#else
- __bfq_insert_request(bfqd, rq);
-#endif
 
  if (rq_mergeable(rq)) {
  elv_rqhash_add(q, rq);
@@ -4323,35 +4358,17 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
  }
  }
 
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
  /*
  * Cache cmd_flags before releasing scheduler lock, because rq
  * may disappear afterwards (for example, because of a request
  * merge).
  */
  cmd_flags = rq->cmd_flags;
-#endif
+
  spin_unlock_irq(&bfqd->lock);
 
-#if defined(CONFIG_BFQ_GROUP_IOSCHED) && defined(CONFIG_DEBUG_BLK_CGROUP)
- if (!bfqq)
- return;
- /*
- * bfqq still exists, because it can disappear only after
- * either it is merged with another queue, or the process it
- * is associated with exits. But both actions must be taken by
- * the same process currently executing this flow of
- * instruction.
- *
- * In addition, the following queue lock guarantees that
- * bfqq_group(bfqq) exists as well.
- */
- spin_lock_irq(q->queue_lock);
- bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags);
- if (idle_timer_disabled)
- bfqg_stats_update_idle_time(bfqq_group(bfqq));
- spin_unlock_irq(q->queue_lock);
-#endif
+ bfq_update_insert_stats(q, bfqq, idle_timer_disabled,
+ cmd_flags);
 }
 
 static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
--
2.17.1



[Bionic][PATCH 3/3] block, bfq: add requeue-request hook

Wen-chien Jesse Sung
From: Paolo Valente <[hidden email]>

BugLink: https://launchpad.net/bugs/1780066

Commit 'a6a252e64914 ("blk-mq-sched: decide how to handle flush rq via
RQF_FLUSH_SEQ")' makes all non-flush re-prepared requests for a device
be re-inserted into the active I/O scheduler for that device. As a
consequence, I/O schedulers may get the same request inserted again,
even several times, without a finish_request invoked on that request
before each re-insertion.

This fact is the cause of the failure reported in [1]. For an I/O
scheduler, every re-insertion of the same re-prepared request is
equivalent to the insertion of a new request. For schedulers like
mq-deadline or kyber, this fact causes no harm. In contrast, it
confuses a stateful scheduler like BFQ, which keeps state for an I/O
request, until the finish_request hook is invoked on the request. In
particular, BFQ may get stuck waiting forever for the number of
dispatches of the same request to be balanced by an equal number of
completions (while there will be only one completion for that
request). In this state, BFQ may refuse to serve I/O requests
from other bfq_queues. The hang reported in [1] then follows.

However, the above re-prepared requests undergo a requeue, thus the
requeue_request hook of the active elevator is invoked for these
requests, if set. This commit then addresses the above issue by
properly implementing the hook requeue_request in BFQ.

[1] https://marc.info/?l=linux-block&m=151211117608676

Reported-by: Ivan Kozik <[hidden email]>
Reported-by: Alban Browaeys <[hidden email]>
Tested-by: Mike Galbraith <[hidden email]>
Signed-off-by: Paolo Valente <[hidden email]>
Signed-off-by: Serena Ziviani <[hidden email]>
Signed-off-by: Jens Axboe <[hidden email]>
(cherry picked from commit a7877390614770965a6925dfed79cbd3eeeb61e0)
Signed-off-by: Wen-chien Jesse Sung <[hidden email]>
---
 block/bfq-iosched.c | 107 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 82 insertions(+), 25 deletions(-)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index eda5cc0843be..d9ca9fb4e421 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -3630,24 +3630,26 @@ static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx)
  }
 
  /*
- * We exploit the bfq_finish_request hook to decrement
- * rq_in_driver, but bfq_finish_request will not be
- * invoked on this request. So, to avoid unbalance,
- * just start this request, without incrementing
- * rq_in_driver. As a negative consequence,
- * rq_in_driver is deceptively lower than it should be
- * while this request is in service. This may cause
- * bfq_schedule_dispatch to be invoked uselessly.
+ * We exploit the bfq_finish_requeue_request hook to
+ * decrement rq_in_driver, but
+ * bfq_finish_requeue_request will not be invoked on
+ * this request. So, to avoid unbalance, just start
+ * this request, without incrementing rq_in_driver. As
+ * a negative consequence, rq_in_driver is deceptively
+ * lower than it should be while this request is in
+ * service. This may cause bfq_schedule_dispatch to be
+ * invoked uselessly.
  *
  * As for implementing an exact solution, the
- * bfq_finish_request hook, if defined, is probably
- * invoked also on this request. So, by exploiting
- * this hook, we could 1) increment rq_in_driver here,
- * and 2) decrement it in bfq_finish_request. Such a
- * solution would let the value of the counter be
- * always accurate, but it would entail using an extra
- * interface function. This cost seems higher than the
- * benefit, being the frequency of non-elevator-private
+ * bfq_finish_requeue_request hook, if defined, is
+ * probably invoked also on this request. So, by
+ * exploiting this hook, we could 1) increment
+ * rq_in_driver here, and 2) decrement it in
+ * bfq_finish_requeue_request. Such a solution would
+ * let the value of the counter be always accurate,
+ * but it would entail using an extra interface
+ * function. This cost seems higher than the benefit,
+ * being the frequency of non-elevator-private
  * requests very low.
  */
  goto start_rq;
@@ -4317,6 +4319,8 @@ static inline void bfq_update_insert_stats(struct request_queue *q,
    unsigned int cmd_flags) {}
 #endif
 
+static void bfq_prepare_request(struct request *rq, struct bio *bio);
+
 static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
        bool at_head)
 {
@@ -4343,6 +4347,18 @@ static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
  else
  list_add_tail(&rq->queuelist, &bfqd->dispatch);
  } else {
+ if (WARN_ON_ONCE(!bfqq)) {
+ /*
+ * This should never happen. Most likely rq is
+ * a requeued regular request, being
+ * re-inserted without being first
+ * re-prepared. Do a prepare, to avoid
+ * failure.
+ */
+ bfq_prepare_request(rq, rq->bio);
+ bfqq = RQ_BFQQ(rq);
+ }
+
  idle_timer_disabled = __bfq_insert_request(bfqd, rq);
  /*
  * Update bfqq, because, if a queue merge has occurred
@@ -4499,22 +4515,44 @@ static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
  bfq_schedule_dispatch(bfqd);
 }
 
-static void bfq_finish_request_body(struct bfq_queue *bfqq)
+static void bfq_finish_requeue_request_body(struct bfq_queue *bfqq)
 {
  bfqq->allocated--;
 
  bfq_put_queue(bfqq);
 }
 
-static void bfq_finish_request(struct request *rq)
+/*
+ * Handle either a requeue or a finish for rq. The things to do are
+ * the same in both cases: all references to rq are to be dropped. In
+ * particular, rq is considered completed from the point of view of
+ * the scheduler.
+ */
+static void bfq_finish_requeue_request(struct request *rq)
 {
- struct bfq_queue *bfqq;
+ struct bfq_queue *bfqq = RQ_BFQQ(rq);
  struct bfq_data *bfqd;
 
- if (!rq->elv.icq)
+ /*
+ * Requeue and finish hooks are invoked in blk-mq without
+ * checking whether the involved request is actually still
+ * referenced in the scheduler. To handle this fact, the
+ * following two checks make this function exit in case of
+ * spurious invocations, for which there is nothing to do.
+ *
+ * First, check whether rq has nothing to do with an elevator.
+ */
+ if (unlikely(!(rq->rq_flags & RQF_ELVPRIV)))
+ return;
+
+ /*
+ * rq either is not associated with any icq, or is an already
+ * requeued request that has not (yet) been re-inserted into
+ * a bfq_queue.
+ */
+ if (!rq->elv.icq || !bfqq)
  return;
 
- bfqq = RQ_BFQQ(rq);
  bfqd = bfqq->bfqd;
 
  if (rq->rq_flags & RQF_STARTED)
@@ -4529,13 +4567,14 @@ static void bfq_finish_request(struct request *rq)
  spin_lock_irqsave(&bfqd->lock, flags);
 
  bfq_completed_request(bfqq, bfqd);
- bfq_finish_request_body(bfqq);
+ bfq_finish_requeue_request_body(bfqq);
 
  spin_unlock_irqrestore(&bfqd->lock, flags);
  } else {
  /*
  * Request rq may be still/already in the scheduler,
- * in which case we need to remove it. And we cannot
+ * in which case we need to remove it (this should
+ * never happen in case of requeue). And we cannot
  * defer such a check and removal, to avoid
  * inconsistencies in the time interval from the end
  * of this function to the start of the deferred work.
@@ -4550,9 +4589,26 @@ static void bfq_finish_request(struct request *rq)
  bfqg_stats_update_io_remove(bfqq_group(bfqq),
     rq->cmd_flags);
  }
- bfq_finish_request_body(bfqq);
+ bfq_finish_requeue_request_body(bfqq);
  }
 
+ /*
+ * Reset private fields. In case of a requeue, this allows
+ * this function to correctly do nothing if it is spuriously
+ * invoked again on this same request (see the check at the
+ * beginning of the function). Probably, a better general
+ * design would be to prevent blk-mq from invoking the requeue
+ * or finish hooks of an elevator, for a request that is not
+ * referred by that elevator.
+ *
+ * Resetting the following fields would break the
+ * request-insertion logic if rq is re-inserted into a bfq
+ * internal queue, without a re-preparation. Here we assume
+ * that re-insertions of requeued requests, without
+ * re-preparation, can happen only for pass_through or at_head
+ * requests (which are not re-inserted into bfq internal
+ * queues).
+ */
  rq->elv.priv[0] = NULL;
  rq->elv.priv[1] = NULL;
 }
@@ -5224,7 +5280,8 @@ static struct elv_fs_entry bfq_attrs[] = {
 static struct elevator_type iosched_bfq_mq = {
  .ops.mq = {
  .prepare_request = bfq_prepare_request,
- .finish_request = bfq_finish_request,
+ .requeue_request        = bfq_finish_requeue_request,
+ .finish_request = bfq_finish_requeue_request,
  .exit_icq = bfq_exit_icq,
  .insert_requests = bfq_insert_requests,
  .dispatch_request = bfq_dispatch_request,
--
2.17.1



ACK/cmnt: [Bionic][PATCH 0/3] BFQ fixes

Kleber Souza
On 07/05/18 15:24, Wen-chien Jesse Sung wrote:

> BugLink: https://launchpad.net/bugs/1780066
>
> Disk IO will hang some seconds after boot when using BFQ as io
> scheduler. This was also reported in some public MLs [1][2].
> The fix is already in mainline, but it's not available in 4.15.
>
> [1] https://lkml.org/lkml/2017/12/1/80
> [2] https://groups.google.com/forum/#!topic/bfq-iosched/nB2cVfDMSOU
>
> Chiara Bruschi (1):
>   block, bfq: fix occurrences of request finish method's old name
>
> Paolo Valente (2):
>   block, bfq: remove batches of confusing ifdefs
>   block, bfq: add requeue-request hook
>
>  block/bfq-iosched.c | 226 +++++++++++++++++++++++++++++---------------
>  1 file changed, 150 insertions(+), 76 deletions(-)
>

Hi Jesse,

The fixes look good, could you please add the SRU justification on the
bug report?

Thanks.


Acked-by: Kleber Sacilotto de Souza <[hidden email]>


Re: ACK/cmnt: [Bionic][PATCH 0/3] BFQ fixes

Wen-chien Jesse Sung
On Fri, Jul 27, 2018 at 5:29 PM Kleber Souza <[hidden email]> wrote:

>
> On 07/05/18 15:24, Wen-chien Jesse Sung wrote:
> > BugLink: https://launchpad.net/bugs/1780066
> >
> > Disk IO will hang some seconds after boot when using BFQ as io
> > scheduler. This was also reported in some public MLs [1][2].
> > The fix is already in mainline, but it's not available in 4.15.
> >
> > [1] https://lkml.org/lkml/2017/12/1/80
> > [2] https://groups.google.com/forum/#!topic/bfq-iosched/nB2cVfDMSOU
> >
> > Chiara Bruschi (1):
> >   block, bfq: fix occurrences of request finish method's old name
> >
> > Paolo Valente (2):
> >   block, bfq: remove batches of confusing ifdefs
> >   block, bfq: add requeue-request hook
> >
> >  block/bfq-iosched.c | 226 +++++++++++++++++++++++++++++---------------
> >  1 file changed, 150 insertions(+), 76 deletions(-)
> >
>
> Hi Jesse,
>
> The fixes look good, could you please add the SRU justification on the
> bug report?

Hi Kleber,

SRU justification has been added. Please let me know if there's still
something missing.

Thanks,
Jesse

>
> Thanks.
>
>
> Acked-by: Kleber Sacilotto de Souza <[hidden email]>


ACK/cmnt: [Bionic][PATCH 0/3] BFQ fixes

Stefan Bader
On 30.07.2018 12:14, Jesse Sung wrote:

> On Fri, Jul 27, 2018 at 5:29 PM Kleber Souza <[hidden email]> wrote:
>>
>> On 07/05/18 15:24, Wen-chien Jesse Sung wrote:
>>> BugLink: https://launchpad.net/bugs/1780066
>>>
>>> Disk IO will hang some seconds after boot when using BFQ as io
>>> scheduler. This was also reported in some public MLs [1][2].
>>> The fix is already in mainline, but it's not available in 4.15.
>>>
>>> [1] https://lkml.org/lkml/2017/12/1/80
>>> [2] https://groups.google.com/forum/#!topic/bfq-iosched/nB2cVfDMSOU
>>>
>>> Chiara Bruschi (1):
>>>   block, bfq: fix occurrences of request finish method's old name
>>>
>>> Paolo Valente (2):
>>>   block, bfq: remove batches of confusing ifdefs
>>>   block, bfq: add requeue-request hook
>>>
>>>  block/bfq-iosched.c | 226 +++++++++++++++++++++++++++++---------------
>>>  1 file changed, 150 insertions(+), 76 deletions(-)
>>>
>>
>> Hi Jesse,
>>
>> The fixes look good, could you please add the SRU justification on the
>> bug report?
>
> Hi Kleber,
>
> SRU justification has been added. Please let me know if there's still
> something missing.

This is not really a proper SRU justification (hint: Impact / Fix /
Testcase / Risk of Regression). The patches themselves look OK to me,
but please complete the justification.

Acked-by: Stefan Bader <[hidden email]>

>
> Thanks,
> Jesse
>
>>
>> Thanks.
>>
>>
>> Acked-by: Kleber Sacilotto de Souza <[hidden email]>
>



APPLIED: [Bionic][PATCH 0/3] BFQ fixes

Kleber Souza
In reply to this post by Wen-chien Jesse Sung
On 07/05/18 15:24, Wen-chien Jesse Sung wrote:

> BugLink: https://launchpad.net/bugs/1780066
>
> Disk IO will hang some seconds after boot when using BFQ as io
> scheduler. This was also reported in some public MLs [1][2].
> The fix is already in mainline, but it's not available in 4.15.
>
> [1] https://lkml.org/lkml/2017/12/1/80
> [2] https://groups.google.com/forum/#!topic/bfq-iosched/nB2cVfDMSOU
>
> Chiara Bruschi (1):
>   block, bfq: fix occurrences of request finish method's old name
>
> Paolo Valente (2):
>   block, bfq: remove batches of confusing ifdefs
>   block, bfq: add requeue-request hook
>
>  block/bfq-iosched.c | 226 +++++++++++++++++++++++++++++---------------
>  1 file changed, 150 insertions(+), 76 deletions(-)
>

Applied to bionic/master-next branch.

Thanks,
Kleber
