[PATCH 0/1][SRU][G/U/OEM-5.10] Fix disk probing under SATA controller behind VMD

You-Sheng Yang
BugLink: https://bugs.launchpad.net/bugs/1894778

[Impact]

When booting certain platforms with the boot disk attached to a SATA
bus behind the Intel VMD controller, disk probing may fail with the
following error messages left in dmesg:

  [ 6.163286] ata1.00: qc timeout (cmd 0xec)
  [ 6.165630] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)

[Fix]

Upstream commit f6b7bb847ca8 ("PCI: vmd: Offset Client VMD MSI-X
vectors"), currently in the vanilla kernel tree for v5.11.

[Test Case]

Check dmesg/lsblk to confirm the disk is probed.

For the PCI MSI address, check the lspci output:

  $ lspci -vvnn
  ....
      Capabilities: [80] MSI: Enable+ Count=1/1 maskable- 64bit-
          Address: fee00000  Data: 0000

When probing fails, the address is fee00000. With a patched kernel:

  $ lspci -vvnn
  ....
      Capabilities: [80] MSI: Enable+ Count=1/1 maskable- 64bit-
          Address: fee01000  Data: 0000
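
For a quick end-to-end check, something along the lines of the
following shell snippet can be used (the grep patterns and the use of
sudo are illustrative assumptions; adjust them for the platform under
test):

  #!/bin/sh
  # Hypothetical verification helper; not part of the patch itself.

  # 1. Any IDENTIFY timeout on the SATA links behind VMD indicates the
  #    probe failure described above.
  dmesg | grep -E 'ata[0-9]+\.00: (qc timeout|failed to IDENTIFY)' \
      && echo "disk probe FAILED"

  # 2. The boot disk should be listed once probing succeeds.
  lsblk

  # 3. Inspect the MSI address programmed for the device behind VMD:
  #    fee00000 (vector 0) on a failing kernel, fee01000 on a patched one.
  sudo lspci -vvnn | grep -A 2 'MSI: Enable+'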

[Where problems could occur]

An unpatched kernel cannot probe SATA controllers moved behind VMD
when VMD/RAID mode is enabled in the BIOS, leaving the disks attached
to them completely unusable. With this change the kernel is able to
probe them, but it may also hit issues that only occur under such a
configuration. However, in the worst case the SATA disks can still be
moved away from the VMD bus, as is required today without this fix, so
the risk here should be justified.

Jon Derrick (1):
  PCI: vmd: Offset Client VMD MSI-X vectors

 drivers/pci/controller/vmd.c | 37 +++++++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 11 deletions(-)

--
2.29.2


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

[PATCH 1/1][SRU][G] PCI: vmd: Offset Client VMD MSI-X vectors

You-Sheng Yang
From: Jon Derrick <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1894778

Client VMD platforms have a software-triggered MSI-X vector 0 that will
not forward hardware-remapped MSI from the sub-device domain. This
causes an issue with VMD platforms that use AHCI behind VMD and have a
single MSI-X vector remapped to VMD vector 0. Add a VMD MSI-X vector
offset for these platforms.

Link: https://lore.kernel.org/r/20201102222223.92978-1-jonathan.derrick@...
Tested-by: Jian-Hong Pan <[hidden email]>
Signed-off-by: Jon Derrick <[hidden email]>
Signed-off-by: Lorenzo Pieralisi <[hidden email]>
(backported from commit f6b7bb847ca821a8aaa1b6da10ee65311e6f15bf)
Signed-off-by: You-Sheng Yang <[hidden email]>
---
 drivers/pci/controller/vmd.c | 37 +++++++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index ebec0a6e77ed..e03732959583 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -47,6 +47,12 @@ enum vmd_features {
  * bus numbering
  */
  VMD_FEAT_HAS_BUS_RESTRICTIONS = (1 << 1),
+
+ /*
+ * Device may use MSI-X vector 0 for software triggering and will not
+ * be used for MSI remapping
+ */
+ VMD_FEAT_OFFSET_FIRST_VECTOR = (1 << 3),
 };
 
 /*
@@ -98,6 +104,7 @@ struct vmd_dev {
  struct irq_domain *irq_domain;
  struct pci_bus *bus;
  u8 busn_start;
+ u8 first_vec;
 };
 
 static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
@@ -193,11 +200,11 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
  */
 static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
 {
- int i, best = 1;
  unsigned long flags;
+ int i, best;
 
- if (vmd->msix_count == 1)
- return &vmd->irqs[0];
+ if (vmd->msix_count == 1 + vmd->first_vec)
+ return &vmd->irqs[vmd->first_vec];
 
  /*
  * White list for fast-interrupt handlers. All others will share the
@@ -207,11 +214,12 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
  case PCI_CLASS_STORAGE_EXPRESS:
  break;
  default:
- return &vmd->irqs[0];
+ return &vmd->irqs[vmd->first_vec];
  }
 
  raw_spin_lock_irqsave(&list_lock, flags);
- for (i = 1; i < vmd->msix_count; i++)
+ best = vmd->first_vec + 1;
+ for (i = best; i < vmd->msix_count; i++)
  if (vmd->irqs[i].count < vmd->irqs[best].count)
  best = i;
  vmd->irqs[best].count++;
@@ -601,6 +609,7 @@ static irqreturn_t vmd_irq(int irq, void *data)
 
 static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 {
+ unsigned long features = (unsigned long) id->driver_data;
  struct vmd_dev *vmd;
  int i, err;
 
@@ -625,12 +634,15 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
     dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)))
  return -ENODEV;
 
+ if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
+ vmd->first_vec = 1;
+
  vmd->msix_count = pci_msix_vec_count(dev);
  if (vmd->msix_count < 0)
  return -ENODEV;
 
- vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,
- PCI_IRQ_MSIX);
+ vmd->msix_count = pci_alloc_irq_vectors(dev, vmd->first_vec + 1,
+ vmd->msix_count, PCI_IRQ_MSIX);
  if (vmd->msix_count < 0)
  return vmd->msix_count;
 
@@ -654,7 +666,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 
  spin_lock_init(&vmd->cfg_lock);
  pci_set_drvdata(dev, vmd);
- err = vmd_enable_domain(vmd, (unsigned long) id->driver_data);
+ err = vmd_enable_domain(vmd, features);
  if (err)
  return err;
 
@@ -725,11 +737,14 @@ static const struct pci_device_id vmd_ids[] = {
  .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
  VMD_FEAT_HAS_BUS_RESTRICTIONS,},
  {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
- .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
+ .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
+       VMD_FEAT_OFFSET_FIRST_VECTOR,},
  {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c3d),
- .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
+ .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
+       VMD_FEAT_OFFSET_FIRST_VECTOR,},
  {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
- .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
+ .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
+       VMD_FEAT_OFFSET_FIRST_VECTOR,},
  {0,}
 };
 MODULE_DEVICE_TABLE(pci, vmd_ids);
--
2.29.2


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

[PATCH 1/1][SRU][U/OEM-5.10] PCI: vmd: Offset Client VMD MSI-X vectors

You-Sheng Yang
From: Jon Derrick <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1894778

Client VMD platforms have a software-triggered MSI-X vector 0 that will
not forward hardware-remapped MSI from the sub-device domain. This
causes an issue with VMD platforms that use AHCI behind VMD and have a
single MSI-X vector remapped to VMD vector 0. Add a VMD MSI-X vector
offset for these platforms.

Link: https://lore.kernel.org/r/20201102222223.92978-1-jonathan.derrick@...
Tested-by: Jian-Hong Pan <[hidden email]>
Signed-off-by: Jon Derrick <[hidden email]>
Signed-off-by: Lorenzo Pieralisi <[hidden email]>
(cherry picked from commit f6b7bb847ca821a8aaa1b6da10ee65311e6f15bf)
Signed-off-by: You-Sheng Yang <[hidden email]>
---
 drivers/pci/controller/vmd.c | 37 +++++++++++++++++++++++++-----------
 1 file changed, 26 insertions(+), 11 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index f375c21ceeb1..c31e4d5cb146 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -53,6 +53,12 @@ enum vmd_features {
  * vendor-specific capability space
  */
  VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP = (1 << 2),
+
+ /*
+ * Device may use MSI-X vector 0 for software triggering and will not
+ * be used for MSI remapping
+ */
+ VMD_FEAT_OFFSET_FIRST_VECTOR = (1 << 3),
 };
 
 /*
@@ -104,6 +110,7 @@ struct vmd_dev {
  struct irq_domain *irq_domain;
  struct pci_bus *bus;
  u8 busn_start;
+ u8 first_vec;
 };
 
 static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
@@ -199,11 +206,11 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
  */
 static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
 {
- int i, best = 1;
  unsigned long flags;
+ int i, best;
 
- if (vmd->msix_count == 1)
- return &vmd->irqs[0];
+ if (vmd->msix_count == 1 + vmd->first_vec)
+ return &vmd->irqs[vmd->first_vec];
 
  /*
  * White list for fast-interrupt handlers. All others will share the
@@ -213,11 +220,12 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
  case PCI_CLASS_STORAGE_EXPRESS:
  break;
  default:
- return &vmd->irqs[0];
+ return &vmd->irqs[vmd->first_vec];
  }
 
  raw_spin_lock_irqsave(&list_lock, flags);
- for (i = 1; i < vmd->msix_count; i++)
+ best = vmd->first_vec + 1;
+ for (i = best; i < vmd->msix_count; i++)
  if (vmd->irqs[i].count < vmd->irqs[best].count)
  best = i;
  vmd->irqs[best].count++;
@@ -550,8 +558,8 @@ static int vmd_alloc_irqs(struct vmd_dev *vmd)
  if (vmd->msix_count < 0)
  return -ENODEV;
 
- vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,
- PCI_IRQ_MSIX);
+ vmd->msix_count = pci_alloc_irq_vectors(dev, vmd->first_vec + 1,
+ vmd->msix_count, PCI_IRQ_MSIX);
  if (vmd->msix_count < 0)
  return vmd->msix_count;
 
@@ -719,6 +727,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 
 static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
 {
+ unsigned long features = (unsigned long) id->driver_data;
  struct vmd_dev *vmd;
  int err;
 
@@ -743,13 +752,16 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
     dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)))
  return -ENODEV;
 
+ if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
+ vmd->first_vec = 1;
+
  err = vmd_alloc_irqs(vmd);
  if (err)
  return err;
 
  spin_lock_init(&vmd->cfg_lock);
  pci_set_drvdata(dev, vmd);
- err = vmd_enable_domain(vmd, (unsigned long) id->driver_data);
+ err = vmd_enable_domain(vmd, features);
  if (err)
  return err;
 
@@ -818,13 +830,16 @@ static const struct pci_device_id vmd_ids[] = {
  VMD_FEAT_HAS_BUS_RESTRICTIONS,},
  {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
  .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
- VMD_FEAT_HAS_BUS_RESTRICTIONS,},
+ VMD_FEAT_HAS_BUS_RESTRICTIONS |
+ VMD_FEAT_OFFSET_FIRST_VECTOR,},
  {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c3d),
  .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
- VMD_FEAT_HAS_BUS_RESTRICTIONS,},
+ VMD_FEAT_HAS_BUS_RESTRICTIONS |
+ VMD_FEAT_OFFSET_FIRST_VECTOR,},
  {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
  .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP |
- VMD_FEAT_HAS_BUS_RESTRICTIONS,},
+ VMD_FEAT_HAS_BUS_RESTRICTIONS |
+ VMD_FEAT_OFFSET_FIRST_VECTOR,},
  {0,}
 };
 MODULE_DEVICE_TABLE(pci, vmd_ids);
--
2.29.2


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

APPLIED U: Re: [PATCH 1/1][SRU][U/OEM-5.10] PCI: vmd: Offset Client VMD MSI-X vectors

Paolo Pisati-5
On Tue, Dec 22, 2020 at 03:51:21PM +0800, You-Sheng Yang wrote:
> From: Jon Derrick <[hidden email]>
>
> BugLink: https://bugs.launchpad.net/bugs/1894778

Clean upstream cherry-pick.
--
bye,
p.

--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

ACK: [PATCH 1/1][SRU][G] PCI: vmd: Offset Client VMD MSI-X vectors

Stefan Bader-2
On 22.12.20 08:51, You-Sheng Yang wrote:

> From: Jon Derrick <[hidden email]>
>
> BugLink: https://bugs.launchpad.net/bugs/1894778
>
> Client VMD platforms have a software-triggered MSI-X vector 0 that will
> not forward hardware-remapped MSI from the sub-device domain. This
> causes an issue with VMD platforms that use AHCI behind VMD and have a
> single MSI-X vector remapped to VMD vector 0. Add a VMD MSI-X vector
> offset for these platforms.
>
> Link: https://lore.kernel.org/r/20201102222223.92978-1-jonathan.derrick@...
> Tested-by: Jian-Hong Pan <[hidden email]>
> Signed-off-by: Jon Derrick <[hidden email]>
> Signed-off-by: Lorenzo Pieralisi <[hidden email]>
> (backported from commit f6b7bb847ca821a8aaa1b6da10ee65311e6f15bf)
> Signed-off-by: You-Sheng Yang <[hidden email]>
Acked-by: Stefan Bader <[hidden email]>

> ---
>  drivers/pci/controller/vmd.c | 37 +++++++++++++++++++++++++-----------
>  1 file changed, 26 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index ebec0a6e77ed..e03732959583 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -47,6 +47,12 @@ enum vmd_features {
>   * bus numbering
>   */
>   VMD_FEAT_HAS_BUS_RESTRICTIONS = (1 << 1),
> +
> + /*
> + * Device may use MSI-X vector 0 for software triggering and will not
> + * be used for MSI remapping
> + */
> + VMD_FEAT_OFFSET_FIRST_VECTOR = (1 << 3),
>  };
>  
>  /*
> @@ -98,6 +104,7 @@ struct vmd_dev {
>   struct irq_domain *irq_domain;
>   struct pci_bus *bus;
>   u8 busn_start;
> + u8 first_vec;
>  };
>  
>  static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
> @@ -193,11 +200,11 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
>   */
>  static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
>  {
> - int i, best = 1;
>   unsigned long flags;
> + int i, best;
>  
> - if (vmd->msix_count == 1)
> - return &vmd->irqs[0];
> + if (vmd->msix_count == 1 + vmd->first_vec)
> + return &vmd->irqs[vmd->first_vec];
>  
>   /*
>   * White list for fast-interrupt handlers. All others will share the
> @@ -207,11 +214,12 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
>   case PCI_CLASS_STORAGE_EXPRESS:
>   break;
>   default:
> - return &vmd->irqs[0];
> + return &vmd->irqs[vmd->first_vec];
>   }
>  
>   raw_spin_lock_irqsave(&list_lock, flags);
> - for (i = 1; i < vmd->msix_count; i++)
> + best = vmd->first_vec + 1;
> + for (i = best; i < vmd->msix_count; i++)
>   if (vmd->irqs[i].count < vmd->irqs[best].count)
>   best = i;
>   vmd->irqs[best].count++;
> @@ -601,6 +609,7 @@ static irqreturn_t vmd_irq(int irq, void *data)
>  
>  static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
>  {
> + unsigned long features = (unsigned long) id->driver_data;
>   struct vmd_dev *vmd;
>   int i, err;
>  
> @@ -625,12 +634,15 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
>      dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)))
>   return -ENODEV;
>  
> + if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
> + vmd->first_vec = 1;
> +
>   vmd->msix_count = pci_msix_vec_count(dev);
>   if (vmd->msix_count < 0)
>   return -ENODEV;
>  
> - vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,
> - PCI_IRQ_MSIX);
> + vmd->msix_count = pci_alloc_irq_vectors(dev, vmd->first_vec + 1,
> + vmd->msix_count, PCI_IRQ_MSIX);
>   if (vmd->msix_count < 0)
>   return vmd->msix_count;
>  
> @@ -654,7 +666,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
>  
>   spin_lock_init(&vmd->cfg_lock);
>   pci_set_drvdata(dev, vmd);
> - err = vmd_enable_domain(vmd, (unsigned long) id->driver_data);
> + err = vmd_enable_domain(vmd, features);
>   if (err)
>   return err;
>  
> @@ -725,11 +737,14 @@ static const struct pci_device_id vmd_ids[] = {
>   .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
>   VMD_FEAT_HAS_BUS_RESTRICTIONS,},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
> - .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
> +       VMD_FEAT_OFFSET_FIRST_VECTOR,},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c3d),
> - .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
> +       VMD_FEAT_OFFSET_FIRST_VECTOR,},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
> - .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
> +       VMD_FEAT_OFFSET_FIRST_VECTOR,},
>   {0,}
>  };
>  MODULE_DEVICE_TABLE(pci, vmd_ids);
>


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team


ACK: [PATCH 1/1][SRU][G] PCI: vmd: Offset Client VMD MSI-X vectors

William Breathitt Gray
On Tue, Dec 22, 2020 at 03:51:20PM +0800, You-Sheng Yang wrote:

> From: Jon Derrick <[hidden email]>
>
> BugLink: https://bugs.launchpad.net/bugs/1894778
>
> Client VMD platforms have a software-triggered MSI-X vector 0 that will
> not forward hardware-remapped MSI from the sub-device domain. This
> causes an issue with VMD platforms that use AHCI behind VMD and have a
> single MSI-X vector remapped to VMD vector 0. Add a VMD MSI-X vector
> offset for these platforms.
>
> Link: https://lore.kernel.org/r/20201102222223.92978-1-jonathan.derrick@...
> Tested-by: Jian-Hong Pan <[hidden email]>
> Signed-off-by: Jon Derrick <[hidden email]>
> Signed-off-by: Lorenzo Pieralisi <[hidden email]>
> (backported from commit f6b7bb847ca821a8aaa1b6da10ee65311e6f15bf)
> Signed-off-by: You-Sheng Yang <[hidden email]>
Acked-by: William Breathitt Gray <[hidden email]>

> ---
>  drivers/pci/controller/vmd.c | 37 +++++++++++++++++++++++++-----------
>  1 file changed, 26 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index ebec0a6e77ed..e03732959583 100644
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -47,6 +47,12 @@ enum vmd_features {
>   * bus numbering
>   */
>   VMD_FEAT_HAS_BUS_RESTRICTIONS = (1 << 1),
> +
> + /*
> + * Device may use MSI-X vector 0 for software triggering and will not
> + * be used for MSI remapping
> + */
> + VMD_FEAT_OFFSET_FIRST_VECTOR = (1 << 3),
>  };
>  
>  /*
> @@ -98,6 +104,7 @@ struct vmd_dev {
>   struct irq_domain *irq_domain;
>   struct pci_bus *bus;
>   u8 busn_start;
> + u8 first_vec;
>  };
>  
>  static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
> @@ -193,11 +200,11 @@ static irq_hw_number_t vmd_get_hwirq(struct msi_domain_info *info,
>   */
>  static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *desc)
>  {
> - int i, best = 1;
>   unsigned long flags;
> + int i, best;
>  
> - if (vmd->msix_count == 1)
> - return &vmd->irqs[0];
> + if (vmd->msix_count == 1 + vmd->first_vec)
> + return &vmd->irqs[vmd->first_vec];
>  
>   /*
>   * White list for fast-interrupt handlers. All others will share the
> @@ -207,11 +214,12 @@ static struct vmd_irq_list *vmd_next_irq(struct vmd_dev *vmd, struct msi_desc *d
>   case PCI_CLASS_STORAGE_EXPRESS:
>   break;
>   default:
> - return &vmd->irqs[0];
> + return &vmd->irqs[vmd->first_vec];
>   }
>  
>   raw_spin_lock_irqsave(&list_lock, flags);
> - for (i = 1; i < vmd->msix_count; i++)
> + best = vmd->first_vec + 1;
> + for (i = best; i < vmd->msix_count; i++)
>   if (vmd->irqs[i].count < vmd->irqs[best].count)
>   best = i;
>   vmd->irqs[best].count++;
> @@ -601,6 +609,7 @@ static irqreturn_t vmd_irq(int irq, void *data)
>  
>  static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
>  {
> + unsigned long features = (unsigned long) id->driver_data;
>   struct vmd_dev *vmd;
>   int i, err;
>  
> @@ -625,12 +634,15 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
>      dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32)))
>   return -ENODEV;
>  
> + if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
> + vmd->first_vec = 1;
> +
>   vmd->msix_count = pci_msix_vec_count(dev);
>   if (vmd->msix_count < 0)
>   return -ENODEV;
>  
> - vmd->msix_count = pci_alloc_irq_vectors(dev, 1, vmd->msix_count,
> - PCI_IRQ_MSIX);
> + vmd->msix_count = pci_alloc_irq_vectors(dev, vmd->first_vec + 1,
> + vmd->msix_count, PCI_IRQ_MSIX);
>   if (vmd->msix_count < 0)
>   return vmd->msix_count;
>  
> @@ -654,7 +666,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
>  
>   spin_lock_init(&vmd->cfg_lock);
>   pci_set_drvdata(dev, vmd);
> - err = vmd_enable_domain(vmd, (unsigned long) id->driver_data);
> + err = vmd_enable_domain(vmd, features);
>   if (err)
>   return err;
>  
> @@ -725,11 +737,14 @@ static const struct pci_device_id vmd_ids[] = {
>   .driver_data = VMD_FEAT_HAS_MEMBAR_SHADOW |
>   VMD_FEAT_HAS_BUS_RESTRICTIONS,},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x467f),
> - .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
> +       VMD_FEAT_OFFSET_FIRST_VECTOR,},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x4c3d),
> - .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
> +       VMD_FEAT_OFFSET_FIRST_VECTOR,},
>   {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
> - .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS,},
> + .driver_data = VMD_FEAT_HAS_BUS_RESTRICTIONS |
> +       VMD_FEAT_OFFSET_FIRST_VECTOR,},
>   {0,}
>  };
>  MODULE_DEVICE_TABLE(pci, vmd_ids);
> --
> 2.29.2
>
>
> --
> kernel-team mailing list
> [hidden email]
> https://lists.ubuntu.com/mailman/listinfo/kernel-team

--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team


APPLIED[G]: [PATCH 0/1][SRU][G/U/OEM-5.10] Fix disk probing under SATA controller behind VMD

Kelsey Skunberg
Applied to Groovy master-next. thank you!

-Kelsey

On 2020-12-22 15:51:19 , You-Sheng Yang wrote:

> BugLink: https://bugs.launchpad.net/bugs/1894778
>
> [Impact]
>
> When booting certain platforms with the boot disk attached to a SATA
> bus behind the Intel VMD controller, disk probing may fail with the
> following error messages left in dmesg:
>
>   [ 6.163286] ata1.00: qc timeout (cmd 0xec)
>   [ 6.165630] ata1.00: failed to IDENTIFY (I/O error, err_mask=0x4)
>
> [Fix]
>
> Upstream commit f6b7bb847ca8 ("PCI: vmd: Offset Client VMD MSI-X
> vectors"), currently in the vanilla kernel tree for v5.11.
>
> [Test Case]
>
> Check dmesg/lsblk to confirm the disk is probed.
>
> For the PCI MSI address, check the lspci output:
>
>   $ lspci -vvnn
>   ....
>       Capabilities: [80] MSI: Enable+ Count=1/1 maskable- 64bit-
>           Address: fee00000  Data: 0000
>
> When probing fails, the address is fee00000. With a patched kernel:
>
>   $ lspci -vvnn
>   ....
>       Capabilities: [80] MSI: Enable+ Count=1/1 maskable- 64bit-
>           Address: fee01000  Data: 0000
>
> [Where problems could occur]
>
> An unpatched kernel cannot probe SATA controllers moved behind VMD
> when VMD/RAID mode is enabled in the BIOS, leaving the disks attached
> to them completely unusable. With this change the kernel is able to
> probe them, but it may also hit issues that only occur under such a
> configuration. However, in the worst case the SATA disks can still be
> moved away from the VMD bus, as is required today without this fix, so
> the risk here should be justified.
>
> Jon Derrick (1):
>   PCI: vmd: Offset Client VMD MSI-X vectors
>
>  drivers/pci/controller/vmd.c | 37 +++++++++++++++++++++++++-----------
>  1 file changed, 26 insertions(+), 11 deletions(-)
>
> --
> 2.29.2
>
>
> --
> kernel-team mailing list
> [hidden email]
> https://lists.ubuntu.com/mailman/listinfo/kernel-team

--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team