[SRU][Bionic][Cosmic][PATCH 0/3] Fixes for LP:1792195


Joseph Salisbury-3
BugLink: https://bugs.launchpad.net/bugs/1792195

== SRU Justification ==
IBM is requesting these commits in bionic and cosmic.  These commits
also rely on commit 7acf50e4efa6, which was SRU'd in bug 1792102.  The
first patch, commit 2bf1071a8d50, was backported by IBM itself.

Description of bug:
The GPFS mmfsd daemon maps a shared tracing buffer (allocated by a kernel
driver using vmalloc) and then writes trace records to it from user-space
threads in parallel. When the SIGBUS occurred, the accessed virtual address
was within the mapped range, so the access did not overflow the mapping.
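
As a minimal userspace sketch of that access pattern (not GPFS code; the
device node and buffer size below are made up for illustration), a driver
exports a vmalloc'd buffer through mmap() and several threads then write
records into the mapping concurrently:

#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define TRACE_BUF_SIZE (1 << 20)        /* assumed size of shared buffer */
#define NR_WRITERS 4

static char *trace_buf;

static void *trace_writer(void *arg)
{
        size_t slot = (size_t)arg * 64;

        /* Every store targets an address inside the mapped range. */
        memset(trace_buf + slot, 0xab, 64);
        return NULL;
}

int main(void)
{
        pthread_t t[NR_WRITERS];
        long i;
        int fd = open("/dev/trace_example", O_RDWR);  /* hypothetical node */

        if (fd < 0)
                return 1;
        trace_buf = mmap(NULL, TRACE_BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (trace_buf == MAP_FAILED)
                return 1;
        for (i = 0; i < NR_WRITERS; i++)
                pthread_create(&t[i], NULL, trace_writer, (void *)i);
        for (i = 0; i < NR_WRITERS; i++)
                pthread_join(t[i], NULL);
        munmap(trace_buf, TRACE_BUF_SIZE);
        close(fd);
        return 0;
}

Under the bug, any of these stores can race with a PTE permission update
and hit the invalid window described next.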

The root cause is that for PTEs created by a driver at mmap time (i.e.,
PTEs that are not created dynamically at fault time), it is not legitimate
for ptep_set_access_flags() to make them invalid, even temporarily. A
concurrent access while they are invalid cannot be serviced by the page
fault handler and causes a SIGBUS.
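
To make the race concrete, here is a hedged userspace model, not kernel
code: "pte" stands in for a hardware PTE, relax_access() mimics the old
sequence used by radix__ptep_set_access_flags() (clear the whole PTE,
flush, write back the new value), and user_access() mimics a concurrent
thread touching the mapping. An access that lands in the invalid window
corresponds to the SIGBUS, since the fault handler cannot recreate a
driver-established mapping:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define PTE_PRESENT 0x8000000000000000UL  /* mirrors _PAGE_PRESENT */
#define PTE_RW      0x2UL                 /* illustrative bit only */

static atomic_ulong pte = PTE_PRESENT;    /* valid, read-only entry */

static void *relax_access(void *unused)   /* old R -> RW upgrade path */
{
        unsigned long old;

        (void)unused;
        old = atomic_exchange(&pte, 0UL);     /* invalid window opens */
        /* ... the TLB flush for the old entry happens here ... */
        atomic_store(&pte, old | PTE_RW);     /* invalid window closes */
        return NULL;
}

static void *user_access(void *unused)    /* concurrent mapped access */
{
        (void)unused;
        if (!(atomic_load(&pte) & PTE_PRESENT))
                puts("hit the invalid window: SIGBUS on real hardware");
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, relax_access, NULL);
        pthread_create(&b, NULL, user_access, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
}

Patches 2/3 and 3/3 below close this window by keeping the entry marked
present (clearing _PAGE_PRESENT but setting the software bit _PAGE_INVALID)
and by applying the invalidation sequence only to the R -> RW transition
that the Nest MMU workaround actually needs.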


== Fixes ==
2bf1071a8d50 ("powerpc/64s: Remove POWER9 DD1 support")
bd0dbb73e013 ("powerpc/mm/books3s: Add new pte bit to mark pte temporarily invalid.")
f08d08f3db55 ("powerpc/mm/radix: Only need the Nest MMU workaround for R -> RW transition")


== Regression Potential ==
Low.  The changes are limited to powerpc, and the removed POWER9 DD1
support covers pre-release hardware that was never shipped as a product.

== Test Case ==
A test kernel was built with these patches and tested by IBM.
IBM states the test kernel resolved the bug.

Aneesh Kumar K.V (2):
  powerpc/mm/books3s: Add new pte bit to mark pte temporarily invalid.
  powerpc/mm/radix: Only need the Nest MMU workaround for R -> RW
    transition

Nicholas Piggin (1):
  powerpc/64s: Remove POWER9 DD1 support

 arch/powerpc/include/asm/book3s/64/hugetlb.h       | 20 -------
 arch/powerpc/include/asm/book3s/64/pgtable.h       | 23 ++++++--
 arch/powerpc/include/asm/book3s/64/radix.h         | 35 ++----------
 .../powerpc/include/asm/book3s/64/tlbflush-radix.h |  2 -
 arch/powerpc/include/asm/cputable.h                |  6 +-
 arch/powerpc/include/asm/paca.h                    |  5 --
 arch/powerpc/kernel/asm-offsets.c                  |  1 -
 arch/powerpc/kernel/cputable.c                     | 20 -------
 arch/powerpc/kernel/dt_cpu_ftrs.c                  | 13 +++--
 arch/powerpc/kernel/exceptions-64s.S               |  4 +-
 arch/powerpc/kernel/idle_book3s.S                  | 50 ----------------
 arch/powerpc/kernel/process.c                      | 10 +---
 arch/powerpc/kvm/book3s_64_mmu_radix.c             | 15 +----
 arch/powerpc/kvm/book3s_hv.c                       | 10 ----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S            | 16 +-----
 arch/powerpc/kvm/book3s_xive_template.c            | 39 +++++--------
 arch/powerpc/mm/hash_utils_64.c                    | 30 ----------
 arch/powerpc/mm/hugetlbpage.c                      |  9 +--
 arch/powerpc/mm/mmu_context_book3s64.c             | 12 +---
 arch/powerpc/mm/pgtable-radix.c                    | 66 ++--------------------
 arch/powerpc/mm/tlb-radix.c                        | 18 ------
 arch/powerpc/perf/core-book3s.c                    | 34 -----------
 arch/powerpc/perf/isa207-common.c                  | 12 ++--
 arch/powerpc/perf/isa207-common.h                  |  5 --
 arch/powerpc/perf/power9-pmu.c                     | 54 +-----------------
 arch/powerpc/platforms/powernv/idle.c              | 27 ---------
 arch/powerpc/platforms/powernv/smp.c               | 27 +--------
 arch/powerpc/sysdev/xive/common.c                  |  8 +--
 drivers/misc/cxl/cxl.h                             |  8 ---
 drivers/misc/cxl/cxllib.c                          |  4 --
 drivers/misc/cxl/pci.c                             | 41 ++++++--------
 31 files changed, 91 insertions(+), 533 deletions(-)

--
2.7.4


--
kernel-team mailing list
[hidden email]
https://lists.ubuntu.com/mailman/listinfo/kernel-team

[SRU][Bionic][Cosmic][PATCH 1/3] powerpc/64s: Remove POWER9 DD1 support

Joseph Salisbury-3
From: Nicholas Piggin <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1792195

POWER9 DD1 was never a product. It is no longer supported by upstream
firmware, and it is not effectively supported in Linux due to lack of
testing.

Signed-off-by: Nicholas Piggin <[hidden email]>
Reviewed-by: Michael Ellerman <[hidden email]>
[mpe: Remove arch_make_huge_pte() entirely]
Signed-off-by: Michael Ellerman <[hidden email]>
(backported from commit 2bf1071a8d50928a4ae366bb3108833166c2b70c)
Signed-off-by: Michael Ranweiler <[hidden email]>
Signed-off-by: Joseph Salisbury <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/hugetlb.h       | 20 --------
 arch/powerpc/include/asm/book3s/64/pgtable.h       |  5 +-
 arch/powerpc/include/asm/book3s/64/radix.h         | 35 ++-----------
 .../powerpc/include/asm/book3s/64/tlbflush-radix.h |  2 -
 arch/powerpc/include/asm/cputable.h                |  6 +--
 arch/powerpc/include/asm/paca.h                    |  5 --
 arch/powerpc/kernel/asm-offsets.c                  |  1 -
 arch/powerpc/kernel/cputable.c                     | 20 --------
 arch/powerpc/kernel/dt_cpu_ftrs.c                  | 13 +++--
 arch/powerpc/kernel/exceptions-64s.S               |  4 +-
 arch/powerpc/kernel/idle_book3s.S                  | 50 ------------------
 arch/powerpc/kernel/process.c                      | 10 +---
 arch/powerpc/kvm/book3s_64_mmu_radix.c             | 15 +-----
 arch/powerpc/kvm/book3s_hv.c                       | 10 ----
 arch/powerpc/kvm/book3s_hv_rmhandlers.S            | 16 +-----
 arch/powerpc/kvm/book3s_xive_template.c            | 39 +++++---------
 arch/powerpc/mm/hash_utils_64.c                    | 30 -----------
 arch/powerpc/mm/hugetlbpage.c                      |  9 ++--
 arch/powerpc/mm/mmu_context_book3s64.c             | 12 +----
 arch/powerpc/mm/pgtable-radix.c                    | 60 +---------------------
 arch/powerpc/mm/tlb-radix.c                        | 18 -------
 arch/powerpc/perf/core-book3s.c                    | 34 ------------
 arch/powerpc/perf/isa207-common.c                  | 12 ++---
 arch/powerpc/perf/isa207-common.h                  |  5 --
 arch/powerpc/perf/power9-pmu.c                     | 54 +------------------
 arch/powerpc/platforms/powernv/idle.c              | 27 ----------
 arch/powerpc/platforms/powernv/smp.c               | 27 ++--------
 arch/powerpc/sysdev/xive/common.c                  |  8 +--
 drivers/misc/cxl/cxl.h                             |  8 ---
 drivers/misc/cxl/cxllib.c                          |  4 --
 drivers/misc/cxl/pci.c                             | 41 ++++++---------
 31 files changed, 70 insertions(+), 530 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
index c459f93..5088838 100644
--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
+++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
@@ -32,26 +32,6 @@ static inline int hstate_get_psize(struct hstate *hstate)
  }
 }
 
-#define arch_make_huge_pte arch_make_huge_pte
-static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
-       struct page *page, int writable)
-{
- unsigned long page_shift;
-
- if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
- return entry;
-
- page_shift = huge_page_shift(hstate_vma(vma));
- /*
- * We don't support 1G hugetlb pages yet.
- */
- VM_WARN_ON(page_shift == mmu_psize_defs[MMU_PAGE_1G].shift);
- if (page_shift == mmu_psize_defs[MMU_PAGE_2M].shift)
- return __pte(pte_val(entry) | R_PAGE_LARGE);
- else
- return entry;
-}
-
 #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
 static inline bool gigantic_page_supported(void)
 {
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index bddf18a..674990c 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -454,9 +454,8 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 {
  if (full && radix_enabled()) {
  /*
- * Let's skip the DD1 style pte update here. We know that
- * this is a full mm pte clear and hence can be sure there is
- * no parallel set_pte.
+ * We know that this is a full mm pte clear and
+ * hence can be sure there is no parallel set_pte.
  */
  return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
  }
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 2509344..eaa4591 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -12,12 +12,6 @@
 #include <asm/book3s/64/radix-4k.h>
 #endif
 
-/*
- * For P9 DD1 only, we need to track whether the pte's huge.
- */
-#define R_PAGE_LARGE _RPAGE_RSV1
-
-
 #ifndef __ASSEMBLY__
 #include <asm/book3s/64/tlbflush-radix.h>
 #include <asm/cpu_has_feature.h>
@@ -153,20 +147,7 @@ static inline unsigned long radix__pte_update(struct mm_struct *mm,
 {
  unsigned long old_pte;
 
- if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
-
- unsigned long new_pte;
-
- old_pte = __radix_pte_update(ptep, ~0ul, 0);
- /*
- * new value of pte
- */
- new_pte = (old_pte | set) & ~clr;
- radix__flush_tlb_pte_p9_dd1(old_pte, mm, addr);
- if (new_pte)
- __radix_pte_update(ptep, 0, new_pte);
- } else
- old_pte = __radix_pte_update(ptep, clr, set);
+ old_pte = __radix_pte_update(ptep, clr, set);
  if (!huge)
  assert_pte_locked(mm, addr);
 
@@ -241,8 +222,6 @@ static inline int radix__pmd_trans_huge(pmd_t pmd)
 
 static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
 {
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- return __pmd(pmd_val(pmd) | _PAGE_PTE | R_PAGE_LARGE);
  return __pmd(pmd_val(pmd) | _PAGE_PTE);
 }
 static inline void radix__pmdp_huge_split_prepare(struct vm_area_struct *vma,
@@ -279,18 +258,14 @@ static inline unsigned long radix__get_tree_size(void)
  unsigned long rts_field;
  /*
  * We support 52 bits, hence:
- *  DD1    52-28 = 24, 0b11000
- *  Others 52-31 = 21, 0b10101
+ * bits 52 - 31 = 21, 0b10101
  * RTS encoding details
  * bits 0 - 3 of rts -> bits 6 - 8 unsigned long
  * bits 4 - 5 of rts -> bits 62 - 63 of unsigned long
  */
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- rts_field = (0x3UL << 61);
- else {
- rts_field = (0x5UL << 5); /* 6 - 8 bits */
- rts_field |= (0x2UL << 61);
- }
+ rts_field = (0x5UL << 5); /* 6 - 8 bits */
+ rts_field |= (0x2UL << 61);
+
  return rts_field;
 }
 
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
index 6a9e680..a0fe684 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
@@ -45,6 +45,4 @@ extern void radix__flush_tlb_lpid_va(unsigned long lpid, unsigned long gpa,
      unsigned long page_size);
 extern void radix__flush_tlb_lpid(unsigned long lpid);
 extern void radix__flush_tlb_all(void);
-extern void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
- unsigned long address);
 #endif
diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index 82ca727..aab3b68 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -222,7 +222,6 @@ enum {
 #define CPU_FTR_DAWR LONG_ASM_CONST(0x0000008000000000)
 #define CPU_FTR_DABRX LONG_ASM_CONST(0x0000010000000000)
 #define CPU_FTR_PMAO_BUG LONG_ASM_CONST(0x0000020000000000)
-#define CPU_FTR_POWER9_DD1 LONG_ASM_CONST(0x0000040000000000)
 #define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x0000080000000000)
 #define CPU_FTR_P9_TM_HV_ASSIST LONG_ASM_CONST(0x0000100000000000)
 #define CPU_FTR_P9_TM_XER_SO_BUG LONG_ASM_CONST(0x0000200000000000)
@@ -480,8 +479,6 @@ enum {
     CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
     CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | \
     CPU_FTR_P9_TLBIE_BUG | CPU_FTR_P9_TIDR)
-#define CPU_FTRS_POWER9_DD1 ((CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1) & \
-     (~CPU_FTR_SAO))
 #define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
 #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
 #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_P9_TM_HV_ASSIST | \
@@ -505,8 +502,7 @@ enum {
      CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
      CPU_FTRS_POWER8 | CPU_FTRS_POWER8_DD1 | CPU_FTRS_CELL | \
      CPU_FTRS_PA6T | CPU_FTR_VSX | CPU_FTRS_POWER9 | \
-     CPU_FTRS_POWER9_DD1 | CPU_FTRS_POWER9_DD2_1 | \
-     CPU_FTRS_POWER9_DD2_2)
+     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2)
 #endif
 #else
 enum {
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index b3ec196..da6a25f 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -184,11 +184,6 @@ struct paca_struct {
  u8 subcore_sibling_mask;
  /* Flag to request this thread not to stop */
  atomic_t dont_stop;
- /*
- * Pointer to an array which contains pointer
- * to the sibling threads' paca.
- */
- struct paca_struct **thread_sibling_pacas;
  /* The PSSCR value that the kernel requested before going to stop */
  u64 requested_psscr;
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index a65c54c..7e1cbc8 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -754,7 +754,6 @@ int main(void)
  OFFSET(PACA_THREAD_IDLE_STATE, paca_struct, thread_idle_state);
  OFFSET(PACA_THREAD_MASK, paca_struct, thread_mask);
  OFFSET(PACA_SUBCORE_SIBLING_MASK, paca_struct, subcore_sibling_mask);
- OFFSET(PACA_SIBLING_PACA_PTRS, paca_struct, thread_sibling_pacas);
  OFFSET(PACA_REQ_PSSCR, paca_struct, requested_psscr);
  OFFSET(PACA_DONT_STOP, paca_struct, dont_stop);
 #define STOP_SPR(x, f) OFFSET(x, paca_struct, stop_sprs.f)
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index bc2b461..13acd1c 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -527,26 +527,6 @@ static struct cpu_spec __initdata cpu_specs[] = {
  .machine_check_early = __machine_check_early_realmode_p8,
  .platform = "power8",
  },
- { /* Power9 DD1*/
- .pvr_mask = 0xffffff00,
- .pvr_value = 0x004e0100,
- .cpu_name = "POWER9 (raw)",
- .cpu_features = CPU_FTRS_POWER9_DD1,
- .cpu_user_features = COMMON_USER_POWER9,
- .cpu_user_features2 = COMMON_USER2_POWER9,
- .mmu_features = MMU_FTRS_POWER9,
- .icache_bsize = 128,
- .dcache_bsize = 128,
- .num_pmcs = 6,
- .pmc_type = PPC_PMC_IBM,
- .oprofile_cpu_type = "ppc64/power9",
- .oprofile_type = PPC_OPROFILE_INVALID,
- .cpu_setup = __setup_cpu_power9,
- .cpu_restore = __restore_cpu_power9,
- .flush_tlb = __flush_tlb_power9,
- .machine_check_early = __machine_check_early_realmode_p9,
- .platform = "power9",
- },
  { /* Power9 DD2.0 */
  .pvr_mask = 0xffffefff,
  .pvr_value = 0x004e0200,
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index fa7f063..350ea04 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -741,13 +741,16 @@ static __init void cpufeatures_cpu_quirks(void)
  /*
  * Not all quirks can be derived from the cpufeatures device tree.
  */
- if ((version & 0xffffff00) == 0x004e0100)
- cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD1;
+ if ((version & 0xffffefff) == 0x004e0200)
+ ; /* DD2.0 has no feature flag */
  else if ((version & 0xffffefff) == 0x004e0201)
  cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
- else if ((version & 0xffffefff) == 0x004e0202)
- cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST |
- CPU_FTR_P9_TM_XER_SO_BUG;
+ else if ((version & 0xffffefff) == 0x004e0202) {
+ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST;
+ cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG;
+ cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
+ } else /* DD2.1 and up have DD2_1 */
+ cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
 
  if ((version & 0xffff0000) == 0x004e0000) {
  cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_BUG;
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 59f5cfa..724bd35 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -276,9 +276,7 @@ BEGIN_FTR_SECTION
  *
  * This interrupt can wake directly from idle. If that is the case,
  * the machine check is handled then the idle wakeup code is called
- * to restore state. In that case, the POWER9 DD1 idle PACA workaround
- * is not applied in the early machine check code, which will cause
- * bugs.
+ * to restore state.
  */
  mr r11,r1 /* Save r1 */
  lhz r10,PACA_IN_MCE(r13)
diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
index f3ac31c..49439fc 100644
--- a/arch/powerpc/kernel/idle_book3s.S
+++ b/arch/powerpc/kernel/idle_book3s.S
@@ -455,43 +455,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
  blr /* return 0 for wakeup cause / SRR1 value */
 
 /*
- * On waking up from stop 0,1,2 with ESL=1 on POWER9 DD1,
- * HSPRG0 will be set to the HSPRG0 value of one of the
- * threads in this core. Thus the value we have in r13
- * may not be this thread's paca pointer.
- *
- * Fortunately, the TIR remains invariant. Since this thread's
- * paca pointer is recorded in all its sibling's paca, we can
- * correctly recover this thread's paca pointer if we
- * know the index of this thread in the core.
- *
- * This index can be obtained from the TIR.
- *
- * i.e, thread's position in the core = TIR.
- * If this value is i, then this thread's paca is
- * paca->thread_sibling_pacas[i].
- */
-power9_dd1_recover_paca:
- mfspr r4, SPRN_TIR
- /*
- * Since each entry in thread_sibling_pacas is 8 bytes
- * we need to left-shift by 3 bits. Thus r4 = i * 8
- */
- sldi r4, r4, 3
- /* Get &paca->thread_sibling_pacas[0] in r5 */
- ld r5, PACA_SIBLING_PACA_PTRS(r13)
- /* Load paca->thread_sibling_pacas[i] into r13 */
- ldx r13, r4, r5
- SET_PACA(r13)
- /*
- * Indicate that we have lost NVGPR state
- * which needs to be restored from the stack.
- */
- li r3, 1
- stb r3,PACA_NAPSTATELOST(r13)
- blr
-
-/*
  * Called from machine check handler for powersave wakeups.
  * Low level machine check processing has already been done. Now just
  * go through the wake up path to get everything in order.
@@ -525,9 +488,6 @@ pnv_powersave_wakeup:
  ld r2, PACATOC(r13)
 
 BEGIN_FTR_SECTION
-BEGIN_FTR_SECTION_NESTED(70)
- bl power9_dd1_recover_paca
-END_FTR_SECTION_NESTED_IFSET(CPU_FTR_POWER9_DD1, 70)
  bl pnv_restore_hyp_resource_arch300
 FTR_SECTION_ELSE
  bl pnv_restore_hyp_resource_arch207
@@ -587,22 +547,12 @@ END_FTR_SECTION_IFCLR(CPU_FTR_POWER9_DD2_1)
  LOAD_REG_ADDRBASE(r5,pnv_first_deep_stop_state)
  ld r4,ADDROFF(pnv_first_deep_stop_state)(r5)
 
-BEGIN_FTR_SECTION_NESTED(71)
- /*
- * Assume that we are waking up from the state
- * same as the Requested Level (RL) in the PSSCR
- * which are Bits 60-63
- */
- ld r5,PACA_REQ_PSSCR(r13)
- rldicl  r5,r5,0,60
-FTR_SECTION_ELSE_NESTED(71)
  /*
  * 0-3 bits correspond to Power-Saving Level Status
  * which indicates the idle state we are waking up from
  */
  mfspr r5, SPRN_PSSCR
  rldicl  r5,r5,4,60
-ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_POWER9_DD1, 71)
  li r0, 0 /* clear requested_psscr to say we're awake */
  std r0, PACA_REQ_PSSCR(r13)
  cmpd cr4,r5,r4
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 83478a9..e73a80d 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1247,17 +1247,9 @@ struct task_struct *__switch_to(struct task_struct *prev,
  * mappings. If the new process has the foreign real address
  * mappings, we must issue a cp_abort to clear any state and
  * prevent snooping, corruption or a covert channel.
- *
- * DD1 allows paste into normal system memory so we do an
- * unpaired copy, rather than cp_abort, to clear the buffer,
- * since cp_abort is quite expensive.
  */
- if (current_thread_info()->task->thread.used_vas) {
+ if (current_thread_info()->task->thread.used_vas)
  asm volatile(PPC_CP_ABORT);
- } else if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- asm volatile(PPC_COPY(%0, %1)
- : : "r"(dummy_copy_buffer), "r"(0));
- }
  }
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 5d9bafe..dd8980f 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -66,10 +66,7 @@ int kvmppc_mmu_radix_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
  bits = root & RPDS_MASK;
  root = root & RPDB_MASK;
 
- /* P9 DD1 interprets RTS (radix tree size) differently */
  offset = rts + 31;
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- offset -= 3;
 
  /* current implementations only support 52-bit space */
  if (offset != 52)
@@ -167,17 +164,7 @@ unsigned long kvmppc_radix_update_pte(struct kvm *kvm, pte_t *ptep,
       unsigned long clr, unsigned long set,
       unsigned long addr, unsigned int shift)
 {
- unsigned long old = 0;
-
- if (!(clr & _PAGE_PRESENT) && cpu_has_feature(CPU_FTR_POWER9_DD1) &&
-    pte_present(*ptep)) {
- /* have to invalidate it first */
- old = __radix_pte_update(ptep, _PAGE_PRESENT, 0);
- kvmppc_radix_tlbie_page(kvm, addr, shift);
- set |= _PAGE_PRESENT;
- old &= _PAGE_PRESENT;
- }
- return __radix_pte_update(ptep, clr, set) | old;
+ return __radix_pte_update(ptep, clr, set);
 }
 
 void kvmppc_radix_set_pte_at(struct kvm *kvm, unsigned long addr,
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index dc9eb6b..51278f8 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1662,14 +1662,6 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
  r = set_vpa(vcpu, &vcpu->arch.dtl, addr, len);
  break;
  case KVM_REG_PPC_TB_OFFSET:
- /*
- * POWER9 DD1 has an erratum where writing TBU40 causes
- * the timebase to lose ticks.  So we don't let the
- * timebase offset be changed on P9 DD1.  (It is
- * initialized to zero.)
- */
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- break;
  /* round up to multiple of 2^24 */
  vcpu->arch.vcore->tb_offset =
  ALIGN(set_reg_val(id, *val), 1UL << 24);
@@ -1987,8 +1979,6 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
  /*
  * Set the default HFSCR for the guest from the host value.
  * This value is only used on POWER9.
- * On POWER9 DD1, TM doesn't work, so we make sure to
- * prevent the guest from using it.
  * On POWER9, we want to virtualize the doorbell facility, so we
  * turn off the HFSCR bit, which causes those instructions to trap.
  */
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 293a659..1c35836 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -907,9 +907,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
  mtspr SPRN_PID, r7
  mtspr SPRN_WORT, r8
 BEGIN_FTR_SECTION
- PPC_INVALIDATE_ERAT
-END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
-BEGIN_FTR_SECTION
  /* POWER8-only registers */
  ld r5, VCPU_TCSCR(r4)
  ld r6, VCPU_ACOP(r4)
@@ -1849,7 +1846,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
  ld r5, VCPU_KVM(r9)
  lbz r0, KVM_RADIX(r5)
  cmpwi cr2, r0, 0
- beq cr2, 4f
+ beq cr2, 2f
 
  /* Radix: Handle the case where the guest used an illegal PID */
  LOAD_REG_ADDR(r4, mmu_base_pid)
@@ -1881,11 +1878,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
  bdnz 1b
  ptesync
 
-2: /* Flush the ERAT on radix P9 DD1 guest exit */
-BEGIN_FTR_SECTION
- PPC_INVALIDATE_ERAT
-END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
-4:
+2:
 #endif /* CONFIG_PPC_RADIX_MMU */
 
  /*
@@ -3432,11 +3425,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
  mtspr SPRN_CIABR, r0
  mtspr SPRN_DAWRX, r0
 
- /* Flush the ERAT on radix P9 DD1 guest exit */
-BEGIN_FTR_SECTION
- PPC_INVALIDATE_ERAT
-END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
-
 BEGIN_MMU_FTR_SECTION
  b 4f
 END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c
index c7a5dea..3191961 100644
--- a/arch/powerpc/kvm/book3s_xive_template.c
+++ b/arch/powerpc/kvm/book3s_xive_template.c
@@ -22,18 +22,6 @@ static void GLUE(X_PFX,ack_pending)(struct kvmppc_xive_vcpu *xc)
  */
  eieio();
 
- /*
- * DD1 bug workaround: If PIPR is less favored than CPPR
- * ignore the interrupt or we might incorrectly lose an IPB
- * bit.
- */
- if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- __be64 qw1 = __x_readq(__x_tima + TM_QW1_OS);
- u8 pipr = be64_to_cpu(qw1) & 0xff;
- if (pipr >= xc->hw_cppr)
- return;
- }
-
  /* Perform the acknowledge OS to register cycle. */
  ack = be16_to_cpu(__x_readw(__x_tima + TM_SPC_ACK_OS_REG));
 
@@ -86,8 +74,15 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
  /* If the XIVE supports the new "store EOI facility, use it */
  if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)
  __x_writeq(0, __x_eoi_page(xd) + XIVE_ESB_STORE_EOI);
- else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW) {
+ else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW)
  opal_int_eoi(hw_irq);
+ else if (xd->flags & XIVE_IRQ_FLAG_LSI) {
+ /*
+ * For LSIs the HW EOI cycle is used rather than PQ bits,
+ * as they are automatically re-triggred in HW when still
+ * pending.
+ */
+ __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
  } else {
  uint64_t eoi_val;
 
@@ -99,20 +94,12 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
  *
  * This allows us to then do a re-trigger if Q was set
  * rather than synthetizing an interrupt in software
- *
- * For LSIs, using the HW EOI cycle works around a problem
- * on P9 DD1 PHBs where the other ESB accesses don't work
- * properly.
  */
- if (xd->flags & XIVE_IRQ_FLAG_LSI)
- __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
- else {
- eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
-
- /* Re-trigger if needed */
- if ((eoi_val & 1) && __x_trig_page(xd))
- __x_writeq(0, __x_trig_page(xd));
- }
+ eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
+
+ /* Re-trigger if needed */
+ if ((eoi_val & 1) && __x_trig_page(xd))
+ __x_writeq(0, __x_trig_page(xd));
  }
 }
 
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index db84680..06574b4 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -802,31 +802,6 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
 }
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
-static void update_hid_for_hash(void)
-{
- unsigned long hid0;
- unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
-
- asm volatile("ptesync": : :"memory");
- /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
- asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
-     : : "r"(rb), "i"(0), "i"(0), "i"(2), "r"(0) : "memory");
- asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
- trace_tlbie(0, 0, rb, 0, 2, 0, 0);
-
- /*
- * now switch the HID
- */
- hid0  = mfspr(SPRN_HID0);
- hid0 &= ~HID0_POWER9_RADIX;
- mtspr(SPRN_HID0, hid0);
- asm volatile("isync": : :"memory");
-
- /* Wait for it to happen */
- while ((mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
- cpu_relax();
-}
-
 static void __init hash_init_partition_table(phys_addr_t hash_table,
      unsigned long htab_size)
 {
@@ -839,8 +814,6 @@ static void __init hash_init_partition_table(phys_addr_t hash_table,
  htab_size =  __ilog2(htab_size) - 18;
  mmu_partition_table_set_entry(0, hash_table | htab_size, 0);
  pr_info("Partition table %p\n", partition_tb);
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- update_hid_for_hash();
 }
 
 static void __init htab_initialize(void)
@@ -1063,9 +1036,6 @@ void hash__early_init_mmu_secondary(void)
  /* Initialize hash table for that CPU */
  if (!firmware_has_feature(FW_FEATURE_LPAR)) {
 
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- update_hid_for_hash();
-
  if (!cpu_has_feature(CPU_FTR_ARCH_300))
  mtspr(SPRN_SDR1, _SDR1);
  else
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 79e1378..6f7b831 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -609,15 +609,12 @@ static int __init add_huge_page_size(unsigned long long size)
  * firmware we only add hugetlb support for page sizes that can be
  * supported by linux page table layout.
  * For now we have
- * Radix: 2M
+ * Radix: 2M and 1G
  * Hash: 16M and 16G
  */
  if (radix_enabled()) {
- if (mmu_psize != MMU_PAGE_2M) {
- if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
-    (mmu_psize != MMU_PAGE_1G))
- return -EINVAL;
- }
+ if (mmu_psize != MMU_PAGE_2M && mmu_psize != MMU_PAGE_1G)
+ return -EINVAL;
  } else {
  if (mmu_psize != MMU_PAGE_16M && mmu_psize != MMU_PAGE_16G)
  return -EINVAL;
diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
index 5066276..208f687 100644
--- a/arch/powerpc/mm/mmu_context_book3s64.c
+++ b/arch/powerpc/mm/mmu_context_book3s64.c
@@ -250,15 +250,7 @@ void arch_exit_mmap(struct mm_struct *mm)
 #ifdef CONFIG_PPC_RADIX_MMU
 void radix__switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
 {
-
- if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- isync();
- mtspr(SPRN_PID, next->context.id);
- isync();
- asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
- } else {
- mtspr(SPRN_PID, next->context.id);
- isync();
- }
+ mtspr(SPRN_PID, next->context.id);
+ isync();
 }
 #endif
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index a778560..704362d 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -171,16 +171,6 @@ void radix__mark_rodata_ro(void)
 {
  unsigned long start, end;
 
- /*
- * mark_rodata_ro() will mark itself as !writable at some point.
- * Due to DD1 workaround in radix__pte_update(), we'll end up with
- * an invalid pte and the system will crash quite severly.
- */
- if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- pr_warn("Warning: Unable to mark rodata read only on P9 DD1\n");
- return;
- }
-
  start = (unsigned long)_stext;
  end = (unsigned long)__init_begin;
 
@@ -470,35 +460,6 @@ void __init radix__early_init_devtree(void)
  return;
 }
 
-static void update_hid_for_radix(void)
-{
- unsigned long hid0;
- unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
-
- asm volatile("ptesync": : :"memory");
- /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
- asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
-     : : "r"(rb), "i"(1), "i"(0), "i"(2), "r"(0) : "memory");
- /* prs = 1, ric = 2, rs = 0, r = 1 is = 3 */
- asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
-     : : "r"(rb), "i"(1), "i"(1), "i"(2), "r"(0) : "memory");
- asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
- trace_tlbie(0, 0, rb, 0, 2, 0, 1);
- trace_tlbie(0, 0, rb, 0, 2, 1, 1);
-
- /*
- * now switch the HID
- */
- hid0  = mfspr(SPRN_HID0);
- hid0 |= HID0_POWER9_RADIX;
- mtspr(SPRN_HID0, hid0);
- asm volatile("isync": : :"memory");
-
- /* Wait for it to happen */
- while (!(mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
- cpu_relax();
-}
-
 static void radix_init_amor(void)
 {
  /*
@@ -513,22 +474,12 @@ static void radix_init_amor(void)
 
 static void radix_init_iamr(void)
 {
- unsigned long iamr;
-
- /*
- * The IAMR should set to 0 on DD1.
- */
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- iamr = 0;
- else
- iamr = (1ul << 62);
-
  /*
  * Radix always uses key0 of the IAMR to determine if an access is
  * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
  * fetch.
  */
- mtspr(SPRN_IAMR, iamr);
+ mtspr(SPRN_IAMR, (1ul << 62));
 }
 
 void __init radix__early_init_mmu(void)
@@ -583,8 +534,6 @@ void __init radix__early_init_mmu(void)
 
  if (!firmware_has_feature(FW_FEATURE_LPAR)) {
  radix_init_native();
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- update_hid_for_radix();
  lpcr = mfspr(SPRN_LPCR);
  mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
  radix_init_partition_table();
@@ -608,10 +557,6 @@ void radix__early_init_mmu_secondary(void)
  * update partition table control register and UPRT
  */
  if (!firmware_has_feature(FW_FEATURE_LPAR)) {
-
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- update_hid_for_radix();
-
  lpcr = mfspr(SPRN_LPCR);
  mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
 
@@ -1029,8 +974,7 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
  * To avoid NMMU hang while relaxing access, we need mark
  * the pte invalid in between.
  */
- if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
-    atomic_read(&mm->context.copros) > 0) {
+ if (atomic_read(&mm->context.copros) > 0) {
  unsigned long old_pte, new_pte;
 
  old_pte = __radix_pte_update(ptep, ~0, 0);
diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
index c07c2f0..b0cad4f 100644
--- a/arch/powerpc/mm/tlb-radix.c
+++ b/arch/powerpc/mm/tlb-radix.c
@@ -658,24 +658,6 @@ void radix__flush_tlb_all(void)
  asm volatile("eieio; tlbsync; ptesync": : :"memory");
 }
 
-void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
- unsigned long address)
-{
- /*
- * We track page size in pte only for DD1, So we can
- * call this only on DD1.
- */
- if (!cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- VM_WARN_ON(1);
- return;
- }
-
- if (old_pte & R_PAGE_LARGE)
- radix__flush_tlb_page_psize(mm, address, MMU_PAGE_2M);
- else
- radix__flush_tlb_page_psize(mm, address, mmu_virtual_psize);
-}
-
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 extern void radix_kvm_prefetch_workaround(struct mm_struct *mm)
 {
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index b7a6044..8ce6673 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -128,10 +128,6 @@ static inline void power_pmu_bhrb_disable(struct perf_event *event) {}
 static void power_pmu_sched_task(struct perf_event_context *ctx, bool sched_in) {}
 static inline void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw) {}
 static void pmao_restore_workaround(bool ebb) { }
-static bool use_ic(u64 event)
-{
- return false;
-}
 #endif /* CONFIG_PPC32 */
 
 static bool regs_use_siar(struct pt_regs *regs)
@@ -710,14 +706,6 @@ static void pmao_restore_workaround(bool ebb)
  mtspr(SPRN_PMC6, pmcs[5]);
 }
 
-static bool use_ic(u64 event)
-{
- if (cpu_has_feature(CPU_FTR_POWER9_DD1) &&
- (event == 0x200f2 || event == 0x300f2))
- return true;
-
- return false;
-}
 #endif /* CONFIG_PPC64 */
 
 static void perf_event_interrupt(struct pt_regs *regs);
@@ -1042,7 +1030,6 @@ static u64 check_and_compute_delta(u64 prev, u64 val)
 static void power_pmu_read(struct perf_event *event)
 {
  s64 val, delta, prev;
- struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
 
  if (event->hw.state & PERF_HES_STOPPED)
  return;
@@ -1052,13 +1039,6 @@ static void power_pmu_read(struct perf_event *event)
 
  if (is_ebb_event(event)) {
  val = read_pmc(event->hw.idx);
- if (use_ic(event->attr.config)) {
- val = mfspr(SPRN_IC);
- if (val > cpuhw->ic_init)
- val = val - cpuhw->ic_init;
- else
- val = val + (0 - cpuhw->ic_init);
- }
  local64_set(&event->hw.prev_count, val);
  return;
  }
@@ -1072,13 +1052,6 @@ static void power_pmu_read(struct perf_event *event)
  prev = local64_read(&event->hw.prev_count);
  barrier();
  val = read_pmc(event->hw.idx);
- if (use_ic(event->attr.config)) {
- val = mfspr(SPRN_IC);
- if (val > cpuhw->ic_init)
- val = val - cpuhw->ic_init;
- else
- val = val + (0 - cpuhw->ic_init);
- }
  delta = check_and_compute_delta(prev, val);
  if (!delta)
  return;
@@ -1531,13 +1504,6 @@ static int power_pmu_add(struct perf_event *event, int ef_flags)
  event->attr.branch_sample_type);
  }
 
- /*
- * Workaround for POWER9 DD1 to use the Instruction Counter
- * register value for instruction counting
- */
- if (use_ic(event->attr.config))
- cpuhw->ic_init = mfspr(SPRN_IC);
-
  perf_pmu_enable(event->pmu);
  local_irq_restore(flags);
  return ret;
diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
index 2efee3f..177de81 100644
--- a/arch/powerpc/perf/isa207-common.c
+++ b/arch/powerpc/perf/isa207-common.c
@@ -59,7 +59,7 @@ static bool is_event_valid(u64 event)
 {
  u64 valid_mask = EVENT_VALID_MASK;
 
- if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
  valid_mask = p9_EVENT_VALID_MASK;
 
  return !(event & ~valid_mask);
@@ -86,8 +86,6 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
  * Incase of Power9:
  * Marked event: MMCRA[SDAR_MODE] will be set to 0b00 ('No Updates'),
  *               or if group already have any marked events.
- * Non-Marked events (for DD1):
- * MMCRA[SDAR_MODE] will be set to 0b01
  * For rest
  * MMCRA[SDAR_MODE] will be set from event code.
  *      If sdar_mode from event is zero, default to 0b01. Hardware
@@ -96,7 +94,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
  if (cpu_has_feature(CPU_FTR_ARCH_300)) {
  if (is_event_marked(event) || (*mmcra & MMCRA_SAMPLE_ENABLE))
  *mmcra &= MMCRA_SDAR_MODE_NO_UPDATES;
- else if (!cpu_has_feature(CPU_FTR_POWER9_DD1) && p9_SDAR_MODE(event))
+ else if (p9_SDAR_MODE(event))
  *mmcra |=  p9_SDAR_MODE(event) << MMCRA_SDAR_MODE_SHIFT;
  else
  *mmcra |= MMCRA_SDAR_MODE_DCACHE;
@@ -106,7 +104,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
 
 static u64 thresh_cmp_val(u64 value)
 {
- if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
  return value << p9_MMCRA_THR_CMP_SHIFT;
 
  return value << MMCRA_THR_CMP_SHIFT;
@@ -114,7 +112,7 @@ static u64 thresh_cmp_val(u64 value)
 
 static unsigned long combine_from_event(u64 event)
 {
- if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
  return p9_EVENT_COMBINE(event);
 
  return EVENT_COMBINE(event);
@@ -122,7 +120,7 @@ static unsigned long combine_from_event(u64 event)
 
 static unsigned long combine_shift(unsigned long pmc)
 {
- if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
  return p9_MMCR1_COMBINE_SHIFT(pmc);
 
  return MMCR1_COMBINE_SHIFT(pmc);
diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
index 6c737d6..479dec2 100644
--- a/arch/powerpc/perf/isa207-common.h
+++ b/arch/powerpc/perf/isa207-common.h
@@ -222,11 +222,6 @@
  CNST_PMC_VAL(1) | CNST_PMC_VAL(2) | CNST_PMC_VAL(3) | \
  CNST_PMC_VAL(4) | CNST_PMC_VAL(5) | CNST_PMC_VAL(6) | CNST_NC_VAL
 
-/*
- * Lets restrict use of PMC5 for instruction counting.
- */
-#define P9_DD1_TEST_ADDER (ISA207_TEST_ADDER | CNST_PMC_VAL(5))
-
 /* Bits in MMCR1 for PowerISA v2.07 */
 #define MMCR1_UNIT_SHIFT(pmc) (60 - (4 * ((pmc) - 1)))
 #define MMCR1_COMBINE_SHIFT(pmc) (35 - ((pmc) - 1))
diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
index 24b5b5b..3d055c8 100644
--- a/arch/powerpc/perf/power9-pmu.c
+++ b/arch/powerpc/perf/power9-pmu.c
@@ -183,12 +183,6 @@ static struct attribute_group power9_pmu_events_group = {
  .attrs = power9_events_attr,
 };
 
-static const struct attribute_group *power9_isa207_pmu_attr_groups[] = {
- &isa207_pmu_format_group,
- &power9_pmu_events_group,
- NULL,
-};
-
 PMU_FORMAT_ATTR(event, "config:0-51");
 PMU_FORMAT_ATTR(pmcxsel, "config:0-7");
 PMU_FORMAT_ATTR(mark, "config:8");
@@ -231,17 +225,6 @@ static const struct attribute_group *power9_pmu_attr_groups[] = {
  NULL,
 };
 
-static int power9_generic_events_dd1[] = {
- [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
- [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
- [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = PM_CMPLU_STALL,
- [PERF_COUNT_HW_INSTRUCTIONS] = PM_INST_DISP,
- [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = PM_BR_CMPL_ALT,
- [PERF_COUNT_HW_BRANCH_MISSES] = PM_BR_MPRED_CMPL,
- [PERF_COUNT_HW_CACHE_REFERENCES] = PM_LD_REF_L1,
- [PERF_COUNT_HW_CACHE_MISSES] = PM_LD_MISS_L1_FIN,
-};
-
 static int power9_generic_events[] = {
  [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
  [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
@@ -403,25 +386,6 @@ static int power9_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
 
 #undef C
 
-static struct power_pmu power9_isa207_pmu = {
- .name = "POWER9",
- .n_counter = MAX_PMU_COUNTERS,
- .add_fields = ISA207_ADD_FIELDS,
- .test_adder = P9_DD1_TEST_ADDER,
- .compute_mmcr = isa207_compute_mmcr,
- .config_bhrb = power9_config_bhrb,
- .bhrb_filter_map = power9_bhrb_filter_map,
- .get_constraint = isa207_get_constraint,
- .get_alternatives = power9_get_alternatives,
- .disable_pmc = isa207_disable_pmc,
- .flags = PPMU_NO_SIAR | PPMU_ARCH_207S,
- .n_generic = ARRAY_SIZE(power9_generic_events_dd1),
- .generic_events = power9_generic_events_dd1,
- .cache_events = &power9_cache_events,
- .attr_groups = power9_isa207_pmu_attr_groups,
- .bhrb_nr = 32,
-};
-
 static struct power_pmu power9_pmu = {
  .name = "POWER9",
  .n_counter = MAX_PMU_COUNTERS,
@@ -452,23 +416,7 @@ static int __init init_power9_pmu(void)
     strcmp(cur_cpu_spec->oprofile_cpu_type, "ppc64/power9"))
  return -ENODEV;
 
- if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- /*
- * Since PM_INST_CMPL may not provide right counts in all
- * sampling scenarios in power9 DD1, instead use PM_INST_DISP.
- */
- EVENT_VAR(PM_INST_CMPL, _g).id = PM_INST_DISP;
- /*
- * Power9 DD1 should use PM_BR_CMPL_ALT event code for
- * "branches" to provide correct counter value.
- */
- EVENT_VAR(PM_BR_CMPL, _g).id = PM_BR_CMPL_ALT;
- EVENT_VAR(PM_BR_CMPL, _c).id = PM_BR_CMPL_ALT;
- rc = register_power_pmu(&power9_isa207_pmu);
- } else {
- rc = register_power_pmu(&power9_pmu);
- }
-
+ rc = register_power_pmu(&power9_pmu);
  if (rc)
  return rc;
 
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 3776a58..113d647 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -177,11 +177,6 @@ static void pnv_alloc_idle_core_states(void)
  paca[cpu].core_idle_state_ptr = core_idle_state;
  paca[cpu].thread_idle_state = PNV_THREAD_RUNNING;
  paca[cpu].thread_mask = 1 << j;
- if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
- continue;
- paca[cpu].thread_sibling_pacas =
- kmalloc_node(paca_ptr_array_size,
-     GFP_KERNEL, node);
  }
  }
 
@@ -813,28 +808,6 @@ static int __init pnv_init_idle_states(void)
 
  pnv_alloc_idle_core_states();
 
- /*
- * For each CPU, record its PACA address in each of it's
- * sibling thread's PACA at the slot corresponding to this
- * CPU's index in the core.
- */
- if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- int cpu;
-
- pr_info("powernv: idle: Saving PACA pointers of all CPUs in their thread sibling PACA\n");
- for_each_present_cpu(cpu) {
- int base_cpu = cpu_first_thread_sibling(cpu);
- int idx = cpu_thread_in_core(cpu);
- int i;
-
- for (i = 0; i < threads_per_core; i++) {
- int j = base_cpu + i;
-
- paca[j].thread_sibling_pacas[idx] = &paca[cpu];
- }
- }
- }
-
  if (supported_cpuidle_states & OPAL_PM_NAP_ENABLED)
  ppc_md.power_save = power7_idle;
 
diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
index 9664c84..f7dec55 100644
--- a/arch/powerpc/platforms/powernv/smp.c
+++ b/arch/powerpc/platforms/powernv/smp.c
@@ -283,23 +283,6 @@ static void pnv_cause_ipi(int cpu)
  ic_cause_ipi(cpu);
 }
 
-static void pnv_p9_dd1_cause_ipi(int cpu)
-{
- int this_cpu = get_cpu();
-
- /*
- * POWER9 DD1 has a global addressed msgsnd, but for now we restrict
- * IPIs to same core, because it requires additional synchronization
- * for inter-core doorbells which we do not implement.
- */
- if (cpumask_test_cpu(cpu, cpu_sibling_mask(this_cpu)))
- doorbell_global_ipi(cpu);
- else
- ic_cause_ipi(cpu);
-
- put_cpu();
-}
-
 static void __init pnv_smp_probe(void)
 {
  if (xive_enabled())
@@ -311,14 +294,10 @@ static void __init pnv_smp_probe(void)
  ic_cause_ipi = smp_ops->cause_ipi;
  WARN_ON(!ic_cause_ipi);
 
- if (cpu_has_feature(CPU_FTR_ARCH_300)) {
- if (cpu_has_feature(CPU_FTR_POWER9_DD1))
- smp_ops->cause_ipi = pnv_p9_dd1_cause_ipi;
- else
- smp_ops->cause_ipi = doorbell_global_ipi;
- } else {
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ smp_ops->cause_ipi = doorbell_global_ipi;
+ else
  smp_ops->cause_ipi = pnv_cause_ipi;
- }
  }
 }
 
diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
index a3b8d7d..82cc999 100644
--- a/arch/powerpc/sysdev/xive/common.c
+++ b/arch/powerpc/sysdev/xive/common.c
@@ -319,7 +319,7 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
  * The FW told us to call it. This happens for some
  * interrupt sources that need additional HW whacking
  * beyond the ESB manipulation. For example LPC interrupts
- * on P9 DD1.0 need a latch to be clared in the LPC bridge
+ * on P9 DD1.0 needed a latch to be clared in the LPC bridge
  * itself. The Firmware will take care of it.
  */
  if (WARN_ON_ONCE(!xive_ops->eoi))
@@ -337,9 +337,9 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
  * This allows us to then do a re-trigger if Q was set
  * rather than synthesizing an interrupt in software
  *
- * For LSIs, using the HW EOI cycle works around a problem
- * on P9 DD1 PHBs where the other ESB accesses don't work
- * properly.
+ * For LSIs the HW EOI cycle is used rather than PQ bits,
+ * as they are automatically re-triggred in HW when still
+ * pending.
  */
  if (xd->flags & XIVE_IRQ_FLAG_LSI)
  xive_esb_read(xd, XIVE_ESB_LOAD_EOI);
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 8a57ff1..c6156b6 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -865,14 +865,6 @@ static inline bool cxl_is_power9(void)
  return false;
 }
 
-static inline bool cxl_is_power9_dd1(void)
-{
- if ((pvr_version_is(PVR_POWER9)) &&
-    cpu_has_feature(CPU_FTR_POWER9_DD1))
- return true;
- return false;
-}
-
 ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
  loff_t off, size_t count);
 
diff --git a/drivers/misc/cxl/cxllib.c b/drivers/misc/cxl/cxllib.c
index 0bc7c31..5a3f912 100644
--- a/drivers/misc/cxl/cxllib.c
+++ b/drivers/misc/cxl/cxllib.c
@@ -102,10 +102,6 @@ int cxllib_get_xsl_config(struct pci_dev *dev, struct cxllib_xsl_config *cfg)
  rc = cxl_get_xsl9_dsnctl(dev, capp_unit_id, &cfg->dsnctl);
  if (rc)
  return rc;
- if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
- /* workaround for DD1 - nbwind = capiind */
- cfg->dsnctl |= ((u64)0x02 << (63-47));
- }
 
  cfg->version  = CXL_XSL_CONFIG_CURRENT_VERSION;
  cfg->log_bar_size = CXL_CAPI_WINDOW_LOG_SIZE;
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 429d6de..2af0d4c 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -465,23 +465,21 @@ int cxl_get_xsl9_dsnctl(struct pci_dev *dev, u64 capp_unit_id, u64 *reg)
  /* nMMU_ID Defaults to: b’000001001’*/
  xsl_dsnctl |= ((u64)0x09 << (63-28));
 
- if (!(cxl_is_power9_dd1())) {
- /*
- * Used to identify CAPI packets which should be sorted into
- * the Non-Blocking queues by the PHB. This field should match
- * the PHB PBL_NBW_CMPM register
- * nbwind=0x03, bits [57:58], must include capi indicator.
- * Not supported on P9 DD1.
- */
- xsl_dsnctl |= (nbwind << (63-55));
+ /*
+ * Used to identify CAPI packets which should be sorted into
+ * the Non-Blocking queues by the PHB. This field should match
+ * the PHB PBL_NBW_CMPM register
+ * nbwind=0x03, bits [57:58], must include capi indicator.
+ * Not supported on P9 DD1.
+ */
+ xsl_dsnctl |= (nbwind << (63-55));
 
- /*
- * Upper 16b address bits of ASB_Notify messages sent to the
- * system. Need to match the PHB’s ASN Compare/Mask Register.
- * Not supported on P9 DD1.
- */
- xsl_dsnctl |= asnind;
- }
+ /*
+ * Upper 16b address bits of ASB_Notify messages sent to the
+ * system. Need to match the PHB’s ASN Compare/Mask Register.
+ * Not supported on P9 DD1.
+ */
+ xsl_dsnctl |= asnind;
 
  *reg = xsl_dsnctl;
  return 0;
@@ -539,15 +537,8 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter,
  /* Snoop machines */
  cxl_p1_write(adapter, CXL_PSL9_APCDEDALLOC, 0x800F000200000000ULL);
 
- if (cxl_is_power9_dd1()) {
- /* Disabling deadlock counter CAR */
- cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0020000000000001ULL);
- /* Enable NORST */
- cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x8000000000000000ULL);
- } else {
- /* Enable NORST and DD2 features */
- cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
- }
+ /* Enable NORST and DD2 features */
+ cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
 
  /*
  * Check if PSL has data-cache. We need to flush adapter datacache
--
2.7.4



[SRU][Bionic][Cosmic][PATCH 2/3] powerpc/mm/books3s: Add new pte bit to mark pte temporarily invalid.

Joseph Salisbury-3
In reply to this post by Joseph Salisbury-3
From: "Aneesh Kumar K.V" <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1792195

When splitting a huge pmd pte, we need to mark the pmd entry invalid. We
can do that by clearing the _PAGE_PRESENT bit, but then the entry will be
taken for a swap pte. In order to differentiate between the two, use a
software pte bit when invalidating.

For regular ptes, due to bd5050e38aec ("powerpc/mm/radix: Change pte relax
sequence to handle nest MMU hang"), we need to mark the pte entry invalid when
relaxing access permissions. Instead of marking it pte_none, which could cause
page table walk routines to skip the entry, invalidate it but keep it marked
present.

Signed-off-by: Aneesh Kumar K.V <[hidden email]>
Signed-off-by: Michael Ellerman <[hidden email]>
(cherry picked from commit bd0dbb73e01306a1060e56f81e5fe287be936477)
Signed-off-by: Joseph Salisbury <[hidden email]>
---
 arch/powerpc/include/asm/book3s/64/pgtable.h | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 674990c..8dd7fd6 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -42,6 +42,16 @@
 
 #define _PAGE_PTE 0x4000000000000000UL /* distinguishes PTEs from pointers */
 #define _PAGE_PRESENT 0x8000000000000000UL /* pte contains a translation */
+/*
+ * We need to mark a pmd pte invalid while splitting. We can do that by clearing
+ * the _PAGE_PRESENT bit. But then that will be taken as a swap pte. In order to
+ * differentiate between two use a SW field when invalidating.
+ *
+ * We do that temporary invalidate for regular pte entry in ptep_set_access_flags
+ *
+ * This is used only when _PAGE_PRESENT is cleared.
+ */
+#define _PAGE_INVALID _RPAGE_SW0
 
 /*
  * Top and bottom bits of RPN which can be used by hash
@@ -543,7 +553,13 @@ static inline pte_t pte_clear_savedwrite(pte_t pte)
 
 static inline int pte_present(pte_t pte)
 {
- return !!(pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT));
+ /*
+ * A pte is considerent present if _PAGE_PRESENT is set.
+ * We also need to consider the pte present which is marked
+ * invalid during ptep_set_access_flags. Hence we look for _PAGE_INVALID
+ * if we find _PAGE_PRESENT cleared.
+ */
+ return !!(pte_raw(pte) & cpu_to_be64(_PAGE_PRESENT | _PAGE_INVALID));
 }
 /*
  * Conversion functions: convert a page and protection to a page entry,
--
2.7.4



[SRU][Bionic][Cosmic][PATCH 3/3] powerpc/mm/radix: Only need the Nest MMU workaround for R -> RW transition

Joseph Salisbury-3
In reply to this post by Joseph Salisbury-3
From: "Aneesh Kumar K.V" <[hidden email]>

BugLink: https://bugs.launchpad.net/bugs/1792195

The Nest MMU workaround is only needed for RW upgrades. Avoid doing
that for other PTE updates.

We also avoid clearing the PTE while marking it invalid, because other
page table walkers would then find the PTE none, which can result in
unexpected behaviour. Instead we clear _PAGE_PRESENT and set the software
PTE bit _PAGE_INVALID. pte_present() is already updated to check for both
bits. This makes sure page table walkers find the PTE present, and things
like pte_pfn(pte) return the right value.

Based on an original patch from Benjamin Herrenschmidt <[hidden email]>

Signed-off-by: Aneesh Kumar K.V <[hidden email]>
Reviewed-by: Nicholas Piggin <[hidden email]>
Signed-off-by: Michael Ellerman <[hidden email]>
(cherry picked from commit f08d08f3db55452d31ba4a37c702da6245876b96)
Signed-off-by: Joseph Salisbury <[hidden email]>
---
 arch/powerpc/mm/pgtable-radix.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 704362d..f767f7db 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -970,20 +970,22 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
  struct mm_struct *mm = vma->vm_mm;
  unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
       _PAGE_RW | _PAGE_EXEC);
+
+ unsigned long change = pte_val(entry) ^ pte_val(*ptep);
  /*
  * To avoid NMMU hang while relaxing access, we need mark
  * the pte invalid in between.
  */
- if (atomic_read(&mm->context.copros) > 0) {
+ if ((change & _PAGE_RW) && atomic_read(&mm->context.copros) > 0) {
  unsigned long old_pte, new_pte;
 
- old_pte = __radix_pte_update(ptep, ~0, 0);
+ old_pte = __radix_pte_update(ptep, _PAGE_PRESENT, _PAGE_INVALID);
  /*
  * new value of pte
  */
  new_pte = old_pte | set;
  radix__flush_tlb_page_psize(mm, address, psize);
- __radix_pte_update(ptep, 0, new_pte);
+ __radix_pte_update(ptep, _PAGE_INVALID, new_pte);
  } else {
  __radix_pte_update(ptep, 0, set);
  radix__flush_tlb_page_psize(mm, address, psize);
--
2.7.4



Re: [SRU][Bionic][Cosmic][PATCH 1/3] powerpc/64s: Remove POWER9 DD1 support

Stefan Bader-2
In reply to this post by Joseph Salisbury-3
On 11.10.2018 18:22, Joseph Salisbury wrote:
> From: Nicholas Piggin <[hidden email]>
>
> BugLink: https://bugs.launchpad.net/bugs/1792195
>
> POWER9 DD1 was never a product. It is no longer supported by upstream
> firmware, and it is not effectively supported in Linux due to lack of
> testing.

I am not really happy to see such a large portion of code getting ripped out
*after* release. One never knows whether something still made use of some of
those parts...

Is this part really strictly required for Bionic?

-Stefan

> [...]
>  }
>  static inline void radix__pmdp_huge_split_prepare(struct vm_area_struct *vma,
> @@ -279,18 +258,14 @@ static inline unsigned long radix__get_tree_size(void)
>   unsigned long rts_field;
>   /*
>   * We support 52 bits, hence:
> - *  DD1    52-28 = 24, 0b11000
> - *  Others 52-31 = 21, 0b10101
> + * bits 52 - 31 = 21, 0b10101
>   * RTS encoding details
>   * bits 0 - 3 of rts -> bits 6 - 8 unsigned long
>   * bits 4 - 5 of rts -> bits 62 - 63 of unsigned long
>   */
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - rts_field = (0x3UL << 61);
> - else {
> - rts_field = (0x5UL << 5); /* 6 - 8 bits */
> - rts_field |= (0x2UL << 61);
> - }
> + rts_field = (0x5UL << 5); /* 6 - 8 bits */
> + rts_field |= (0x2UL << 61);
> +
>   return rts_field;
>  }
>  
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
> index 6a9e680..a0fe684 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
> @@ -45,6 +45,4 @@ extern void radix__flush_tlb_lpid_va(unsigned long lpid, unsigned long gpa,
>       unsigned long page_size);
>  extern void radix__flush_tlb_lpid(unsigned long lpid);
>  extern void radix__flush_tlb_all(void);
> -extern void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
> - unsigned long address);
>  #endif
> diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
> index 82ca727..aab3b68 100644
> --- a/arch/powerpc/include/asm/cputable.h
> +++ b/arch/powerpc/include/asm/cputable.h
> @@ -222,7 +222,6 @@ enum {
>  #define CPU_FTR_DAWR LONG_ASM_CONST(0x0000008000000000)
>  #define CPU_FTR_DABRX LONG_ASM_CONST(0x0000010000000000)
>  #define CPU_FTR_PMAO_BUG LONG_ASM_CONST(0x0000020000000000)
> -#define CPU_FTR_POWER9_DD1 LONG_ASM_CONST(0x0000040000000000)
>  #define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x0000080000000000)
>  #define CPU_FTR_P9_TM_HV_ASSIST LONG_ASM_CONST(0x0000100000000000)
>  #define CPU_FTR_P9_TM_XER_SO_BUG LONG_ASM_CONST(0x0000200000000000)
> @@ -480,8 +479,6 @@ enum {
>      CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
>      CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | \
>      CPU_FTR_P9_TLBIE_BUG | CPU_FTR_P9_TIDR)
> -#define CPU_FTRS_POWER9_DD1 ((CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1) & \
> -     (~CPU_FTR_SAO))
>  #define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
>  #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
>  #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_P9_TM_HV_ASSIST | \
> @@ -505,8 +502,7 @@ enum {
>       CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
>       CPU_FTRS_POWER8 | CPU_FTRS_POWER8_DD1 | CPU_FTRS_CELL | \
>       CPU_FTRS_PA6T | CPU_FTR_VSX | CPU_FTRS_POWER9 | \
> -     CPU_FTRS_POWER9_DD1 | CPU_FTRS_POWER9_DD2_1 | \
> -     CPU_FTRS_POWER9_DD2_2)
> +     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2)
>  #endif
>  #else
>  enum {
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index b3ec196..da6a25f 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -184,11 +184,6 @@ struct paca_struct {
>   u8 subcore_sibling_mask;
>   /* Flag to request this thread not to stop */
>   atomic_t dont_stop;
> - /*
> - * Pointer to an array which contains pointer
> - * to the sibling threads' paca.
> - */
> - struct paca_struct **thread_sibling_pacas;
>   /* The PSSCR value that the kernel requested before going to stop */
>   u64 requested_psscr;
>  
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index a65c54c..7e1cbc8 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -754,7 +754,6 @@ int main(void)
>   OFFSET(PACA_THREAD_IDLE_STATE, paca_struct, thread_idle_state);
>   OFFSET(PACA_THREAD_MASK, paca_struct, thread_mask);
>   OFFSET(PACA_SUBCORE_SIBLING_MASK, paca_struct, subcore_sibling_mask);
> - OFFSET(PACA_SIBLING_PACA_PTRS, paca_struct, thread_sibling_pacas);
>   OFFSET(PACA_REQ_PSSCR, paca_struct, requested_psscr);
>   OFFSET(PACA_DONT_STOP, paca_struct, dont_stop);
>  #define STOP_SPR(x, f) OFFSET(x, paca_struct, stop_sprs.f)
> diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
> index bc2b461..13acd1c 100644
> --- a/arch/powerpc/kernel/cputable.c
> +++ b/arch/powerpc/kernel/cputable.c
> @@ -527,26 +527,6 @@ static struct cpu_spec __initdata cpu_specs[] = {
>   .machine_check_early = __machine_check_early_realmode_p8,
>   .platform = "power8",
>   },
> - { /* Power9 DD1*/
> - .pvr_mask = 0xffffff00,
> - .pvr_value = 0x004e0100,
> - .cpu_name = "POWER9 (raw)",
> - .cpu_features = CPU_FTRS_POWER9_DD1,
> - .cpu_user_features = COMMON_USER_POWER9,
> - .cpu_user_features2 = COMMON_USER2_POWER9,
> - .mmu_features = MMU_FTRS_POWER9,
> - .icache_bsize = 128,
> - .dcache_bsize = 128,
> - .num_pmcs = 6,
> - .pmc_type = PPC_PMC_IBM,
> - .oprofile_cpu_type = "ppc64/power9",
> - .oprofile_type = PPC_OPROFILE_INVALID,
> - .cpu_setup = __setup_cpu_power9,
> - .cpu_restore = __restore_cpu_power9,
> - .flush_tlb = __flush_tlb_power9,
> - .machine_check_early = __machine_check_early_realmode_p9,
> - .platform = "power9",
> - },
>   { /* Power9 DD2.0 */
>   .pvr_mask = 0xffffefff,
>   .pvr_value = 0x004e0200,
> diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
> index fa7f063..350ea04 100644
> --- a/arch/powerpc/kernel/dt_cpu_ftrs.c
> +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
> @@ -741,13 +741,16 @@ static __init void cpufeatures_cpu_quirks(void)
>   /*
>   * Not all quirks can be derived from the cpufeatures device tree.
>   */
> - if ((version & 0xffffff00) == 0x004e0100)
> - cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD1;
> + if ((version & 0xffffefff) == 0x004e0200)
> + ; /* DD2.0 has no feature flag */
>   else if ((version & 0xffffefff) == 0x004e0201)
>   cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
> - else if ((version & 0xffffefff) == 0x004e0202)
> - cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST |
> - CPU_FTR_P9_TM_XER_SO_BUG;
> + else if ((version & 0xffffefff) == 0x004e0202) {
> + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST;
> + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG;
> + cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
> + } else /* DD2.1 and up have DD2_1 */
> + cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
>  
>   if ((version & 0xffff0000) == 0x004e0000) {
>   cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_BUG;
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 59f5cfa..724bd35 100644
> --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -276,9 +276,7 @@ BEGIN_FTR_SECTION
>   *
>   * This interrupt can wake directly from idle. If that is the case,
>   * the machine check is handled then the idle wakeup code is called
> - * to restore state. In that case, the POWER9 DD1 idle PACA workaround
> - * is not applied in the early machine check code, which will cause
> - * bugs.
> + * to restore state.
>   */
>   mr r11,r1 /* Save r1 */
>   lhz r10,PACA_IN_MCE(r13)
> diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
> index f3ac31c..49439fc 100644
> --- a/arch/powerpc/kernel/idle_book3s.S
> +++ b/arch/powerpc/kernel/idle_book3s.S
> @@ -455,43 +455,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
>   blr /* return 0 for wakeup cause / SRR1 value */
>  
>  /*
> - * On waking up from stop 0,1,2 with ESL=1 on POWER9 DD1,
> - * HSPRG0 will be set to the HSPRG0 value of one of the
> - * threads in this core. Thus the value we have in r13
> - * may not be this thread's paca pointer.
> - *
> - * Fortunately, the TIR remains invariant. Since this thread's
> - * paca pointer is recorded in all its sibling's paca, we can
> - * correctly recover this thread's paca pointer if we
> - * know the index of this thread in the core.
> - *
> - * This index can be obtained from the TIR.
> - *
> - * i.e, thread's position in the core = TIR.
> - * If this value is i, then this thread's paca is
> - * paca->thread_sibling_pacas[i].
> - */
> -power9_dd1_recover_paca:
> - mfspr r4, SPRN_TIR
> - /*
> - * Since each entry in thread_sibling_pacas is 8 bytes
> - * we need to left-shift by 3 bits. Thus r4 = i * 8
> - */
> - sldi r4, r4, 3
> - /* Get &paca->thread_sibling_pacas[0] in r5 */
> - ld r5, PACA_SIBLING_PACA_PTRS(r13)
> - /* Load paca->thread_sibling_pacas[i] into r13 */
> - ldx r13, r4, r5
> - SET_PACA(r13)
> - /*
> - * Indicate that we have lost NVGPR state
> - * which needs to be restored from the stack.
> - */
> - li r3, 1
> - stb r3,PACA_NAPSTATELOST(r13)
> - blr
> -
> -/*
>   * Called from machine check handler for powersave wakeups.
>   * Low level machine check processing has already been done. Now just
>   * go through the wake up path to get everything in order.
> @@ -525,9 +488,6 @@ pnv_powersave_wakeup:
>   ld r2, PACATOC(r13)
>  
>  BEGIN_FTR_SECTION
> -BEGIN_FTR_SECTION_NESTED(70)
> - bl power9_dd1_recover_paca
> -END_FTR_SECTION_NESTED_IFSET(CPU_FTR_POWER9_DD1, 70)
>   bl pnv_restore_hyp_resource_arch300
>  FTR_SECTION_ELSE
>   bl pnv_restore_hyp_resource_arch207
> @@ -587,22 +547,12 @@ END_FTR_SECTION_IFCLR(CPU_FTR_POWER9_DD2_1)
>   LOAD_REG_ADDRBASE(r5,pnv_first_deep_stop_state)
>   ld r4,ADDROFF(pnv_first_deep_stop_state)(r5)
>  
> -BEGIN_FTR_SECTION_NESTED(71)
> - /*
> - * Assume that we are waking up from the state
> - * same as the Requested Level (RL) in the PSSCR
> - * which are Bits 60-63
> - */
> - ld r5,PACA_REQ_PSSCR(r13)
> - rldicl  r5,r5,0,60
> -FTR_SECTION_ELSE_NESTED(71)
>   /*
>   * 0-3 bits correspond to Power-Saving Level Status
>   * which indicates the idle state we are waking up from
>   */
>   mfspr r5, SPRN_PSSCR
>   rldicl  r5,r5,4,60
> -ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_POWER9_DD1, 71)
>   li r0, 0 /* clear requested_psscr to say we're awake */
>   std r0, PACA_REQ_PSSCR(r13)
>   cmpd cr4,r5,r4
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 83478a9..e73a80d 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1247,17 +1247,9 @@ struct task_struct *__switch_to(struct task_struct *prev,
>   * mappings. If the new process has the foreign real address
>   * mappings, we must issue a cp_abort to clear any state and
>   * prevent snooping, corruption or a covert channel.
> - *
> - * DD1 allows paste into normal system memory so we do an
> - * unpaired copy, rather than cp_abort, to clear the buffer,
> - * since cp_abort is quite expensive.
>   */
> - if (current_thread_info()->task->thread.used_vas) {
> + if (current_thread_info()->task->thread.used_vas)
>   asm volatile(PPC_CP_ABORT);
> - } else if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - asm volatile(PPC_COPY(%0, %1)
> - : : "r"(dummy_copy_buffer), "r"(0));
> - }
>   }
>  #endif /* CONFIG_PPC_BOOK3S_64 */
>  
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> index 5d9bafe..dd8980f 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
> @@ -66,10 +66,7 @@ int kvmppc_mmu_radix_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
>   bits = root & RPDS_MASK;
>   root = root & RPDB_MASK;
>  
> - /* P9 DD1 interprets RTS (radix tree size) differently */
>   offset = rts + 31;
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - offset -= 3;
>  
>   /* current implementations only support 52-bit space */
>   if (offset != 52)
> @@ -167,17 +164,7 @@ unsigned long kvmppc_radix_update_pte(struct kvm *kvm, pte_t *ptep,
>        unsigned long clr, unsigned long set,
>        unsigned long addr, unsigned int shift)
>  {
> - unsigned long old = 0;
> -
> - if (!(clr & _PAGE_PRESENT) && cpu_has_feature(CPU_FTR_POWER9_DD1) &&
> -    pte_present(*ptep)) {
> - /* have to invalidate it first */
> - old = __radix_pte_update(ptep, _PAGE_PRESENT, 0);
> - kvmppc_radix_tlbie_page(kvm, addr, shift);
> - set |= _PAGE_PRESENT;
> - old &= _PAGE_PRESENT;
> - }
> - return __radix_pte_update(ptep, clr, set) | old;
> + return __radix_pte_update(ptep, clr, set);
>  }
>  
>  void kvmppc_radix_set_pte_at(struct kvm *kvm, unsigned long addr,
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index dc9eb6b..51278f8 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -1662,14 +1662,6 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
>   r = set_vpa(vcpu, &vcpu->arch.dtl, addr, len);
>   break;
>   case KVM_REG_PPC_TB_OFFSET:
> - /*
> - * POWER9 DD1 has an erratum where writing TBU40 causes
> - * the timebase to lose ticks.  So we don't let the
> - * timebase offset be changed on P9 DD1.  (It is
> - * initialized to zero.)
> - */
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - break;
>   /* round up to multiple of 2^24 */
>   vcpu->arch.vcore->tb_offset =
>   ALIGN(set_reg_val(id, *val), 1UL << 24);
> @@ -1987,8 +1979,6 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
>   /*
>   * Set the default HFSCR for the guest from the host value.
>   * This value is only used on POWER9.
> - * On POWER9 DD1, TM doesn't work, so we make sure to
> - * prevent the guest from using it.
>   * On POWER9, we want to virtualize the doorbell facility, so we
>   * turn off the HFSCR bit, which causes those instructions to trap.
>   */
> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> index 293a659..1c35836 100644
> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
> @@ -907,9 +907,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
>   mtspr SPRN_PID, r7
>   mtspr SPRN_WORT, r8
>  BEGIN_FTR_SECTION
> - PPC_INVALIDATE_ERAT
> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
> -BEGIN_FTR_SECTION
>   /* POWER8-only registers */
>   ld r5, VCPU_TCSCR(r4)
>   ld r6, VCPU_ACOP(r4)
> @@ -1849,7 +1846,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
>   ld r5, VCPU_KVM(r9)
>   lbz r0, KVM_RADIX(r5)
>   cmpwi cr2, r0, 0
> - beq cr2, 4f
> + beq cr2, 2f
>  
>   /* Radix: Handle the case where the guest used an illegal PID */
>   LOAD_REG_ADDR(r4, mmu_base_pid)
> @@ -1881,11 +1878,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
>   bdnz 1b
>   ptesync
>  
> -2: /* Flush the ERAT on radix P9 DD1 guest exit */
> -BEGIN_FTR_SECTION
> - PPC_INVALIDATE_ERAT
> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
> -4:
> +2:
>  #endif /* CONFIG_PPC_RADIX_MMU */
>  
>   /*
> @@ -3432,11 +3425,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
>   mtspr SPRN_CIABR, r0
>   mtspr SPRN_DAWRX, r0
>  
> - /* Flush the ERAT on radix P9 DD1 guest exit */
> -BEGIN_FTR_SECTION
> - PPC_INVALIDATE_ERAT
> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
> -
>  BEGIN_MMU_FTR_SECTION
>   b 4f
>  END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
> diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c
> index c7a5dea..3191961 100644
> --- a/arch/powerpc/kvm/book3s_xive_template.c
> +++ b/arch/powerpc/kvm/book3s_xive_template.c
> @@ -22,18 +22,6 @@ static void GLUE(X_PFX,ack_pending)(struct kvmppc_xive_vcpu *xc)
>   */
>   eieio();
>  
> - /*
> - * DD1 bug workaround: If PIPR is less favored than CPPR
> - * ignore the interrupt or we might incorrectly lose an IPB
> - * bit.
> - */
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - __be64 qw1 = __x_readq(__x_tima + TM_QW1_OS);
> - u8 pipr = be64_to_cpu(qw1) & 0xff;
> - if (pipr >= xc->hw_cppr)
> - return;
> - }
> -
>   /* Perform the acknowledge OS to register cycle. */
>   ack = be16_to_cpu(__x_readw(__x_tima + TM_SPC_ACK_OS_REG));
>  
> @@ -86,8 +74,15 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
>   /* If the XIVE supports the new "store EOI facility, use it */
>   if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)
>   __x_writeq(0, __x_eoi_page(xd) + XIVE_ESB_STORE_EOI);
> - else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW) {
> + else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW)
>   opal_int_eoi(hw_irq);
> + else if (xd->flags & XIVE_IRQ_FLAG_LSI) {
> + /*
> + * For LSIs the HW EOI cycle is used rather than PQ bits,
> + * as they are automatically re-triggred in HW when still
> + * pending.
> + */
> + __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
>   } else {
>   uint64_t eoi_val;
>  
> @@ -99,20 +94,12 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
>   *
>   * This allows us to then do a re-trigger if Q was set
>   * rather than synthetizing an interrupt in software
> - *
> - * For LSIs, using the HW EOI cycle works around a problem
> - * on P9 DD1 PHBs where the other ESB accesses don't work
> - * properly.
>   */
> - if (xd->flags & XIVE_IRQ_FLAG_LSI)
> - __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
> - else {
> - eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
> -
> - /* Re-trigger if needed */
> - if ((eoi_val & 1) && __x_trig_page(xd))
> - __x_writeq(0, __x_trig_page(xd));
> - }
> + eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
> +
> + /* Re-trigger if needed */
> + if ((eoi_val & 1) && __x_trig_page(xd))
> + __x_writeq(0, __x_trig_page(xd));
>   }
>  }
>  
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index db84680..06574b4 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -802,31 +802,6 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
>  }
>  #endif /* CONFIG_MEMORY_HOTPLUG */
>  
> -static void update_hid_for_hash(void)
> -{
> - unsigned long hid0;
> - unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
> -
> - asm volatile("ptesync": : :"memory");
> - /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
> -     : : "r"(rb), "i"(0), "i"(0), "i"(2), "r"(0) : "memory");
> - asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
> - trace_tlbie(0, 0, rb, 0, 2, 0, 0);
> -
> - /*
> - * now switch the HID
> - */
> - hid0  = mfspr(SPRN_HID0);
> - hid0 &= ~HID0_POWER9_RADIX;
> - mtspr(SPRN_HID0, hid0);
> - asm volatile("isync": : :"memory");
> -
> - /* Wait for it to happen */
> - while ((mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
> - cpu_relax();
> -}
> -
>  static void __init hash_init_partition_table(phys_addr_t hash_table,
>       unsigned long htab_size)
>  {
> @@ -839,8 +814,6 @@ static void __init hash_init_partition_table(phys_addr_t hash_table,
>   htab_size =  __ilog2(htab_size) - 18;
>   mmu_partition_table_set_entry(0, hash_table | htab_size, 0);
>   pr_info("Partition table %p\n", partition_tb);
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - update_hid_for_hash();
>  }
>  
>  static void __init htab_initialize(void)
> @@ -1063,9 +1036,6 @@ void hash__early_init_mmu_secondary(void)
>   /* Initialize hash table for that CPU */
>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>  
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - update_hid_for_hash();
> -
>   if (!cpu_has_feature(CPU_FTR_ARCH_300))
>   mtspr(SPRN_SDR1, _SDR1);
>   else
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index 79e1378..6f7b831 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -609,15 +609,12 @@ static int __init add_huge_page_size(unsigned long long size)
>   * firmware we only add hugetlb support for page sizes that can be
>   * supported by linux page table layout.
>   * For now we have
> - * Radix: 2M
> + * Radix: 2M and 1G
>   * Hash: 16M and 16G
>   */
>   if (radix_enabled()) {
> - if (mmu_psize != MMU_PAGE_2M) {
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
> -    (mmu_psize != MMU_PAGE_1G))
> - return -EINVAL;
> - }
> + if (mmu_psize != MMU_PAGE_2M && mmu_psize != MMU_PAGE_1G)
> + return -EINVAL;
>   } else {
>   if (mmu_psize != MMU_PAGE_16M && mmu_psize != MMU_PAGE_16G)
>   return -EINVAL;
> diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
> index 5066276..208f687 100644
> --- a/arch/powerpc/mm/mmu_context_book3s64.c
> +++ b/arch/powerpc/mm/mmu_context_book3s64.c
> @@ -250,15 +250,7 @@ void arch_exit_mmap(struct mm_struct *mm)
>  #ifdef CONFIG_PPC_RADIX_MMU
>  void radix__switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
>  {
> -
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - isync();
> - mtspr(SPRN_PID, next->context.id);
> - isync();
> - asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
> - } else {
> - mtspr(SPRN_PID, next->context.id);
> - isync();
> - }
> + mtspr(SPRN_PID, next->context.id);
> + isync();
>  }
>  #endif
> diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
> index a778560..704362d 100644
> --- a/arch/powerpc/mm/pgtable-radix.c
> +++ b/arch/powerpc/mm/pgtable-radix.c
> @@ -171,16 +171,6 @@ void radix__mark_rodata_ro(void)
>  {
>   unsigned long start, end;
>  
> - /*
> - * mark_rodata_ro() will mark itself as !writable at some point.
> - * Due to DD1 workaround in radix__pte_update(), we'll end up with
> - * an invalid pte and the system will crash quite severly.
> - */
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - pr_warn("Warning: Unable to mark rodata read only on P9 DD1\n");
> - return;
> - }
> -
>   start = (unsigned long)_stext;
>   end = (unsigned long)__init_begin;
>  
> @@ -470,35 +460,6 @@ void __init radix__early_init_devtree(void)
>   return;
>  }
>  
> -static void update_hid_for_radix(void)
> -{
> - unsigned long hid0;
> - unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
> -
> - asm volatile("ptesync": : :"memory");
> - /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
> -     : : "r"(rb), "i"(1), "i"(0), "i"(2), "r"(0) : "memory");
> - /* prs = 1, ric = 2, rs = 0, r = 1 is = 3 */
> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
> -     : : "r"(rb), "i"(1), "i"(1), "i"(2), "r"(0) : "memory");
> - asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
> - trace_tlbie(0, 0, rb, 0, 2, 0, 1);
> - trace_tlbie(0, 0, rb, 0, 2, 1, 1);
> -
> - /*
> - * now switch the HID
> - */
> - hid0  = mfspr(SPRN_HID0);
> - hid0 |= HID0_POWER9_RADIX;
> - mtspr(SPRN_HID0, hid0);
> - asm volatile("isync": : :"memory");
> -
> - /* Wait for it to happen */
> - while (!(mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
> - cpu_relax();
> -}
> -
>  static void radix_init_amor(void)
>  {
>   /*
> @@ -513,22 +474,12 @@ static void radix_init_amor(void)
>  
>  static void radix_init_iamr(void)
>  {
> - unsigned long iamr;
> -
> - /*
> - * The IAMR should set to 0 on DD1.
> - */
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - iamr = 0;
> - else
> - iamr = (1ul << 62);
> -
>   /*
>   * Radix always uses key0 of the IAMR to determine if an access is
>   * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
>   * fetch.
>   */
> - mtspr(SPRN_IAMR, iamr);
> + mtspr(SPRN_IAMR, (1ul << 62));
>  }
>  
>  void __init radix__early_init_mmu(void)
> @@ -583,8 +534,6 @@ void __init radix__early_init_mmu(void)
>  
>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>   radix_init_native();
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - update_hid_for_radix();
>   lpcr = mfspr(SPRN_LPCR);
>   mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
>   radix_init_partition_table();
> @@ -608,10 +557,6 @@ void radix__early_init_mmu_secondary(void)
>   * update partition table control register and UPRT
>   */
>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
> -
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - update_hid_for_radix();
> -
>   lpcr = mfspr(SPRN_LPCR);
>   mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
>  
> @@ -1029,8 +974,7 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
>   * To avoid NMMU hang while relaxing access, we need mark
>   * the pte invalid in between.
>   */
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
> -    atomic_read(&mm->context.copros) > 0) {
> + if (atomic_read(&mm->context.copros) > 0) {
>   unsigned long old_pte, new_pte;
>  
>   old_pte = __radix_pte_update(ptep, ~0, 0);
> diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
> index c07c2f0..b0cad4f 100644
> --- a/arch/powerpc/mm/tlb-radix.c
> +++ b/arch/powerpc/mm/tlb-radix.c
> @@ -658,24 +658,6 @@ void radix__flush_tlb_all(void)
>   asm volatile("eieio; tlbsync; ptesync": : :"memory");
>  }
>  
> -void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
> - unsigned long address)
> -{
> - /*
> - * We track page size in pte only for DD1, So we can
> - * call this only on DD1.
> - */
> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - VM_WARN_ON(1);
> - return;
> - }
> -
> - if (old_pte & R_PAGE_LARGE)
> - radix__flush_tlb_page_psize(mm, address, MMU_PAGE_2M);
> - else
> - radix__flush_tlb_page_psize(mm, address, mmu_virtual_psize);
> -}
> -
>  #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
>  extern void radix_kvm_prefetch_workaround(struct mm_struct *mm)
>  {
> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
> index b7a6044..8ce6673 100644
> --- a/arch/powerpc/perf/core-book3s.c
> +++ b/arch/powerpc/perf/core-book3s.c
> @@ -128,10 +128,6 @@ static inline void power_pmu_bhrb_disable(struct perf_event *event) {}
>  static void power_pmu_sched_task(struct perf_event_context *ctx, bool sched_in) {}
>  static inline void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw) {}
>  static void pmao_restore_workaround(bool ebb) { }
> -static bool use_ic(u64 event)
> -{
> - return false;
> -}
>  #endif /* CONFIG_PPC32 */
>  
>  static bool regs_use_siar(struct pt_regs *regs)
> @@ -710,14 +706,6 @@ static void pmao_restore_workaround(bool ebb)
>   mtspr(SPRN_PMC6, pmcs[5]);
>  }
>  
> -static bool use_ic(u64 event)
> -{
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) &&
> - (event == 0x200f2 || event == 0x300f2))
> - return true;
> -
> - return false;
> -}
>  #endif /* CONFIG_PPC64 */
>  
>  static void perf_event_interrupt(struct pt_regs *regs);
> @@ -1042,7 +1030,6 @@ static u64 check_and_compute_delta(u64 prev, u64 val)
>  static void power_pmu_read(struct perf_event *event)
>  {
>   s64 val, delta, prev;
> - struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
>  
>   if (event->hw.state & PERF_HES_STOPPED)
>   return;
> @@ -1052,13 +1039,6 @@ static void power_pmu_read(struct perf_event *event)
>  
>   if (is_ebb_event(event)) {
>   val = read_pmc(event->hw.idx);
> - if (use_ic(event->attr.config)) {
> - val = mfspr(SPRN_IC);
> - if (val > cpuhw->ic_init)
> - val = val - cpuhw->ic_init;
> - else
> - val = val + (0 - cpuhw->ic_init);
> - }
>   local64_set(&event->hw.prev_count, val);
>   return;
>   }
> @@ -1072,13 +1052,6 @@ static void power_pmu_read(struct perf_event *event)
>   prev = local64_read(&event->hw.prev_count);
>   barrier();
>   val = read_pmc(event->hw.idx);
> - if (use_ic(event->attr.config)) {
> - val = mfspr(SPRN_IC);
> - if (val > cpuhw->ic_init)
> - val = val - cpuhw->ic_init;
> - else
> - val = val + (0 - cpuhw->ic_init);
> - }
>   delta = check_and_compute_delta(prev, val);
>   if (!delta)
>   return;
> @@ -1531,13 +1504,6 @@ static int power_pmu_add(struct perf_event *event, int ef_flags)
>   event->attr.branch_sample_type);
>   }
>  
> - /*
> - * Workaround for POWER9 DD1 to use the Instruction Counter
> - * register value for instruction counting
> - */
> - if (use_ic(event->attr.config))
> - cpuhw->ic_init = mfspr(SPRN_IC);
> -
>   perf_pmu_enable(event->pmu);
>   local_irq_restore(flags);
>   return ret;
> diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
> index 2efee3f..177de81 100644
> --- a/arch/powerpc/perf/isa207-common.c
> +++ b/arch/powerpc/perf/isa207-common.c
> @@ -59,7 +59,7 @@ static bool is_event_valid(u64 event)
>  {
>   u64 valid_mask = EVENT_VALID_MASK;
>  
> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>   valid_mask = p9_EVENT_VALID_MASK;
>  
>   return !(event & ~valid_mask);
> @@ -86,8 +86,6 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>   * Incase of Power9:
>   * Marked event: MMCRA[SDAR_MODE] will be set to 0b00 ('No Updates'),
>   *               or if group already have any marked events.
> - * Non-Marked events (for DD1):
> - * MMCRA[SDAR_MODE] will be set to 0b01
>   * For rest
>   * MMCRA[SDAR_MODE] will be set from event code.
>   *      If sdar_mode from event is zero, default to 0b01. Hardware
> @@ -96,7 +94,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>   if (cpu_has_feature(CPU_FTR_ARCH_300)) {
>   if (is_event_marked(event) || (*mmcra & MMCRA_SAMPLE_ENABLE))
>   *mmcra &= MMCRA_SDAR_MODE_NO_UPDATES;
> - else if (!cpu_has_feature(CPU_FTR_POWER9_DD1) && p9_SDAR_MODE(event))
> + else if (p9_SDAR_MODE(event))
>   *mmcra |=  p9_SDAR_MODE(event) << MMCRA_SDAR_MODE_SHIFT;
>   else
>   *mmcra |= MMCRA_SDAR_MODE_DCACHE;
> @@ -106,7 +104,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>  
>  static u64 thresh_cmp_val(u64 value)
>  {
> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>   return value << p9_MMCRA_THR_CMP_SHIFT;
>  
>   return value << MMCRA_THR_CMP_SHIFT;
> @@ -114,7 +112,7 @@ static u64 thresh_cmp_val(u64 value)
>  
>  static unsigned long combine_from_event(u64 event)
>  {
> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>   return p9_EVENT_COMBINE(event);
>  
>   return EVENT_COMBINE(event);
> @@ -122,7 +120,7 @@ static unsigned long combine_from_event(u64 event)
>  
>  static unsigned long combine_shift(unsigned long pmc)
>  {
> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>   return p9_MMCR1_COMBINE_SHIFT(pmc);
>  
>   return MMCR1_COMBINE_SHIFT(pmc);
> diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
> index 6c737d6..479dec2 100644
> --- a/arch/powerpc/perf/isa207-common.h
> +++ b/arch/powerpc/perf/isa207-common.h
> @@ -222,11 +222,6 @@
>   CNST_PMC_VAL(1) | CNST_PMC_VAL(2) | CNST_PMC_VAL(3) | \
>   CNST_PMC_VAL(4) | CNST_PMC_VAL(5) | CNST_PMC_VAL(6) | CNST_NC_VAL
>  
> -/*
> - * Lets restrict use of PMC5 for instruction counting.
> - */
> -#define P9_DD1_TEST_ADDER (ISA207_TEST_ADDER | CNST_PMC_VAL(5))
> -
>  /* Bits in MMCR1 for PowerISA v2.07 */
>  #define MMCR1_UNIT_SHIFT(pmc) (60 - (4 * ((pmc) - 1)))
>  #define MMCR1_COMBINE_SHIFT(pmc) (35 - ((pmc) - 1))
> diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
> index 24b5b5b..3d055c8 100644
> --- a/arch/powerpc/perf/power9-pmu.c
> +++ b/arch/powerpc/perf/power9-pmu.c
> @@ -183,12 +183,6 @@ static struct attribute_group power9_pmu_events_group = {
>   .attrs = power9_events_attr,
>  };
>  
> -static const struct attribute_group *power9_isa207_pmu_attr_groups[] = {
> - &isa207_pmu_format_group,
> - &power9_pmu_events_group,
> - NULL,
> -};
> -
>  PMU_FORMAT_ATTR(event, "config:0-51");
>  PMU_FORMAT_ATTR(pmcxsel, "config:0-7");
>  PMU_FORMAT_ATTR(mark, "config:8");
> @@ -231,17 +225,6 @@ static const struct attribute_group *power9_pmu_attr_groups[] = {
>   NULL,
>  };
>  
> -static int power9_generic_events_dd1[] = {
> - [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
> - [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
> - [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = PM_CMPLU_STALL,
> - [PERF_COUNT_HW_INSTRUCTIONS] = PM_INST_DISP,
> - [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = PM_BR_CMPL_ALT,
> - [PERF_COUNT_HW_BRANCH_MISSES] = PM_BR_MPRED_CMPL,
> - [PERF_COUNT_HW_CACHE_REFERENCES] = PM_LD_REF_L1,
> - [PERF_COUNT_HW_CACHE_MISSES] = PM_LD_MISS_L1_FIN,
> -};
> -
>  static int power9_generic_events[] = {
>   [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
>   [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
> @@ -403,25 +386,6 @@ static int power9_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
>  
>  #undef C
>  
> -static struct power_pmu power9_isa207_pmu = {
> - .name = "POWER9",
> - .n_counter = MAX_PMU_COUNTERS,
> - .add_fields = ISA207_ADD_FIELDS,
> - .test_adder = P9_DD1_TEST_ADDER,
> - .compute_mmcr = isa207_compute_mmcr,
> - .config_bhrb = power9_config_bhrb,
> - .bhrb_filter_map = power9_bhrb_filter_map,
> - .get_constraint = isa207_get_constraint,
> - .get_alternatives = power9_get_alternatives,
> - .disable_pmc = isa207_disable_pmc,
> - .flags = PPMU_NO_SIAR | PPMU_ARCH_207S,
> - .n_generic = ARRAY_SIZE(power9_generic_events_dd1),
> - .generic_events = power9_generic_events_dd1,
> - .cache_events = &power9_cache_events,
> - .attr_groups = power9_isa207_pmu_attr_groups,
> - .bhrb_nr = 32,
> -};
> -
>  static struct power_pmu power9_pmu = {
>   .name = "POWER9",
>   .n_counter = MAX_PMU_COUNTERS,
> @@ -452,23 +416,7 @@ static int __init init_power9_pmu(void)
>      strcmp(cur_cpu_spec->oprofile_cpu_type, "ppc64/power9"))
>   return -ENODEV;
>  
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - /*
> - * Since PM_INST_CMPL may not provide right counts in all
> - * sampling scenarios in power9 DD1, instead use PM_INST_DISP.
> - */
> - EVENT_VAR(PM_INST_CMPL, _g).id = PM_INST_DISP;
> - /*
> - * Power9 DD1 should use PM_BR_CMPL_ALT event code for
> - * "branches" to provide correct counter value.
> - */
> - EVENT_VAR(PM_BR_CMPL, _g).id = PM_BR_CMPL_ALT;
> - EVENT_VAR(PM_BR_CMPL, _c).id = PM_BR_CMPL_ALT;
> - rc = register_power_pmu(&power9_isa207_pmu);
> - } else {
> - rc = register_power_pmu(&power9_pmu);
> - }
> -
> + rc = register_power_pmu(&power9_pmu);
>   if (rc)
>   return rc;
>  
> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
> index 3776a58..113d647 100644
> --- a/arch/powerpc/platforms/powernv/idle.c
> +++ b/arch/powerpc/platforms/powernv/idle.c
> @@ -177,11 +177,6 @@ static void pnv_alloc_idle_core_states(void)
>   paca[cpu].core_idle_state_ptr = core_idle_state;
>   paca[cpu].thread_idle_state = PNV_THREAD_RUNNING;
>   paca[cpu].thread_mask = 1 << j;
> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
> - continue;
> - paca[cpu].thread_sibling_pacas =
> - kmalloc_node(paca_ptr_array_size,
> -     GFP_KERNEL, node);
>   }
>   }
>  
> @@ -813,28 +808,6 @@ static int __init pnv_init_idle_states(void)
>  
>   pnv_alloc_idle_core_states();
>  
> - /*
> - * For each CPU, record its PACA address in each of it's
> - * sibling thread's PACA at the slot corresponding to this
> - * CPU's index in the core.
> - */
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - int cpu;
> -
> - pr_info("powernv: idle: Saving PACA pointers of all CPUs in their thread sibling PACA\n");
> - for_each_present_cpu(cpu) {
> - int base_cpu = cpu_first_thread_sibling(cpu);
> - int idx = cpu_thread_in_core(cpu);
> - int i;
> -
> - for (i = 0; i < threads_per_core; i++) {
> - int j = base_cpu + i;
> -
> - paca[j].thread_sibling_pacas[idx] = &paca[cpu];
> - }
> - }
> - }
> -
>   if (supported_cpuidle_states & OPAL_PM_NAP_ENABLED)
>   ppc_md.power_save = power7_idle;
>  
> diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
> index 9664c84..f7dec55 100644
> --- a/arch/powerpc/platforms/powernv/smp.c
> +++ b/arch/powerpc/platforms/powernv/smp.c
> @@ -283,23 +283,6 @@ static void pnv_cause_ipi(int cpu)
>   ic_cause_ipi(cpu);
>  }
>  
> -static void pnv_p9_dd1_cause_ipi(int cpu)
> -{
> - int this_cpu = get_cpu();
> -
> - /*
> - * POWER9 DD1 has a global addressed msgsnd, but for now we restrict
> - * IPIs to same core, because it requires additional synchronization
> - * for inter-core doorbells which we do not implement.
> - */
> - if (cpumask_test_cpu(cpu, cpu_sibling_mask(this_cpu)))
> - doorbell_global_ipi(cpu);
> - else
> - ic_cause_ipi(cpu);
> -
> - put_cpu();
> -}
> -
>  static void __init pnv_smp_probe(void)
>  {
>   if (xive_enabled())
> @@ -311,14 +294,10 @@ static void __init pnv_smp_probe(void)
>   ic_cause_ipi = smp_ops->cause_ipi;
>   WARN_ON(!ic_cause_ipi);
>  
> - if (cpu_has_feature(CPU_FTR_ARCH_300)) {
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
> - smp_ops->cause_ipi = pnv_p9_dd1_cause_ipi;
> - else
> - smp_ops->cause_ipi = doorbell_global_ipi;
> - } else {
> + if (cpu_has_feature(CPU_FTR_ARCH_300))
> + smp_ops->cause_ipi = doorbell_global_ipi;
> + else
>   smp_ops->cause_ipi = pnv_cause_ipi;
> - }
>   }
>  }
>  
> diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
> index a3b8d7d..82cc999 100644
> --- a/arch/powerpc/sysdev/xive/common.c
> +++ b/arch/powerpc/sysdev/xive/common.c
> @@ -319,7 +319,7 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
>   * The FW told us to call it. This happens for some
>   * interrupt sources that need additional HW whacking
>   * beyond the ESB manipulation. For example LPC interrupts
> - * on P9 DD1.0 need a latch to be clared in the LPC bridge
> + * on P9 DD1.0 needed a latch to be clared in the LPC bridge
>   * itself. The Firmware will take care of it.
>   */
>   if (WARN_ON_ONCE(!xive_ops->eoi))
> @@ -337,9 +337,9 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
>   * This allows us to then do a re-trigger if Q was set
>   * rather than synthesizing an interrupt in software
>   *
> - * For LSIs, using the HW EOI cycle works around a problem
> - * on P9 DD1 PHBs where the other ESB accesses don't work
> - * properly.
> + * For LSIs the HW EOI cycle is used rather than PQ bits,
> + * as they are automatically re-triggred in HW when still
> + * pending.
>   */
>   if (xd->flags & XIVE_IRQ_FLAG_LSI)
>   xive_esb_read(xd, XIVE_ESB_LOAD_EOI);
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index 8a57ff1..c6156b6 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -865,14 +865,6 @@ static inline bool cxl_is_power9(void)
>   return false;
>  }
>  
> -static inline bool cxl_is_power9_dd1(void)
> -{
> - if ((pvr_version_is(PVR_POWER9)) &&
> -    cpu_has_feature(CPU_FTR_POWER9_DD1))
> - return true;
> - return false;
> -}
> -
>  ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
>   loff_t off, size_t count);
>  
> diff --git a/drivers/misc/cxl/cxllib.c b/drivers/misc/cxl/cxllib.c
> index 0bc7c31..5a3f912 100644
> --- a/drivers/misc/cxl/cxllib.c
> +++ b/drivers/misc/cxl/cxllib.c
> @@ -102,10 +102,6 @@ int cxllib_get_xsl_config(struct pci_dev *dev, struct cxllib_xsl_config *cfg)
>   rc = cxl_get_xsl9_dsnctl(dev, capp_unit_id, &cfg->dsnctl);
>   if (rc)
>   return rc;
> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
> - /* workaround for DD1 - nbwind = capiind */
> - cfg->dsnctl |= ((u64)0x02 << (63-47));
> - }
>  
>   cfg->version  = CXL_XSL_CONFIG_CURRENT_VERSION;
>   cfg->log_bar_size = CXL_CAPI_WINDOW_LOG_SIZE;
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index 429d6de..2af0d4c 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -465,23 +465,21 @@ int cxl_get_xsl9_dsnctl(struct pci_dev *dev, u64 capp_unit_id, u64 *reg)
>   /* nMMU_ID Defaults to: b’000001001’*/
>   xsl_dsnctl |= ((u64)0x09 << (63-28));
>  
> - if (!(cxl_is_power9_dd1())) {
> - /*
> - * Used to identify CAPI packets which should be sorted into
> - * the Non-Blocking queues by the PHB. This field should match
> - * the PHB PBL_NBW_CMPM register
> - * nbwind=0x03, bits [57:58], must include capi indicator.
> - * Not supported on P9 DD1.
> - */
> - xsl_dsnctl |= (nbwind << (63-55));
> + /*
> + * Used to identify CAPI packets which should be sorted into
> + * the Non-Blocking queues by the PHB. This field should match
> + * the PHB PBL_NBW_CMPM register
> + * nbwind=0x03, bits [57:58], must include capi indicator.
> + * Not supported on P9 DD1.
> + */
> + xsl_dsnctl |= (nbwind << (63-55));
>  
> - /*
> - * Upper 16b address bits of ASB_Notify messages sent to the
> - * system. Need to match the PHB’s ASN Compare/Mask Register.
> - * Not supported on P9 DD1.
> - */
> - xsl_dsnctl |= asnind;
> - }
> + /*
> + * Upper 16b address bits of ASB_Notify messages sent to the
> + * system. Need to match the PHB’s ASN Compare/Mask Register.
> + * Not supported on P9 DD1.
> + */
> + xsl_dsnctl |= asnind;
>  
>   *reg = xsl_dsnctl;
>   return 0;
> @@ -539,15 +537,8 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter,
>   /* Snoop machines */
>   cxl_p1_write(adapter, CXL_PSL9_APCDEDALLOC, 0x800F000200000000ULL);
>  
> - if (cxl_is_power9_dd1()) {
> - /* Disabling deadlock counter CAR */
> - cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0020000000000001ULL);
> - /* Enable NORST */
> - cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x8000000000000000ULL);
> - } else {
> - /* Enable NORST and DD2 features */
> - cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
> - }
> + /* Enable NORST and DD2 features */
> + cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
>  
>   /*
>   * Check if PSL has data-cache. We need to flush adapter datacache
>
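
An aside on the dt_cpu_ftrs hunk above: the quirk detection keys off the PVR,
and masking with 0xffffefff drops bit 12, which as far as I can tell encodes
the POWER9 chip variant rather than the DD level, so each DD2.x revision
matches a single value. A tiny standalone sketch with made-up sample PVRs:

#include <stdint.h>
#include <stdio.h>

static const char *p9_revision(uint32_t pvr)
{
	if ((pvr & 0xffffefff) == 0x004e0200)
		return "DD2.0";
	if ((pvr & 0xffffefff) == 0x004e0201)
		return "DD2.1";
	if ((pvr & 0xffffefff) == 0x004e0202)
		return "DD2.2";
	return "other/unknown";
}

int main(void)
{
	/* hypothetical PVR samples; bit 12 differs in the second one */
	uint32_t samples[] = { 0x004e0200, 0x004e1201, 0x004e0202 };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("PVR 0x%08x -> %s\n", samples[i], p9_revision(samples[i]));
	return 0;
}

All three still land on a DD2.x match, which is the behaviour the hunk relies
on once it stops looking for the DD1 value 0x004e0100.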




Re: [SRU][Bionic][Cosmic][PATCH 1/3] powerpc/64s: Remove POWER9 DD1 support

Joseph Salisbury-3
On 10/12/2018 10:39 AM, Stefan Bader wrote:

> On 11.10.2018 18:22, Joseph Salisbury wrote:
>> From: Nicholas Piggin <[hidden email]>
>>
>> BugLink: https://bugs.launchpad.net/bugs/1792195
>>
>> POWER9 DD1 was never a product. It is no longer supported by upstream
>> firmware, and it is not effectively supported in Linux due to lack of
>> testing.
> I am not really happy to see such a large portion of code getting ripped out
> *after* release. One never knows whether something might still have been making
> use of some of those parts...
>
> Is this part really strictly required for Bionic?
>
> -Stefan
2bf1071a8d5 is needed as a prereq for f08d08f3db5545.  IBM and I will
investigate whether it is absolutely required, or whether a backport of
f08d08f3db5545 can be done instead.

>
>> Signed-off-by: Nicholas Piggin <[hidden email]>
>> Reviewed-by: Michael Ellerman <[hidden email]>
>> [mpe: Remove arch_make_huge_pte() entirely]
>> Signed-off-by: Michael Ellerman <[hidden email]>
>> (backported from commit 2bf1071a8d50928a4ae366bb3108833166c2b70c)
>> Signed-off-by: Michael Ranweiler <[hidden email]>
>> Signed-off-by: Joseph Salisbury <[hidden email]>
>> ---
>>  arch/powerpc/include/asm/book3s/64/hugetlb.h       | 20 --------
>>  arch/powerpc/include/asm/book3s/64/pgtable.h       |  5 +-
>>  arch/powerpc/include/asm/book3s/64/radix.h         | 35 ++-----------
>>  .../powerpc/include/asm/book3s/64/tlbflush-radix.h |  2 -
>>  arch/powerpc/include/asm/cputable.h                |  6 +--
>>  arch/powerpc/include/asm/paca.h                    |  5 --
>>  arch/powerpc/kernel/asm-offsets.c                  |  1 -
>>  arch/powerpc/kernel/cputable.c                     | 20 --------
>>  arch/powerpc/kernel/dt_cpu_ftrs.c                  | 13 +++--
>>  arch/powerpc/kernel/exceptions-64s.S               |  4 +-
>>  arch/powerpc/kernel/idle_book3s.S                  | 50 ------------------
>>  arch/powerpc/kernel/process.c                      | 10 +---
>>  arch/powerpc/kvm/book3s_64_mmu_radix.c             | 15 +-----
>>  arch/powerpc/kvm/book3s_hv.c                       | 10 ----
>>  arch/powerpc/kvm/book3s_hv_rmhandlers.S            | 16 +-----
>>  arch/powerpc/kvm/book3s_xive_template.c            | 39 +++++---------
>>  arch/powerpc/mm/hash_utils_64.c                    | 30 -----------
>>  arch/powerpc/mm/hugetlbpage.c                      |  9 ++--
>>  arch/powerpc/mm/mmu_context_book3s64.c             | 12 +----
>>  arch/powerpc/mm/pgtable-radix.c                    | 60 +---------------------
>>  arch/powerpc/mm/tlb-radix.c                        | 18 -------
>>  arch/powerpc/perf/core-book3s.c                    | 34 ------------
>>  arch/powerpc/perf/isa207-common.c                  | 12 ++---
>>  arch/powerpc/perf/isa207-common.h                  |  5 --
>>  arch/powerpc/perf/power9-pmu.c                     | 54 +------------------
>>  arch/powerpc/platforms/powernv/idle.c              | 27 ----------
>>  arch/powerpc/platforms/powernv/smp.c               | 27 ++--------
>>  arch/powerpc/sysdev/xive/common.c                  |  8 +--
>>  drivers/misc/cxl/cxl.h                             |  8 ---
>>  drivers/misc/cxl/cxllib.c                          |  4 --
>>  drivers/misc/cxl/pci.c                             | 41 ++++++---------
>>  31 files changed, 70 insertions(+), 530 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
>> index c459f93..5088838 100644
>> --- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
>> +++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
>> @@ -32,26 +32,6 @@ static inline int hstate_get_psize(struct hstate *hstate)
>>   }
>>  }
>>  
>> -#define arch_make_huge_pte arch_make_huge_pte
>> -static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
>> -       struct page *page, int writable)
>> -{
>> - unsigned long page_shift;
>> -
>> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - return entry;
>> -
>> - page_shift = huge_page_shift(hstate_vma(vma));
>> - /*
>> - * We don't support 1G hugetlb pages yet.
>> - */
>> - VM_WARN_ON(page_shift == mmu_psize_defs[MMU_PAGE_1G].shift);
>> - if (page_shift == mmu_psize_defs[MMU_PAGE_2M].shift)
>> - return __pte(pte_val(entry) | R_PAGE_LARGE);
>> - else
>> - return entry;
>> -}
>> -
>>  #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
>>  static inline bool gigantic_page_supported(void)
>>  {
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index bddf18a..674990c 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -454,9 +454,8 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>>  {
>>   if (full && radix_enabled()) {
>>   /*
>> - * Let's skip the DD1 style pte update here. We know that
>> - * this is a full mm pte clear and hence can be sure there is
>> - * no parallel set_pte.
>> + * We know that this is a full mm pte clear and
>> + * hence can be sure there is no parallel set_pte.
>>   */
>>   return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
>>   }
>> diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
>> index 2509344..eaa4591 100644
>> --- a/arch/powerpc/include/asm/book3s/64/radix.h
>> +++ b/arch/powerpc/include/asm/book3s/64/radix.h
>> @@ -12,12 +12,6 @@
>>  #include <asm/book3s/64/radix-4k.h>
>>  #endif
>>  
>> -/*
>> - * For P9 DD1 only, we need to track whether the pte's huge.
>> - */
>> -#define R_PAGE_LARGE _RPAGE_RSV1
>> -
>> -
>>  #ifndef __ASSEMBLY__
>>  #include <asm/book3s/64/tlbflush-radix.h>
>>  #include <asm/cpu_has_feature.h>
>> @@ -153,20 +147,7 @@ static inline unsigned long radix__pte_update(struct mm_struct *mm,
>>  {
>>   unsigned long old_pte;
>>  
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> -
>> - unsigned long new_pte;
>> -
>> - old_pte = __radix_pte_update(ptep, ~0ul, 0);
>> - /*
>> - * new value of pte
>> - */
>> - new_pte = (old_pte | set) & ~clr;
>> - radix__flush_tlb_pte_p9_dd1(old_pte, mm, addr);
>> - if (new_pte)
>> - __radix_pte_update(ptep, 0, new_pte);
>> - } else
>> - old_pte = __radix_pte_update(ptep, clr, set);
>> + old_pte = __radix_pte_update(ptep, clr, set);
>>   if (!huge)
>>   assert_pte_locked(mm, addr);
>>  
>> @@ -241,8 +222,6 @@ static inline int radix__pmd_trans_huge(pmd_t pmd)
>>  
>>  static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
>>  {
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - return __pmd(pmd_val(pmd) | _PAGE_PTE | R_PAGE_LARGE);
>>   return __pmd(pmd_val(pmd) | _PAGE_PTE);
>>  }
>>  static inline void radix__pmdp_huge_split_prepare(struct vm_area_struct *vma,
>> @@ -279,18 +258,14 @@ static inline unsigned long radix__get_tree_size(void)
>>   unsigned long rts_field;
>>   /*
>>   * We support 52 bits, hence:
>> - *  DD1    52-28 = 24, 0b11000
>> - *  Others 52-31 = 21, 0b10101
>> + * bits 52 - 31 = 21, 0b10101
>>   * RTS encoding details
>>   * bits 0 - 3 of rts -> bits 6 - 8 unsigned long
>>   * bits 4 - 5 of rts -> bits 62 - 63 of unsigned long
>>   */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - rts_field = (0x3UL << 61);
>> - else {
>> - rts_field = (0x5UL << 5); /* 6 - 8 bits */
>> - rts_field |= (0x2UL << 61);
>> - }
>> + rts_field = (0x5UL << 5); /* 6 - 8 bits */
>> + rts_field |= (0x2UL << 61);
>> +
>>   return rts_field;
>>  }
>>  
>> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
>> index 6a9e680..a0fe684 100644
>> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
>> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
>> @@ -45,6 +45,4 @@ extern void radix__flush_tlb_lpid_va(unsigned long lpid, unsigned long gpa,
>>       unsigned long page_size);
>>  extern void radix__flush_tlb_lpid(unsigned long lpid);
>>  extern void radix__flush_tlb_all(void);
>> -extern void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
>> - unsigned long address);
>>  #endif
>> diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
>> index 82ca727..aab3b68 100644
>> --- a/arch/powerpc/include/asm/cputable.h
>> +++ b/arch/powerpc/include/asm/cputable.h
>> @@ -222,7 +222,6 @@ enum {
>>  #define CPU_FTR_DAWR LONG_ASM_CONST(0x0000008000000000)
>>  #define CPU_FTR_DABRX LONG_ASM_CONST(0x0000010000000000)
>>  #define CPU_FTR_PMAO_BUG LONG_ASM_CONST(0x0000020000000000)
>> -#define CPU_FTR_POWER9_DD1 LONG_ASM_CONST(0x0000040000000000)
>>  #define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x0000080000000000)
>>  #define CPU_FTR_P9_TM_HV_ASSIST LONG_ASM_CONST(0x0000100000000000)
>>  #define CPU_FTR_P9_TM_XER_SO_BUG LONG_ASM_CONST(0x0000200000000000)
>> @@ -480,8 +479,6 @@ enum {
>>      CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
>>      CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | \
>>      CPU_FTR_P9_TLBIE_BUG | CPU_FTR_P9_TIDR)
>> -#define CPU_FTRS_POWER9_DD1 ((CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1) & \
>> -     (~CPU_FTR_SAO))
>>  #define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
>>  #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
>>  #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_P9_TM_HV_ASSIST | \
>> @@ -505,8 +502,7 @@ enum {
>>       CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
>>       CPU_FTRS_POWER8 | CPU_FTRS_POWER8_DD1 | CPU_FTRS_CELL | \
>>       CPU_FTRS_PA6T | CPU_FTR_VSX | CPU_FTRS_POWER9 | \
>> -     CPU_FTRS_POWER9_DD1 | CPU_FTRS_POWER9_DD2_1 | \
>> -     CPU_FTRS_POWER9_DD2_2)
>> +     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2)
>>  #endif
>>  #else
>>  enum {
>> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
>> index b3ec196..da6a25f 100644
>> --- a/arch/powerpc/include/asm/paca.h
>> +++ b/arch/powerpc/include/asm/paca.h
>> @@ -184,11 +184,6 @@ struct paca_struct {
>>   u8 subcore_sibling_mask;
>>   /* Flag to request this thread not to stop */
>>   atomic_t dont_stop;
>> - /*
>> - * Pointer to an array which contains pointer
>> - * to the sibling threads' paca.
>> - */
>> - struct paca_struct **thread_sibling_pacas;
>>   /* The PSSCR value that the kernel requested before going to stop */
>>   u64 requested_psscr;
>>  
>> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
>> index a65c54c..7e1cbc8 100644
>> --- a/arch/powerpc/kernel/asm-offsets.c
>> +++ b/arch/powerpc/kernel/asm-offsets.c
>> @@ -754,7 +754,6 @@ int main(void)
>>   OFFSET(PACA_THREAD_IDLE_STATE, paca_struct, thread_idle_state);
>>   OFFSET(PACA_THREAD_MASK, paca_struct, thread_mask);
>>   OFFSET(PACA_SUBCORE_SIBLING_MASK, paca_struct, subcore_sibling_mask);
>> - OFFSET(PACA_SIBLING_PACA_PTRS, paca_struct, thread_sibling_pacas);
>>   OFFSET(PACA_REQ_PSSCR, paca_struct, requested_psscr);
>>   OFFSET(PACA_DONT_STOP, paca_struct, dont_stop);
>>  #define STOP_SPR(x, f) OFFSET(x, paca_struct, stop_sprs.f)
>> diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
>> index bc2b461..13acd1c 100644
>> --- a/arch/powerpc/kernel/cputable.c
>> +++ b/arch/powerpc/kernel/cputable.c
>> @@ -527,26 +527,6 @@ static struct cpu_spec __initdata cpu_specs[] = {
>>   .machine_check_early = __machine_check_early_realmode_p8,
>>   .platform = "power8",
>>   },
>> - { /* Power9 DD1*/
>> - .pvr_mask = 0xffffff00,
>> - .pvr_value = 0x004e0100,
>> - .cpu_name = "POWER9 (raw)",
>> - .cpu_features = CPU_FTRS_POWER9_DD1,
>> - .cpu_user_features = COMMON_USER_POWER9,
>> - .cpu_user_features2 = COMMON_USER2_POWER9,
>> - .mmu_features = MMU_FTRS_POWER9,
>> - .icache_bsize = 128,
>> - .dcache_bsize = 128,
>> - .num_pmcs = 6,
>> - .pmc_type = PPC_PMC_IBM,
>> - .oprofile_cpu_type = "ppc64/power9",
>> - .oprofile_type = PPC_OPROFILE_INVALID,
>> - .cpu_setup = __setup_cpu_power9,
>> - .cpu_restore = __restore_cpu_power9,
>> - .flush_tlb = __flush_tlb_power9,
>> - .machine_check_early = __machine_check_early_realmode_p9,
>> - .platform = "power9",
>> - },
>>   { /* Power9 DD2.0 */
>>   .pvr_mask = 0xffffefff,
>>   .pvr_value = 0x004e0200,
>> diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
>> index fa7f063..350ea04 100644
>> --- a/arch/powerpc/kernel/dt_cpu_ftrs.c
>> +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
>> @@ -741,13 +741,16 @@ static __init void cpufeatures_cpu_quirks(void)
>>   /*
>>   * Not all quirks can be derived from the cpufeatures device tree.
>>   */
>> - if ((version & 0xffffff00) == 0x004e0100)
>> - cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD1;
>> + if ((version & 0xffffefff) == 0x004e0200)
>> + ; /* DD2.0 has no feature flag */
>>   else if ((version & 0xffffefff) == 0x004e0201)
>>   cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
>> - else if ((version & 0xffffefff) == 0x004e0202)
>> - cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST |
>> - CPU_FTR_P9_TM_XER_SO_BUG;
>> + else if ((version & 0xffffefff) == 0x004e0202) {
>> + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST;
>> + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG;
>> + cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
>> + } else /* DD2.1 and up have DD2_1 */
>> + cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
>>  
>>   if ((version & 0xffff0000) == 0x004e0000) {
>>   cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_BUG;
>> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
>> index 59f5cfa..724bd35 100644
>> --- a/arch/powerpc/kernel/exceptions-64s.S
>> +++ b/arch/powerpc/kernel/exceptions-64s.S
>> @@ -276,9 +276,7 @@ BEGIN_FTR_SECTION
>>   *
>>   * This interrupt can wake directly from idle. If that is the case,
>>   * the machine check is handled then the idle wakeup code is called
>> - * to restore state. In that case, the POWER9 DD1 idle PACA workaround
>> - * is not applied in the early machine check code, which will cause
>> - * bugs.
>> + * to restore state.
>>   */
>>   mr r11,r1 /* Save r1 */
>>   lhz r10,PACA_IN_MCE(r13)
>> diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
>> index f3ac31c..49439fc 100644
>> --- a/arch/powerpc/kernel/idle_book3s.S
>> +++ b/arch/powerpc/kernel/idle_book3s.S
>> @@ -455,43 +455,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
>>   blr /* return 0 for wakeup cause / SRR1 value */
>>  
>>  /*
>> - * On waking up from stop 0,1,2 with ESL=1 on POWER9 DD1,
>> - * HSPRG0 will be set to the HSPRG0 value of one of the
>> - * threads in this core. Thus the value we have in r13
>> - * may not be this thread's paca pointer.
>> - *
>> - * Fortunately, the TIR remains invariant. Since this thread's
>> - * paca pointer is recorded in all its sibling's paca, we can
>> - * correctly recover this thread's paca pointer if we
>> - * know the index of this thread in the core.
>> - *
>> - * This index can be obtained from the TIR.
>> - *
>> - * i.e, thread's position in the core = TIR.
>> - * If this value is i, then this thread's paca is
>> - * paca->thread_sibling_pacas[i].
>> - */
>> -power9_dd1_recover_paca:
>> - mfspr r4, SPRN_TIR
>> - /*
>> - * Since each entry in thread_sibling_pacas is 8 bytes
>> - * we need to left-shift by 3 bits. Thus r4 = i * 8
>> - */
>> - sldi r4, r4, 3
>> - /* Get &paca->thread_sibling_pacas[0] in r5 */
>> - ld r5, PACA_SIBLING_PACA_PTRS(r13)
>> - /* Load paca->thread_sibling_pacas[i] into r13 */
>> - ldx r13, r4, r5
>> - SET_PACA(r13)
>> - /*
>> - * Indicate that we have lost NVGPR state
>> - * which needs to be restored from the stack.
>> - */
>> - li r3, 1
>> - stb r3,PACA_NAPSTATELOST(r13)
>> - blr
>> -
>> -/*
>>   * Called from machine check handler for powersave wakeups.
>>   * Low level machine check processing has already been done. Now just
>>   * go through the wake up path to get everything in order.
>> @@ -525,9 +488,6 @@ pnv_powersave_wakeup:
>>   ld r2, PACATOC(r13)
>>  
>>  BEGIN_FTR_SECTION
>> -BEGIN_FTR_SECTION_NESTED(70)
>> - bl power9_dd1_recover_paca
>> -END_FTR_SECTION_NESTED_IFSET(CPU_FTR_POWER9_DD1, 70)
>>   bl pnv_restore_hyp_resource_arch300
>>  FTR_SECTION_ELSE
>>   bl pnv_restore_hyp_resource_arch207
>> @@ -587,22 +547,12 @@ END_FTR_SECTION_IFCLR(CPU_FTR_POWER9_DD2_1)
>>   LOAD_REG_ADDRBASE(r5,pnv_first_deep_stop_state)
>>   ld r4,ADDROFF(pnv_first_deep_stop_state)(r5)
>>  
>> -BEGIN_FTR_SECTION_NESTED(71)
>> - /*
>> - * Assume that we are waking up from the state
>> - * same as the Requested Level (RL) in the PSSCR
>> - * which are Bits 60-63
>> - */
>> - ld r5,PACA_REQ_PSSCR(r13)
>> - rldicl  r5,r5,0,60
>> -FTR_SECTION_ELSE_NESTED(71)
>>   /*
>>   * 0-3 bits correspond to Power-Saving Level Status
>>   * which indicates the idle state we are waking up from
>>   */
>>   mfspr r5, SPRN_PSSCR
>>   rldicl  r5,r5,4,60
>> -ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_POWER9_DD1, 71)
>>   li r0, 0 /* clear requested_psscr to say we're awake */
>>   std r0, PACA_REQ_PSSCR(r13)
>>   cmpd cr4,r5,r4
>> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
>> index 83478a9..e73a80d 100644
>> --- a/arch/powerpc/kernel/process.c
>> +++ b/arch/powerpc/kernel/process.c
>> @@ -1247,17 +1247,9 @@ struct task_struct *__switch_to(struct task_struct *prev,
>>   * mappings. If the new process has the foreign real address
>>   * mappings, we must issue a cp_abort to clear any state and
>>   * prevent snooping, corruption or a covert channel.
>> - *
>> - * DD1 allows paste into normal system memory so we do an
>> - * unpaired copy, rather than cp_abort, to clear the buffer,
>> - * since cp_abort is quite expensive.
>>   */
>> - if (current_thread_info()->task->thread.used_vas) {
>> + if (current_thread_info()->task->thread.used_vas)
>>   asm volatile(PPC_CP_ABORT);
>> - } else if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - asm volatile(PPC_COPY(%0, %1)
>> - : : "r"(dummy_copy_buffer), "r"(0));
>> - }
>>   }
>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>  
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> index 5d9bafe..dd8980f 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> @@ -66,10 +66,7 @@ int kvmppc_mmu_radix_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
>>   bits = root & RPDS_MASK;
>>   root = root & RPDB_MASK;
>>  
>> - /* P9 DD1 interprets RTS (radix tree size) differently */
>>   offset = rts + 31;
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - offset -= 3;
>>  
>>   /* current implementations only support 52-bit space */
>>   if (offset != 52)
>> @@ -167,17 +164,7 @@ unsigned long kvmppc_radix_update_pte(struct kvm *kvm, pte_t *ptep,
>>        unsigned long clr, unsigned long set,
>>        unsigned long addr, unsigned int shift)
>>  {
>> - unsigned long old = 0;
>> -
>> - if (!(clr & _PAGE_PRESENT) && cpu_has_feature(CPU_FTR_POWER9_DD1) &&
>> -    pte_present(*ptep)) {
>> - /* have to invalidate it first */
>> - old = __radix_pte_update(ptep, _PAGE_PRESENT, 0);
>> - kvmppc_radix_tlbie_page(kvm, addr, shift);
>> - set |= _PAGE_PRESENT;
>> - old &= _PAGE_PRESENT;
>> - }
>> - return __radix_pte_update(ptep, clr, set) | old;
>> + return __radix_pte_update(ptep, clr, set);
>>  }
>>  
>>  void kvmppc_radix_set_pte_at(struct kvm *kvm, unsigned long addr,
>> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
>> index dc9eb6b..51278f8 100644
>> --- a/arch/powerpc/kvm/book3s_hv.c
>> +++ b/arch/powerpc/kvm/book3s_hv.c
>> @@ -1662,14 +1662,6 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
>>   r = set_vpa(vcpu, &vcpu->arch.dtl, addr, len);
>>   break;
>>   case KVM_REG_PPC_TB_OFFSET:
>> - /*
>> - * POWER9 DD1 has an erratum where writing TBU40 causes
>> - * the timebase to lose ticks.  So we don't let the
>> - * timebase offset be changed on P9 DD1.  (It is
>> - * initialized to zero.)
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - break;
>>   /* round up to multiple of 2^24 */
>>   vcpu->arch.vcore->tb_offset =
>>   ALIGN(set_reg_val(id, *val), 1UL << 24);
>> @@ -1987,8 +1979,6 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
>>   /*
>>   * Set the default HFSCR for the guest from the host value.
>>   * This value is only used on POWER9.
>> - * On POWER9 DD1, TM doesn't work, so we make sure to
>> - * prevent the guest from using it.
>>   * On POWER9, we want to virtualize the doorbell facility, so we
>>   * turn off the HFSCR bit, which causes those instructions to trap.
>>   */
>> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> index 293a659..1c35836 100644
>> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> @@ -907,9 +907,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
>>   mtspr SPRN_PID, r7
>>   mtspr SPRN_WORT, r8
>>  BEGIN_FTR_SECTION
>> - PPC_INVALIDATE_ERAT
>> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
>> -BEGIN_FTR_SECTION
>>   /* POWER8-only registers */
>>   ld r5, VCPU_TCSCR(r4)
>>   ld r6, VCPU_ACOP(r4)
>> @@ -1849,7 +1846,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
>>   ld r5, VCPU_KVM(r9)
>>   lbz r0, KVM_RADIX(r5)
>>   cmpwi cr2, r0, 0
>> - beq cr2, 4f
>> + beq cr2, 2f
>>  
>>   /* Radix: Handle the case where the guest used an illegal PID */
>>   LOAD_REG_ADDR(r4, mmu_base_pid)
>> @@ -1881,11 +1878,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
>>   bdnz 1b
>>   ptesync
>>  
>> -2: /* Flush the ERAT on radix P9 DD1 guest exit */
>> -BEGIN_FTR_SECTION
>> - PPC_INVALIDATE_ERAT
>> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
>> -4:
>> +2:
>>  #endif /* CONFIG_PPC_RADIX_MMU */
>>  
>>   /*
>> @@ -3432,11 +3425,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
>>   mtspr SPRN_CIABR, r0
>>   mtspr SPRN_DAWRX, r0
>>  
>> - /* Flush the ERAT on radix P9 DD1 guest exit */
>> -BEGIN_FTR_SECTION
>> - PPC_INVALIDATE_ERAT
>> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
>> -
>>  BEGIN_MMU_FTR_SECTION
>>   b 4f
>>  END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
>> diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c
>> index c7a5dea..3191961 100644
>> --- a/arch/powerpc/kvm/book3s_xive_template.c
>> +++ b/arch/powerpc/kvm/book3s_xive_template.c
>> @@ -22,18 +22,6 @@ static void GLUE(X_PFX,ack_pending)(struct kvmppc_xive_vcpu *xc)
>>   */
>>   eieio();
>>  
>> - /*
>> - * DD1 bug workaround: If PIPR is less favored than CPPR
>> - * ignore the interrupt or we might incorrectly lose an IPB
>> - * bit.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - __be64 qw1 = __x_readq(__x_tima + TM_QW1_OS);
>> - u8 pipr = be64_to_cpu(qw1) & 0xff;
>> - if (pipr >= xc->hw_cppr)
>> - return;
>> - }
>> -
>>   /* Perform the acknowledge OS to register cycle. */
>>   ack = be16_to_cpu(__x_readw(__x_tima + TM_SPC_ACK_OS_REG));
>>  
>> @@ -86,8 +74,15 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
>>   /* If the XIVE supports the new "store EOI" facility, use it */
>>   if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)
>>   __x_writeq(0, __x_eoi_page(xd) + XIVE_ESB_STORE_EOI);
>> - else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW) {
>> + else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW)
>>   opal_int_eoi(hw_irq);
>> + else if (xd->flags & XIVE_IRQ_FLAG_LSI) {
>> + /*
>> + * For LSIs the HW EOI cycle is used rather than PQ bits,
>> + * as they are automatically re-triggered in HW when still
>> + * pending.
>> + */
>> + __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
>>   } else {
>>   uint64_t eoi_val;
>>  
>> @@ -99,20 +94,12 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
>>   *
>>   * This allows us to then do a re-trigger if Q was set
>>   * rather than synthesizing an interrupt in software
>> - *
>> - * For LSIs, using the HW EOI cycle works around a problem
>> - * on P9 DD1 PHBs where the other ESB accesses don't work
>> - * properly.
>>   */
>> - if (xd->flags & XIVE_IRQ_FLAG_LSI)
>> - __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
>> - else {
>> - eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
>> -
>> - /* Re-trigger if needed */
>> - if ((eoi_val & 1) && __x_trig_page(xd))
>> - __x_writeq(0, __x_trig_page(xd));
>> - }
>> + eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
>> +
>> + /* Re-trigger if needed */
>> + if ((eoi_val & 1) && __x_trig_page(xd))
>> + __x_writeq(0, __x_trig_page(xd));
>>   }
>>  }
>>  
>> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
>> index db84680..06574b4 100644
>> --- a/arch/powerpc/mm/hash_utils_64.c
>> +++ b/arch/powerpc/mm/hash_utils_64.c
>> @@ -802,31 +802,6 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
>>  }
>>  #endif /* CONFIG_MEMORY_HOTPLUG */
>>  
>> -static void update_hid_for_hash(void)
>> -{
>> - unsigned long hid0;
>> - unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
>> -
>> - asm volatile("ptesync": : :"memory");
>> - /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
>> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
>> -     : : "r"(rb), "i"(0), "i"(0), "i"(2), "r"(0) : "memory");
>> - asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
>> - trace_tlbie(0, 0, rb, 0, 2, 0, 0);
>> -
>> - /*
>> - * now switch the HID
>> - */
>> - hid0  = mfspr(SPRN_HID0);
>> - hid0 &= ~HID0_POWER9_RADIX;
>> - mtspr(SPRN_HID0, hid0);
>> - asm volatile("isync": : :"memory");
>> -
>> - /* Wait for it to happen */
>> - while ((mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
>> - cpu_relax();
>> -}
>> -
>>  static void __init hash_init_partition_table(phys_addr_t hash_table,
>>       unsigned long htab_size)
>>  {
>> @@ -839,8 +814,6 @@ static void __init hash_init_partition_table(phys_addr_t hash_table,
>>   htab_size =  __ilog2(htab_size) - 18;
>>   mmu_partition_table_set_entry(0, hash_table | htab_size, 0);
>>   pr_info("Partition table %p\n", partition_tb);
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_hash();
>>  }
>>  
>>  static void __init htab_initialize(void)
>> @@ -1063,9 +1036,6 @@ void hash__early_init_mmu_secondary(void)
>>   /* Initialize hash table for that CPU */
>>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>>  
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_hash();
>> -
>>   if (!cpu_has_feature(CPU_FTR_ARCH_300))
>>   mtspr(SPRN_SDR1, _SDR1);
>>   else
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index 79e1378..6f7b831 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -609,15 +609,12 @@ static int __init add_huge_page_size(unsigned long long size)
>>   * firmware we only add hugetlb support for page sizes that can be
>>   * supported by linux page table layout.
>>   * For now we have
>> - * Radix: 2M
>> + * Radix: 2M and 1G
>>   * Hash: 16M and 16G
>>   */
>>   if (radix_enabled()) {
>> - if (mmu_psize != MMU_PAGE_2M) {
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
>> -    (mmu_psize != MMU_PAGE_1G))
>> - return -EINVAL;
>> - }
>> + if (mmu_psize != MMU_PAGE_2M && mmu_psize != MMU_PAGE_1G)
>> + return -EINVAL;
>>   } else {
>>   if (mmu_psize != MMU_PAGE_16M && mmu_psize != MMU_PAGE_16G)
>>   return -EINVAL;
>> diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
>> index 5066276..208f687 100644
>> --- a/arch/powerpc/mm/mmu_context_book3s64.c
>> +++ b/arch/powerpc/mm/mmu_context_book3s64.c
>> @@ -250,15 +250,7 @@ void arch_exit_mmap(struct mm_struct *mm)
>>  #ifdef CONFIG_PPC_RADIX_MMU
>>  void radix__switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
>>  {
>> -
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - isync();
>> - mtspr(SPRN_PID, next->context.id);
>> - isync();
>> - asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
>> - } else {
>> - mtspr(SPRN_PID, next->context.id);
>> - isync();
>> - }
>> + mtspr(SPRN_PID, next->context.id);
>> + isync();
>>  }
>>  #endif
>> diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
>> index a778560..704362d 100644
>> --- a/arch/powerpc/mm/pgtable-radix.c
>> +++ b/arch/powerpc/mm/pgtable-radix.c
>> @@ -171,16 +171,6 @@ void radix__mark_rodata_ro(void)
>>  {
>>   unsigned long start, end;
>>  
>> - /*
>> - * mark_rodata_ro() will mark itself as !writable at some point.
>> - * Due to DD1 workaround in radix__pte_update(), we'll end up with
>> - * an invalid pte and the system will crash quite severly.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - pr_warn("Warning: Unable to mark rodata read only on P9 DD1\n");
>> - return;
>> - }
>> -
>>   start = (unsigned long)_stext;
>>   end = (unsigned long)__init_begin;
>>  
>> @@ -470,35 +460,6 @@ void __init radix__early_init_devtree(void)
>>   return;
>>  }
>>  
>> -static void update_hid_for_radix(void)
>> -{
>> - unsigned long hid0;
>> - unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
>> -
>> - asm volatile("ptesync": : :"memory");
>> - /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
>> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
>> -     : : "r"(rb), "i"(1), "i"(0), "i"(2), "r"(0) : "memory");
>> - /* prs = 1, ric = 2, rs = 0, r = 1 is = 3 */
>> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
>> -     : : "r"(rb), "i"(1), "i"(1), "i"(2), "r"(0) : "memory");
>> - asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
>> - trace_tlbie(0, 0, rb, 0, 2, 0, 1);
>> - trace_tlbie(0, 0, rb, 0, 2, 1, 1);
>> -
>> - /*
>> - * now switch the HID
>> - */
>> - hid0  = mfspr(SPRN_HID0);
>> - hid0 |= HID0_POWER9_RADIX;
>> - mtspr(SPRN_HID0, hid0);
>> - asm volatile("isync": : :"memory");
>> -
>> - /* Wait for it to happen */
>> - while (!(mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
>> - cpu_relax();
>> -}
>> -
>>  static void radix_init_amor(void)
>>  {
>>   /*
>> @@ -513,22 +474,12 @@ static void radix_init_amor(void)
>>  
>>  static void radix_init_iamr(void)
>>  {
>> - unsigned long iamr;
>> -
>> - /*
>> - * The IAMR should set to 0 on DD1.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - iamr = 0;
>> - else
>> - iamr = (1ul << 62);
>> -
>>   /*
>>   * Radix always uses key0 of the IAMR to determine if an access is
>>   * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
>>   * fetch.
>>   */
>> - mtspr(SPRN_IAMR, iamr);
>> + mtspr(SPRN_IAMR, (1ul << 62));
>>  }
>>  
>>  void __init radix__early_init_mmu(void)
>> @@ -583,8 +534,6 @@ void __init radix__early_init_mmu(void)
>>  
>>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>>   radix_init_native();
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_radix();
>>   lpcr = mfspr(SPRN_LPCR);
>>   mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
>>   radix_init_partition_table();
>> @@ -608,10 +557,6 @@ void radix__early_init_mmu_secondary(void)
>>   * update partition table control register and UPRT
>>   */
>>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>> -
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_radix();
>> -
>>   lpcr = mfspr(SPRN_LPCR);
>>   mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
>>  
>> @@ -1029,8 +974,7 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
>>   * To avoid NMMU hang while relaxing access, we need to mark
>>   * the pte invalid in between.
>>   */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
>> -    atomic_read(&mm->context.copros) > 0) {
>> + if (atomic_read(&mm->context.copros) > 0) {
>>   unsigned long old_pte, new_pte;
>>  
>>   old_pte = __radix_pte_update(ptep, ~0, 0);
>> diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
>> index c07c2f0..b0cad4f 100644
>> --- a/arch/powerpc/mm/tlb-radix.c
>> +++ b/arch/powerpc/mm/tlb-radix.c
>> @@ -658,24 +658,6 @@ void radix__flush_tlb_all(void)
>>   asm volatile("eieio; tlbsync; ptesync": : :"memory");
>>  }
>>  
>> -void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
>> - unsigned long address)
>> -{
>> - /*
>> - * We track page size in pte only for DD1, So we can
>> - * call this only on DD1.
>> - */
>> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - VM_WARN_ON(1);
>> - return;
>> - }
>> -
>> - if (old_pte & R_PAGE_LARGE)
>> - radix__flush_tlb_page_psize(mm, address, MMU_PAGE_2M);
>> - else
>> - radix__flush_tlb_page_psize(mm, address, mmu_virtual_psize);
>> -}
>> -
>>  #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
>>  extern void radix_kvm_prefetch_workaround(struct mm_struct *mm)
>>  {
>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>> index b7a6044..8ce6673 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -128,10 +128,6 @@ static inline void power_pmu_bhrb_disable(struct perf_event *event) {}
>>  static void power_pmu_sched_task(struct perf_event_context *ctx, bool sched_in) {}
>>  static inline void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw) {}
>>  static void pmao_restore_workaround(bool ebb) { }
>> -static bool use_ic(u64 event)
>> -{
>> - return false;
>> -}
>>  #endif /* CONFIG_PPC32 */
>>  
>>  static bool regs_use_siar(struct pt_regs *regs)
>> @@ -710,14 +706,6 @@ static void pmao_restore_workaround(bool ebb)
>>   mtspr(SPRN_PMC6, pmcs[5]);
>>  }
>>  
>> -static bool use_ic(u64 event)
>> -{
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) &&
>> - (event == 0x200f2 || event == 0x300f2))
>> - return true;
>> -
>> - return false;
>> -}
>>  #endif /* CONFIG_PPC64 */
>>  
>>  static void perf_event_interrupt(struct pt_regs *regs);
>> @@ -1042,7 +1030,6 @@ static u64 check_and_compute_delta(u64 prev, u64 val)
>>  static void power_pmu_read(struct perf_event *event)
>>  {
>>   s64 val, delta, prev;
>> - struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
>>  
>>   if (event->hw.state & PERF_HES_STOPPED)
>>   return;
>> @@ -1052,13 +1039,6 @@ static void power_pmu_read(struct perf_event *event)
>>  
>>   if (is_ebb_event(event)) {
>>   val = read_pmc(event->hw.idx);
>> - if (use_ic(event->attr.config)) {
>> - val = mfspr(SPRN_IC);
>> - if (val > cpuhw->ic_init)
>> - val = val - cpuhw->ic_init;
>> - else
>> - val = val + (0 - cpuhw->ic_init);
>> - }
>>   local64_set(&event->hw.prev_count, val);
>>   return;
>>   }
>> @@ -1072,13 +1052,6 @@ static void power_pmu_read(struct perf_event *event)
>>   prev = local64_read(&event->hw.prev_count);
>>   barrier();
>>   val = read_pmc(event->hw.idx);
>> - if (use_ic(event->attr.config)) {
>> - val = mfspr(SPRN_IC);
>> - if (val > cpuhw->ic_init)
>> - val = val - cpuhw->ic_init;
>> - else
>> - val = val + (0 - cpuhw->ic_init);
>> - }
>>   delta = check_and_compute_delta(prev, val);
>>   if (!delta)
>>   return;
>> @@ -1531,13 +1504,6 @@ static int power_pmu_add(struct perf_event *event, int ef_flags)
>>   event->attr.branch_sample_type);
>>   }
>>  
>> - /*
>> - * Workaround for POWER9 DD1 to use the Instruction Counter
>> - * register value for instruction counting
>> - */
>> - if (use_ic(event->attr.config))
>> - cpuhw->ic_init = mfspr(SPRN_IC);
>> -
>>   perf_pmu_enable(event->pmu);
>>   local_irq_restore(flags);
>>   return ret;
>> diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
>> index 2efee3f..177de81 100644
>> --- a/arch/powerpc/perf/isa207-common.c
>> +++ b/arch/powerpc/perf/isa207-common.c
>> @@ -59,7 +59,7 @@ static bool is_event_valid(u64 event)
>>  {
>>   u64 valid_mask = EVENT_VALID_MASK;
>>  
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   valid_mask = p9_EVENT_VALID_MASK;
>>  
>>   return !(event & ~valid_mask);
>> @@ -86,8 +86,6 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>>   * In case of Power9:
>>   * Marked event: MMCRA[SDAR_MODE] will be set to 0b00 ('No Updates'),
>>   *               or if group already have any marked events.
>> - * Non-Marked events (for DD1):
>> - * MMCRA[SDAR_MODE] will be set to 0b01
>>   * For rest
>>   * MMCRA[SDAR_MODE] will be set from event code.
>>   *      If sdar_mode from event is zero, default to 0b01. Hardware
>> @@ -96,7 +94,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>>   if (cpu_has_feature(CPU_FTR_ARCH_300)) {
>>   if (is_event_marked(event) || (*mmcra & MMCRA_SAMPLE_ENABLE))
>>   *mmcra &= MMCRA_SDAR_MODE_NO_UPDATES;
>> - else if (!cpu_has_feature(CPU_FTR_POWER9_DD1) && p9_SDAR_MODE(event))
>> + else if (p9_SDAR_MODE(event))
>>   *mmcra |=  p9_SDAR_MODE(event) << MMCRA_SDAR_MODE_SHIFT;
>>   else
>>   *mmcra |= MMCRA_SDAR_MODE_DCACHE;
>> @@ -106,7 +104,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>>  
>>  static u64 thresh_cmp_val(u64 value)
>>  {
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   return value << p9_MMCRA_THR_CMP_SHIFT;
>>  
>>   return value << MMCRA_THR_CMP_SHIFT;
>> @@ -114,7 +112,7 @@ static u64 thresh_cmp_val(u64 value)
>>  
>>  static unsigned long combine_from_event(u64 event)
>>  {
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   return p9_EVENT_COMBINE(event);
>>  
>>   return EVENT_COMBINE(event);
>> @@ -122,7 +120,7 @@ static unsigned long combine_from_event(u64 event)
>>  
>>  static unsigned long combine_shift(unsigned long pmc)
>>  {
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   return p9_MMCR1_COMBINE_SHIFT(pmc);
>>  
>>   return MMCR1_COMBINE_SHIFT(pmc);
>> diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
>> index 6c737d6..479dec2 100644
>> --- a/arch/powerpc/perf/isa207-common.h
>> +++ b/arch/powerpc/perf/isa207-common.h
>> @@ -222,11 +222,6 @@
>>   CNST_PMC_VAL(1) | CNST_PMC_VAL(2) | CNST_PMC_VAL(3) | \
>>   CNST_PMC_VAL(4) | CNST_PMC_VAL(5) | CNST_PMC_VAL(6) | CNST_NC_VAL
>>  
>> -/*
>> - * Lets restrict use of PMC5 for instruction counting.
>> - */
>> -#define P9_DD1_TEST_ADDER (ISA207_TEST_ADDER | CNST_PMC_VAL(5))
>> -
>>  /* Bits in MMCR1 for PowerISA v2.07 */
>>  #define MMCR1_UNIT_SHIFT(pmc) (60 - (4 * ((pmc) - 1)))
>>  #define MMCR1_COMBINE_SHIFT(pmc) (35 - ((pmc) - 1))
>> diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
>> index 24b5b5b..3d055c8 100644
>> --- a/arch/powerpc/perf/power9-pmu.c
>> +++ b/arch/powerpc/perf/power9-pmu.c
>> @@ -183,12 +183,6 @@ static struct attribute_group power9_pmu_events_group = {
>>   .attrs = power9_events_attr,
>>  };
>>  
>> -static const struct attribute_group *power9_isa207_pmu_attr_groups[] = {
>> - &isa207_pmu_format_group,
>> - &power9_pmu_events_group,
>> - NULL,
>> -};
>> -
>>  PMU_FORMAT_ATTR(event, "config:0-51");
>>  PMU_FORMAT_ATTR(pmcxsel, "config:0-7");
>>  PMU_FORMAT_ATTR(mark, "config:8");
>> @@ -231,17 +225,6 @@ static const struct attribute_group *power9_pmu_attr_groups[] = {
>>   NULL,
>>  };
>>  
>> -static int power9_generic_events_dd1[] = {
>> - [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
>> - [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
>> - [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = PM_CMPLU_STALL,
>> - [PERF_COUNT_HW_INSTRUCTIONS] = PM_INST_DISP,
>> - [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = PM_BR_CMPL_ALT,
>> - [PERF_COUNT_HW_BRANCH_MISSES] = PM_BR_MPRED_CMPL,
>> - [PERF_COUNT_HW_CACHE_REFERENCES] = PM_LD_REF_L1,
>> - [PERF_COUNT_HW_CACHE_MISSES] = PM_LD_MISS_L1_FIN,
>> -};
>> -
>>  static int power9_generic_events[] = {
>>   [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
>>   [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
>> @@ -403,25 +386,6 @@ static int power9_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
>>  
>>  #undef C
>>  
>> -static struct power_pmu power9_isa207_pmu = {
>> - .name = "POWER9",
>> - .n_counter = MAX_PMU_COUNTERS,
>> - .add_fields = ISA207_ADD_FIELDS,
>> - .test_adder = P9_DD1_TEST_ADDER,
>> - .compute_mmcr = isa207_compute_mmcr,
>> - .config_bhrb = power9_config_bhrb,
>> - .bhrb_filter_map = power9_bhrb_filter_map,
>> - .get_constraint = isa207_get_constraint,
>> - .get_alternatives = power9_get_alternatives,
>> - .disable_pmc = isa207_disable_pmc,
>> - .flags = PPMU_NO_SIAR | PPMU_ARCH_207S,
>> - .n_generic = ARRAY_SIZE(power9_generic_events_dd1),
>> - .generic_events = power9_generic_events_dd1,
>> - .cache_events = &power9_cache_events,
>> - .attr_groups = power9_isa207_pmu_attr_groups,
>> - .bhrb_nr = 32,
>> -};
>> -
>>  static struct power_pmu power9_pmu = {
>>   .name = "POWER9",
>>   .n_counter = MAX_PMU_COUNTERS,
>> @@ -452,23 +416,7 @@ static int __init init_power9_pmu(void)
>>      strcmp(cur_cpu_spec->oprofile_cpu_type, "ppc64/power9"))
>>   return -ENODEV;
>>  
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - /*
>> - * Since PM_INST_CMPL may not provide right counts in all
>> - * sampling scenarios in power9 DD1, instead use PM_INST_DISP.
>> - */
>> - EVENT_VAR(PM_INST_CMPL, _g).id = PM_INST_DISP;
>> - /*
>> - * Power9 DD1 should use PM_BR_CMPL_ALT event code for
>> - * "branches" to provide correct counter value.
>> - */
>> - EVENT_VAR(PM_BR_CMPL, _g).id = PM_BR_CMPL_ALT;
>> - EVENT_VAR(PM_BR_CMPL, _c).id = PM_BR_CMPL_ALT;
>> - rc = register_power_pmu(&power9_isa207_pmu);
>> - } else {
>> - rc = register_power_pmu(&power9_pmu);
>> - }
>> -
>> + rc = register_power_pmu(&power9_pmu);
>>   if (rc)
>>   return rc;
>>  
>> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
>> index 3776a58..113d647 100644
>> --- a/arch/powerpc/platforms/powernv/idle.c
>> +++ b/arch/powerpc/platforms/powernv/idle.c
>> @@ -177,11 +177,6 @@ static void pnv_alloc_idle_core_states(void)
>>   paca[cpu].core_idle_state_ptr = core_idle_state;
>>   paca[cpu].thread_idle_state = PNV_THREAD_RUNNING;
>>   paca[cpu].thread_mask = 1 << j;
>> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - continue;
>> - paca[cpu].thread_sibling_pacas =
>> - kmalloc_node(paca_ptr_array_size,
>> -     GFP_KERNEL, node);
>>   }
>>   }
>>  
>> @@ -813,28 +808,6 @@ static int __init pnv_init_idle_states(void)
>>  
>>   pnv_alloc_idle_core_states();
>>  
>> - /*
>> - * For each CPU, record its PACA address in each of it's
>> - * sibling thread's PACA at the slot corresponding to this
>> - * CPU's index in the core.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - int cpu;
>> -
>> - pr_info("powernv: idle: Saving PACA pointers of all CPUs in their thread sibling PACA\n");
>> - for_each_present_cpu(cpu) {
>> - int base_cpu = cpu_first_thread_sibling(cpu);
>> - int idx = cpu_thread_in_core(cpu);
>> - int i;
>> -
>> - for (i = 0; i < threads_per_core; i++) {
>> - int j = base_cpu + i;
>> -
>> - paca[j].thread_sibling_pacas[idx] = &paca[cpu];
>> - }
>> - }
>> - }
>> -
>>   if (supported_cpuidle_states & OPAL_PM_NAP_ENABLED)
>>   ppc_md.power_save = power7_idle;
>>  
>> diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
>> index 9664c84..f7dec55 100644
>> --- a/arch/powerpc/platforms/powernv/smp.c
>> +++ b/arch/powerpc/platforms/powernv/smp.c
>> @@ -283,23 +283,6 @@ static void pnv_cause_ipi(int cpu)
>>   ic_cause_ipi(cpu);
>>  }
>>  
>> -static void pnv_p9_dd1_cause_ipi(int cpu)
>> -{
>> - int this_cpu = get_cpu();
>> -
>> - /*
>> - * POWER9 DD1 has a global addressed msgsnd, but for now we restrict
>> - * IPIs to same core, because it requires additional synchronization
>> - * for inter-core doorbells which we do not implement.
>> - */
>> - if (cpumask_test_cpu(cpu, cpu_sibling_mask(this_cpu)))
>> - doorbell_global_ipi(cpu);
>> - else
>> - ic_cause_ipi(cpu);
>> -
>> - put_cpu();
>> -}
>> -
>>  static void __init pnv_smp_probe(void)
>>  {
>>   if (xive_enabled())
>> @@ -311,14 +294,10 @@ static void __init pnv_smp_probe(void)
>>   ic_cause_ipi = smp_ops->cause_ipi;
>>   WARN_ON(!ic_cause_ipi);
>>  
>> - if (cpu_has_feature(CPU_FTR_ARCH_300)) {
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - smp_ops->cause_ipi = pnv_p9_dd1_cause_ipi;
>> - else
>> - smp_ops->cause_ipi = doorbell_global_ipi;
>> - } else {
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>> + smp_ops->cause_ipi = doorbell_global_ipi;
>> + else
>>   smp_ops->cause_ipi = pnv_cause_ipi;
>> - }
>>   }
>>  }
>>  
>> diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
>> index a3b8d7d..82cc999 100644
>> --- a/arch/powerpc/sysdev/xive/common.c
>> +++ b/arch/powerpc/sysdev/xive/common.c
>> @@ -319,7 +319,7 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
>>   * The FW told us to call it. This happens for some
>>   * interrupt sources that need additional HW whacking
>>   * beyond the ESB manipulation. For example LPC interrupts
>> - * on P9 DD1.0 need a latch to be clared in the LPC bridge
>> + * on P9 DD1.0 needed a latch to be cleared in the LPC bridge
>>   * itself. The Firmware will take care of it.
>>   */
>>   if (WARN_ON_ONCE(!xive_ops->eoi))
>> @@ -337,9 +337,9 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
>>   * This allows us to then do a re-trigger if Q was set
>>   * rather than synthesizing an interrupt in software
>>   *
>> - * For LSIs, using the HW EOI cycle works around a problem
>> - * on P9 DD1 PHBs where the other ESB accesses don't work
>> - * properly.
>> + * For LSIs the HW EOI cycle is used rather than PQ bits,
>> + * as they are automatically re-triggered in HW when still
>> + * pending.
>>   */
>>   if (xd->flags & XIVE_IRQ_FLAG_LSI)
>>   xive_esb_read(xd, XIVE_ESB_LOAD_EOI);
>> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
>> index 8a57ff1..c6156b6 100644
>> --- a/drivers/misc/cxl/cxl.h
>> +++ b/drivers/misc/cxl/cxl.h
>> @@ -865,14 +865,6 @@ static inline bool cxl_is_power9(void)
>>   return false;
>>  }
>>  
>> -static inline bool cxl_is_power9_dd1(void)
>> -{
>> - if ((pvr_version_is(PVR_POWER9)) &&
>> -    cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - return true;
>> - return false;
>> -}
>> -
>>  ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
>>   loff_t off, size_t count);
>>  
>> diff --git a/drivers/misc/cxl/cxllib.c b/drivers/misc/cxl/cxllib.c
>> index 0bc7c31..5a3f912 100644
>> --- a/drivers/misc/cxl/cxllib.c
>> +++ b/drivers/misc/cxl/cxllib.c
>> @@ -102,10 +102,6 @@ int cxllib_get_xsl_config(struct pci_dev *dev, struct cxllib_xsl_config *cfg)
>>   rc = cxl_get_xsl9_dsnctl(dev, capp_unit_id, &cfg->dsnctl);
>>   if (rc)
>>   return rc;
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - /* workaround for DD1 - nbwind = capiind */
>> - cfg->dsnctl |= ((u64)0x02 << (63-47));
>> - }
>>  
>>   cfg->version  = CXL_XSL_CONFIG_CURRENT_VERSION;
>>   cfg->log_bar_size = CXL_CAPI_WINDOW_LOG_SIZE;
>> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
>> index 429d6de..2af0d4c 100644
>> --- a/drivers/misc/cxl/pci.c
>> +++ b/drivers/misc/cxl/pci.c
>> @@ -465,23 +465,21 @@ int cxl_get_xsl9_dsnctl(struct pci_dev *dev, u64 capp_unit_id, u64 *reg)
>>   /* nMMU_ID Defaults to: b’000001001’*/
>>   xsl_dsnctl |= ((u64)0x09 << (63-28));
>>  
>> - if (!(cxl_is_power9_dd1())) {
>> - /*
>> - * Used to identify CAPI packets which should be sorted into
>> - * the Non-Blocking queues by the PHB. This field should match
>> - * the PHB PBL_NBW_CMPM register
>> - * nbwind=0x03, bits [57:58], must include capi indicator.
>> - * Not supported on P9 DD1.
>> - */
>> - xsl_dsnctl |= (nbwind << (63-55));
>> + /*
>> + * Used to identify CAPI packets which should be sorted into
>> + * the Non-Blocking queues by the PHB. This field should match
>> + * the PHB PBL_NBW_CMPM register
>> + * nbwind=0x03, bits [57:58], must include capi indicator.
>> + * Not supported on P9 DD1.
>> + */
>> + xsl_dsnctl |= (nbwind << (63-55));
>>  
>> - /*
>> - * Upper 16b address bits of ASB_Notify messages sent to the
>> - * system. Need to match the PHB’s ASN Compare/Mask Register.
>> - * Not supported on P9 DD1.
>> - */
>> - xsl_dsnctl |= asnind;
>> - }
>> + /*
>> + * Upper 16b address bits of ASB_Notify messages sent to the
>> + * system. Need to match the PHB’s ASN Compare/Mask Register.
>> + * Not supported on P9 DD1.
>> + */
>> + xsl_dsnctl |= asnind;
>>  
>>   *reg = xsl_dsnctl;
>>   return 0;
>> @@ -539,15 +537,8 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter,
>>   /* Snoop machines */
>>   cxl_p1_write(adapter, CXL_PSL9_APCDEDALLOC, 0x800F000200000000ULL);
>>  
>> - if (cxl_is_power9_dd1()) {
>> - /* Disabling deadlock counter CAR */
>> - cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0020000000000001ULL);
>> - /* Enable NORST */
>> - cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x8000000000000000ULL);
>> - } else {
>> - /* Enable NORST and DD2 features */
>> - cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
>> - }
>> + /* Enable NORST and DD2 features */
>> + cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
>>  
>>   /*
>>   * Check if PSL has data-cache. We need to flush adapter datacache
>>
>
>
>


NAK: [SRU][Bionic][Cosmic][PATCH 0/3] Fixes for LP:1792195

Joseph Salisbury-3
In reply to this post by Joseph Salisbury-3
IBM confirmed commit 2bf1071a8d50928a is not specifically needed for
this bug, and they will open a new bug for it.  I'll resubmit this SRU
request without that commit.

Re: [SRU][Bionic][Cosmic][PATCH 1/3] powerpc/64s: Remove POWER9 DD1 support

Joseph Salisbury-3
In reply to this post by Stefan Bader-2
On 10/12/2018 10:39 AM, Stefan Bader wrote:

> On 11.10.2018 18:22, Joseph Salisbury wrote:
>> From: Nicholas Piggin <[hidden email]>
>>
>> BugLink: https://bugs.launchpad.net/bugs/1792195
>>
>> POWER9 DD1 was never a product. It is no longer supported by upstream
>> firmware, and it is not effectively supported in Linux due to lack of
>> testing.
> I am not really happy to see such a large portion of code getting ripped out
> *after* release. One never knows whether something still made use of some of
> those parts...
>
> Is this part really strictly required for Bionic?
>
> -Stefan
IBM confirmed that commit 2bf1071a8d5092 is not specifically needed for
this bug.  I had them test without that commit and they confirmed the
bug is still fixed.  They still want that commit in at some point, but
will open a new bug with that request.  Here is the feedback from them:

"I think that leaving out DD1 is ok - but at some point if it's still
there it may make some other backport much harder - but we can also just
deal with that when needed.

The backport removing DD1 also included the fix for powerpc/64s:
dt_cpu_ftrs fix POWER9 DD2.2 and above (9e9626e)".

>
>> Signed-off-by: Nicholas Piggin <[hidden email]>
>> Reviewed-by: Michael Ellerman <[hidden email]>
>> [mpe: Remove arch_make_huge_pte() entirely]
>> Signed-off-by: Michael Ellerman <[hidden email]>
>> (backported from commit 2bf1071a8d50928a4ae366bb3108833166c2b70c)
>> Signed-off-by: Michael Ranweiler <[hidden email]>
>> Signed-off-by: Joseph Salisbury <[hidden email]>
>> ---
>>  arch/powerpc/include/asm/book3s/64/hugetlb.h       | 20 --------
>>  arch/powerpc/include/asm/book3s/64/pgtable.h       |  5 +-
>>  arch/powerpc/include/asm/book3s/64/radix.h         | 35 ++-----------
>>  .../powerpc/include/asm/book3s/64/tlbflush-radix.h |  2 -
>>  arch/powerpc/include/asm/cputable.h                |  6 +--
>>  arch/powerpc/include/asm/paca.h                    |  5 --
>>  arch/powerpc/kernel/asm-offsets.c                  |  1 -
>>  arch/powerpc/kernel/cputable.c                     | 20 --------
>>  arch/powerpc/kernel/dt_cpu_ftrs.c                  | 13 +++--
>>  arch/powerpc/kernel/exceptions-64s.S               |  4 +-
>>  arch/powerpc/kernel/idle_book3s.S                  | 50 ------------------
>>  arch/powerpc/kernel/process.c                      | 10 +---
>>  arch/powerpc/kvm/book3s_64_mmu_radix.c             | 15 +-----
>>  arch/powerpc/kvm/book3s_hv.c                       | 10 ----
>>  arch/powerpc/kvm/book3s_hv_rmhandlers.S            | 16 +-----
>>  arch/powerpc/kvm/book3s_xive_template.c            | 39 +++++---------
>>  arch/powerpc/mm/hash_utils_64.c                    | 30 -----------
>>  arch/powerpc/mm/hugetlbpage.c                      |  9 ++--
>>  arch/powerpc/mm/mmu_context_book3s64.c             | 12 +----
>>  arch/powerpc/mm/pgtable-radix.c                    | 60 +---------------------
>>  arch/powerpc/mm/tlb-radix.c                        | 18 -------
>>  arch/powerpc/perf/core-book3s.c                    | 34 ------------
>>  arch/powerpc/perf/isa207-common.c                  | 12 ++---
>>  arch/powerpc/perf/isa207-common.h                  |  5 --
>>  arch/powerpc/perf/power9-pmu.c                     | 54 +------------------
>>  arch/powerpc/platforms/powernv/idle.c              | 27 ----------
>>  arch/powerpc/platforms/powernv/smp.c               | 27 ++--------
>>  arch/powerpc/sysdev/xive/common.c                  |  8 +--
>>  drivers/misc/cxl/cxl.h                             |  8 ---
>>  drivers/misc/cxl/cxllib.c                          |  4 --
>>  drivers/misc/cxl/pci.c                             | 41 ++++++---------
>>  31 files changed, 70 insertions(+), 530 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h
>> index c459f93..5088838 100644
>> --- a/arch/powerpc/include/asm/book3s/64/hugetlb.h
>> +++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h
>> @@ -32,26 +32,6 @@ static inline int hstate_get_psize(struct hstate *hstate)
>>   }
>>  }
>>  
>> -#define arch_make_huge_pte arch_make_huge_pte
>> -static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
>> -       struct page *page, int writable)
>> -{
>> - unsigned long page_shift;
>> -
>> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - return entry;
>> -
>> - page_shift = huge_page_shift(hstate_vma(vma));
>> - /*
>> - * We don't support 1G hugetlb pages yet.
>> - */
>> - VM_WARN_ON(page_shift == mmu_psize_defs[MMU_PAGE_1G].shift);
>> - if (page_shift == mmu_psize_defs[MMU_PAGE_2M].shift)
>> - return __pte(pte_val(entry) | R_PAGE_LARGE);
>> - else
>> - return entry;
>> -}
>> -
>>  #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
>>  static inline bool gigantic_page_supported(void)
>>  {
>> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> index bddf18a..674990c 100644
>> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
>> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
>> @@ -454,9 +454,8 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
>>  {
>>   if (full && radix_enabled()) {
>>   /*
>> - * Let's skip the DD1 style pte update here. We know that
>> - * this is a full mm pte clear and hence can be sure there is
>> - * no parallel set_pte.
>> + * We know that this is a full mm pte clear and
>> + * hence can be sure there is no parallel set_pte.
>>   */
>>   return radix__ptep_get_and_clear_full(mm, addr, ptep, full);
>>   }
>> diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
>> index 2509344..eaa4591 100644
>> --- a/arch/powerpc/include/asm/book3s/64/radix.h
>> +++ b/arch/powerpc/include/asm/book3s/64/radix.h
>> @@ -12,12 +12,6 @@
>>  #include <asm/book3s/64/radix-4k.h>
>>  #endif
>>  
>> -/*
>> - * For P9 DD1 only, we need to track whether the pte's huge.
>> - */
>> -#define R_PAGE_LARGE _RPAGE_RSV1
>> -
>> -
>>  #ifndef __ASSEMBLY__
>>  #include <asm/book3s/64/tlbflush-radix.h>
>>  #include <asm/cpu_has_feature.h>
>> @@ -153,20 +147,7 @@ static inline unsigned long radix__pte_update(struct mm_struct *mm,
>>  {
>>   unsigned long old_pte;
>>  
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> -
>> - unsigned long new_pte;
>> -
>> - old_pte = __radix_pte_update(ptep, ~0ul, 0);
>> - /*
>> - * new value of pte
>> - */
>> - new_pte = (old_pte | set) & ~clr;
>> - radix__flush_tlb_pte_p9_dd1(old_pte, mm, addr);
>> - if (new_pte)
>> - __radix_pte_update(ptep, 0, new_pte);
>> - } else
>> - old_pte = __radix_pte_update(ptep, clr, set);
>> + old_pte = __radix_pte_update(ptep, clr, set);
>>   if (!huge)
>>   assert_pte_locked(mm, addr);
>>  
>> @@ -241,8 +222,6 @@ static inline int radix__pmd_trans_huge(pmd_t pmd)
>>  
>>  static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
>>  {
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - return __pmd(pmd_val(pmd) | _PAGE_PTE | R_PAGE_LARGE);
>>   return __pmd(pmd_val(pmd) | _PAGE_PTE);
>>  }
>>  static inline void radix__pmdp_huge_split_prepare(struct vm_area_struct *vma,
>> @@ -279,18 +258,14 @@ static inline unsigned long radix__get_tree_size(void)
>>   unsigned long rts_field;
>>   /*
>>   * We support 52 bits, hence:
>> - *  DD1    52-28 = 24, 0b11000
>> - *  Others 52-31 = 21, 0b10101
>> + * bits 52 - 31 = 21, 0b10101
>>   * RTS encoding details
>>   * bits 0 - 3 of rts -> bits 6 - 8 unsigned long
>>   * bits 4 - 5 of rts -> bits 62 - 63 of unsigned long
>>   */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - rts_field = (0x3UL << 61);
>> - else {
>> - rts_field = (0x5UL << 5); /* 6 - 8 bits */
>> - rts_field |= (0x2UL << 61);
>> - }
>> + rts_field = (0x5UL << 5); /* 6 - 8 bits */
>> + rts_field |= (0x2UL << 61);
>> +
>>   return rts_field;
>>  }
>>  
>> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
>> index 6a9e680..a0fe684 100644
>> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
>> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
>> @@ -45,6 +45,4 @@ extern void radix__flush_tlb_lpid_va(unsigned long lpid, unsigned long gpa,
>>       unsigned long page_size);
>>  extern void radix__flush_tlb_lpid(unsigned long lpid);
>>  extern void radix__flush_tlb_all(void);
>> -extern void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
>> - unsigned long address);
>>  #endif
>> diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
>> index 82ca727..aab3b68 100644
>> --- a/arch/powerpc/include/asm/cputable.h
>> +++ b/arch/powerpc/include/asm/cputable.h
>> @@ -222,7 +222,6 @@ enum {
>>  #define CPU_FTR_DAWR LONG_ASM_CONST(0x0000008000000000)
>>  #define CPU_FTR_DABRX LONG_ASM_CONST(0x0000010000000000)
>>  #define CPU_FTR_PMAO_BUG LONG_ASM_CONST(0x0000020000000000)
>> -#define CPU_FTR_POWER9_DD1 LONG_ASM_CONST(0x0000040000000000)
>>  #define CPU_FTR_POWER9_DD2_1 LONG_ASM_CONST(0x0000080000000000)
>>  #define CPU_FTR_P9_TM_HV_ASSIST LONG_ASM_CONST(0x0000100000000000)
>>  #define CPU_FTR_P9_TM_XER_SO_BUG LONG_ASM_CONST(0x0000200000000000)
>> @@ -480,8 +479,6 @@ enum {
>>      CPU_FTR_DBELL | CPU_FTR_HAS_PPR | CPU_FTR_DAWR | \
>>      CPU_FTR_ARCH_207S | CPU_FTR_TM_COMP | CPU_FTR_ARCH_300 | \
>>      CPU_FTR_P9_TLBIE_BUG | CPU_FTR_P9_TIDR)
>> -#define CPU_FTRS_POWER9_DD1 ((CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD1) & \
>> -     (~CPU_FTR_SAO))
>>  #define CPU_FTRS_POWER9_DD2_0 CPU_FTRS_POWER9
>>  #define CPU_FTRS_POWER9_DD2_1 (CPU_FTRS_POWER9 | CPU_FTR_POWER9_DD2_1)
>>  #define CPU_FTRS_POWER9_DD2_2 (CPU_FTRS_POWER9 | CPU_FTR_P9_TM_HV_ASSIST | \
>> @@ -505,8 +502,7 @@ enum {
>>       CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
>>       CPU_FTRS_POWER8 | CPU_FTRS_POWER8_DD1 | CPU_FTRS_CELL | \
>>       CPU_FTRS_PA6T | CPU_FTR_VSX | CPU_FTRS_POWER9 | \
>> -     CPU_FTRS_POWER9_DD1 | CPU_FTRS_POWER9_DD2_1 | \
>> -     CPU_FTRS_POWER9_DD2_2)
>> +     CPU_FTRS_POWER9_DD2_1 | CPU_FTRS_POWER9_DD2_2)
>>  #endif
>>  #else
>>  enum {
>> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
>> index b3ec196..da6a25f 100644
>> --- a/arch/powerpc/include/asm/paca.h
>> +++ b/arch/powerpc/include/asm/paca.h
>> @@ -184,11 +184,6 @@ struct paca_struct {
>>   u8 subcore_sibling_mask;
>>   /* Flag to request this thread not to stop */
>>   atomic_t dont_stop;
>> - /*
>> - * Pointer to an array which contains pointer
>> - * to the sibling threads' paca.
>> - */
>> - struct paca_struct **thread_sibling_pacas;
>>   /* The PSSCR value that the kernel requested before going to stop */
>>   u64 requested_psscr;
>>  
>> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
>> index a65c54c..7e1cbc8 100644
>> --- a/arch/powerpc/kernel/asm-offsets.c
>> +++ b/arch/powerpc/kernel/asm-offsets.c
>> @@ -754,7 +754,6 @@ int main(void)
>>   OFFSET(PACA_THREAD_IDLE_STATE, paca_struct, thread_idle_state);
>>   OFFSET(PACA_THREAD_MASK, paca_struct, thread_mask);
>>   OFFSET(PACA_SUBCORE_SIBLING_MASK, paca_struct, subcore_sibling_mask);
>> - OFFSET(PACA_SIBLING_PACA_PTRS, paca_struct, thread_sibling_pacas);
>>   OFFSET(PACA_REQ_PSSCR, paca_struct, requested_psscr);
>>   OFFSET(PACA_DONT_STOP, paca_struct, dont_stop);
>>  #define STOP_SPR(x, f) OFFSET(x, paca_struct, stop_sprs.f)
>> diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
>> index bc2b461..13acd1c 100644
>> --- a/arch/powerpc/kernel/cputable.c
>> +++ b/arch/powerpc/kernel/cputable.c
>> @@ -527,26 +527,6 @@ static struct cpu_spec __initdata cpu_specs[] = {
>>   .machine_check_early = __machine_check_early_realmode_p8,
>>   .platform = "power8",
>>   },
>> - { /* Power9 DD1*/
>> - .pvr_mask = 0xffffff00,
>> - .pvr_value = 0x004e0100,
>> - .cpu_name = "POWER9 (raw)",
>> - .cpu_features = CPU_FTRS_POWER9_DD1,
>> - .cpu_user_features = COMMON_USER_POWER9,
>> - .cpu_user_features2 = COMMON_USER2_POWER9,
>> - .mmu_features = MMU_FTRS_POWER9,
>> - .icache_bsize = 128,
>> - .dcache_bsize = 128,
>> - .num_pmcs = 6,
>> - .pmc_type = PPC_PMC_IBM,
>> - .oprofile_cpu_type = "ppc64/power9",
>> - .oprofile_type = PPC_OPROFILE_INVALID,
>> - .cpu_setup = __setup_cpu_power9,
>> - .cpu_restore = __restore_cpu_power9,
>> - .flush_tlb = __flush_tlb_power9,
>> - .machine_check_early = __machine_check_early_realmode_p9,
>> - .platform = "power9",
>> - },
>>   { /* Power9 DD2.0 */
>>   .pvr_mask = 0xffffefff,
>>   .pvr_value = 0x004e0200,
>> diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
>> index fa7f063..350ea04 100644
>> --- a/arch/powerpc/kernel/dt_cpu_ftrs.c
>> +++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
>> @@ -741,13 +741,16 @@ static __init void cpufeatures_cpu_quirks(void)
>>   /*
>>   * Not all quirks can be derived from the cpufeatures device tree.
>>   */
>> - if ((version & 0xffffff00) == 0x004e0100)
>> - cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD1;
>> + if ((version & 0xffffefff) == 0x004e0200)
>> + ; /* DD2.0 has no feature flag */
>>   else if ((version & 0xffffefff) == 0x004e0201)
>>   cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
>> - else if ((version & 0xffffefff) == 0x004e0202)
>> - cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST |
>> - CPU_FTR_P9_TM_XER_SO_BUG;
>> + else if ((version & 0xffffefff) == 0x004e0202) {
>> + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_HV_ASSIST;
>> + cur_cpu_spec->cpu_features |= CPU_FTR_P9_TM_XER_SO_BUG;
>> + cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
>> + } else /* DD2.1 and up have DD2_1 */
>> + cur_cpu_spec->cpu_features |= CPU_FTR_POWER9_DD2_1;
>>  
>>   if ((version & 0xffff0000) == 0x004e0000) {
>>   cur_cpu_spec->cpu_features |= CPU_FTR_P9_TLBIE_BUG;
>> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
>> index 59f5cfa..724bd35 100644
>> --- a/arch/powerpc/kernel/exceptions-64s.S
>> +++ b/arch/powerpc/kernel/exceptions-64s.S
>> @@ -276,9 +276,7 @@ BEGIN_FTR_SECTION
>>   *
>>   * This interrupt can wake directly from idle. If that is the case,
>>   * the machine check is handled then the idle wakeup code is called
>> - * to restore state. In that case, the POWER9 DD1 idle PACA workaround
>> - * is not applied in the early machine check code, which will cause
>> - * bugs.
>> + * to restore state.
>>   */
>>   mr r11,r1 /* Save r1 */
>>   lhz r10,PACA_IN_MCE(r13)
>> diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
>> index f3ac31c..49439fc 100644
>> --- a/arch/powerpc/kernel/idle_book3s.S
>> +++ b/arch/powerpc/kernel/idle_book3s.S
>> @@ -455,43 +455,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG)
>>   blr /* return 0 for wakeup cause / SRR1 value */
>>  
>>  /*
>> - * On waking up from stop 0,1,2 with ESL=1 on POWER9 DD1,
>> - * HSPRG0 will be set to the HSPRG0 value of one of the
>> - * threads in this core. Thus the value we have in r13
>> - * may not be this thread's paca pointer.
>> - *
>> - * Fortunately, the TIR remains invariant. Since this thread's
>> - * paca pointer is recorded in all its sibling's paca, we can
>> - * correctly recover this thread's paca pointer if we
>> - * know the index of this thread in the core.
>> - *
>> - * This index can be obtained from the TIR.
>> - *
>> - * i.e, thread's position in the core = TIR.
>> - * If this value is i, then this thread's paca is
>> - * paca->thread_sibling_pacas[i].
>> - */
>> -power9_dd1_recover_paca:
>> - mfspr r4, SPRN_TIR
>> - /*
>> - * Since each entry in thread_sibling_pacas is 8 bytes
>> - * we need to left-shift by 3 bits. Thus r4 = i * 8
>> - */
>> - sldi r4, r4, 3
>> - /* Get &paca->thread_sibling_pacas[0] in r5 */
>> - ld r5, PACA_SIBLING_PACA_PTRS(r13)
>> - /* Load paca->thread_sibling_pacas[i] into r13 */
>> - ldx r13, r4, r5
>> - SET_PACA(r13)
>> - /*
>> - * Indicate that we have lost NVGPR state
>> - * which needs to be restored from the stack.
>> - */
>> - li r3, 1
>> - stb r3,PACA_NAPSTATELOST(r13)
>> - blr
>> -
>> -/*
>>   * Called from machine check handler for powersave wakeups.
>>   * Low level machine check processing has already been done. Now just
>>   * go through the wake up path to get everything in order.
>> @@ -525,9 +488,6 @@ pnv_powersave_wakeup:
>>   ld r2, PACATOC(r13)
>>  
>>  BEGIN_FTR_SECTION
>> -BEGIN_FTR_SECTION_NESTED(70)
>> - bl power9_dd1_recover_paca
>> -END_FTR_SECTION_NESTED_IFSET(CPU_FTR_POWER9_DD1, 70)
>>   bl pnv_restore_hyp_resource_arch300
>>  FTR_SECTION_ELSE
>>   bl pnv_restore_hyp_resource_arch207
>> @@ -587,22 +547,12 @@ END_FTR_SECTION_IFCLR(CPU_FTR_POWER9_DD2_1)
>>   LOAD_REG_ADDRBASE(r5,pnv_first_deep_stop_state)
>>   ld r4,ADDROFF(pnv_first_deep_stop_state)(r5)
>>  
>> -BEGIN_FTR_SECTION_NESTED(71)
>> - /*
>> - * Assume that we are waking up from the state
>> - * same as the Requested Level (RL) in the PSSCR
>> - * which are Bits 60-63
>> - */
>> - ld r5,PACA_REQ_PSSCR(r13)
>> - rldicl  r5,r5,0,60
>> -FTR_SECTION_ELSE_NESTED(71)
>>   /*
>>   * 0-3 bits correspond to Power-Saving Level Status
>>   * which indicates the idle state we are waking up from
>>   */
>>   mfspr r5, SPRN_PSSCR
>>   rldicl  r5,r5,4,60
>> -ALT_FTR_SECTION_END_NESTED_IFSET(CPU_FTR_POWER9_DD1, 71)
>>   li r0, 0 /* clear requested_psscr to say we're awake */
>>   std r0, PACA_REQ_PSSCR(r13)
>>   cmpd cr4,r5,r4
>> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
>> index 83478a9..e73a80d 100644
>> --- a/arch/powerpc/kernel/process.c
>> +++ b/arch/powerpc/kernel/process.c
>> @@ -1247,17 +1247,9 @@ struct task_struct *__switch_to(struct task_struct *prev,
>>   * mappings. If the new process has the foreign real address
>>   * mappings, we must issue a cp_abort to clear any state and
>>   * prevent snooping, corruption or a covert channel.
>> - *
>> - * DD1 allows paste into normal system memory so we do an
>> - * unpaired copy, rather than cp_abort, to clear the buffer,
>> - * since cp_abort is quite expensive.
>>   */
>> - if (current_thread_info()->task->thread.used_vas) {
>> + if (current_thread_info()->task->thread.used_vas)
>>   asm volatile(PPC_CP_ABORT);
>> - } else if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - asm volatile(PPC_COPY(%0, %1)
>> - : : "r"(dummy_copy_buffer), "r"(0));
>> - }
>>   }
>>  #endif /* CONFIG_PPC_BOOK3S_64 */
>>  
>> diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> index 5d9bafe..dd8980f 100644
>> --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
>> @@ -66,10 +66,7 @@ int kvmppc_mmu_radix_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
>>   bits = root & RPDS_MASK;
>>   root = root & RPDB_MASK;
>>  
>> - /* P9 DD1 interprets RTS (radix tree size) differently */
>>   offset = rts + 31;
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - offset -= 3;
>>  
>>   /* current implementations only support 52-bit space */
>>   if (offset != 52)
>> @@ -167,17 +164,7 @@ unsigned long kvmppc_radix_update_pte(struct kvm *kvm, pte_t *ptep,
>>        unsigned long clr, unsigned long set,
>>        unsigned long addr, unsigned int shift)
>>  {
>> - unsigned long old = 0;
>> -
>> - if (!(clr & _PAGE_PRESENT) && cpu_has_feature(CPU_FTR_POWER9_DD1) &&
>> -    pte_present(*ptep)) {
>> - /* have to invalidate it first */
>> - old = __radix_pte_update(ptep, _PAGE_PRESENT, 0);
>> - kvmppc_radix_tlbie_page(kvm, addr, shift);
>> - set |= _PAGE_PRESENT;
>> - old &= _PAGE_PRESENT;
>> - }
>> - return __radix_pte_update(ptep, clr, set) | old;
>> + return __radix_pte_update(ptep, clr, set);
>>  }
>>  
>>  void kvmppc_radix_set_pte_at(struct kvm *kvm, unsigned long addr,
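
[Annotating for other reviewers rather than objecting: the DD1
invalidate-first dance can go because __radix_pte_update() already
applies clr/set as a single atomic read-modify-write (an ldarx/stdcx.
loop on the PTE) and returns the prior value. A minimal C sketch of
that contract, using C11 atomics in place of the real inline asm:

    #include <stdatomic.h>

    /* Illustrative only -- models __radix_pte_update() behaviour:
     * atomically clear the bits in 'clr', set the bits in 'set',
     * and hand back the previous PTE value. */
    static unsigned long pte_update_sketch(_Atomic unsigned long *ptep,
                                           unsigned long clr,
                                           unsigned long set)
    {
            unsigned long old = atomic_load(ptep);

            /* retry until the update lands on an unmodified PTE */
            while (!atomic_compare_exchange_weak(ptep, &old,
                                                 (old & ~clr) | set))
                    ;
            return old;
    }

Callers such as kvmppc_radix_update_pte() rely on getting the
pre-update bits back, e.g. to decide whether a TLB flush is needed.]
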
>> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
>> index dc9eb6b..51278f8 100644
>> --- a/arch/powerpc/kvm/book3s_hv.c
>> +++ b/arch/powerpc/kvm/book3s_hv.c
>> @@ -1662,14 +1662,6 @@ static int kvmppc_set_one_reg_hv(struct kvm_vcpu *vcpu, u64 id,
>>   r = set_vpa(vcpu, &vcpu->arch.dtl, addr, len);
>>   break;
>>   case KVM_REG_PPC_TB_OFFSET:
>> - /*
>> - * POWER9 DD1 has an erratum where writing TBU40 causes
>> - * the timebase to lose ticks.  So we don't let the
>> - * timebase offset be changed on P9 DD1.  (It is
>> - * initialized to zero.)
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - break;
>>   /* round up to multiple of 2^24 */
>>   vcpu->arch.vcore->tb_offset =
>>   ALIGN(set_reg_val(id, *val), 1UL << 24);
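
[Side note on the arithmetic: ALIGN() rounds up to the next multiple
of its power-of-two second argument, so the timebase offset is snapped
to a 2^24-tick boundary. A quick worked example with a made-up value:

    unsigned long a = 1UL << 24;               /* 0x1000000 */
    unsigned long x = 0x1234567;               /* hypothetical offset */
    unsigned long r = (x + a - 1) & ~(a - 1);  /* r == 0x2000000 */

which is what ALIGN(x, 1UL << 24) evaluates to.]
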
>> @@ -1987,8 +1979,6 @@ static struct kvm_vcpu *kvmppc_core_vcpu_create_hv(struct kvm *kvm,
>>   /*
>>   * Set the default HFSCR for the guest from the host value.
>>   * This value is only used on POWER9.
>> - * On POWER9 DD1, TM doesn't work, so we make sure to
>> - * prevent the guest from using it.
>>   * On POWER9, we want to virtualize the doorbell facility, so we
>>   * turn off the HFSCR bit, which causes those instructions to trap.
>>   */
>> diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> index 293a659..1c35836 100644
>> --- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> +++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
>> @@ -907,9 +907,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
>>   mtspr SPRN_PID, r7
>>   mtspr SPRN_WORT, r8
>>  BEGIN_FTR_SECTION
>> - PPC_INVALIDATE_ERAT
>> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
>> -BEGIN_FTR_SECTION
>>   /* POWER8-only registers */
>>   ld r5, VCPU_TCSCR(r4)
>>   ld r6, VCPU_ACOP(r4)
>> @@ -1849,7 +1846,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
>>   ld r5, VCPU_KVM(r9)
>>   lbz r0, KVM_RADIX(r5)
>>   cmpwi cr2, r0, 0
>> - beq cr2, 4f
>> + beq cr2, 2f
>>  
>>   /* Radix: Handle the case where the guest used an illegal PID */
>>   LOAD_REG_ADDR(r4, mmu_base_pid)
>> @@ -1881,11 +1878,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
>>   bdnz 1b
>>   ptesync
>>  
>> -2: /* Flush the ERAT on radix P9 DD1 guest exit */
>> -BEGIN_FTR_SECTION
>> - PPC_INVALIDATE_ERAT
>> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
>> -4:
>> +2:
>>  #endif /* CONFIG_PPC_RADIX_MMU */
>>  
>>   /*
>> @@ -3432,11 +3425,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300)
>>   mtspr SPRN_CIABR, r0
>>   mtspr SPRN_DAWRX, r0
>>  
>> - /* Flush the ERAT on radix P9 DD1 guest exit */
>> -BEGIN_FTR_SECTION
>> - PPC_INVALIDATE_ERAT
>> -END_FTR_SECTION_IFSET(CPU_FTR_POWER9_DD1)
>> -
>>  BEGIN_MMU_FTR_SECTION
>>   b 4f
>>  END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
>> diff --git a/arch/powerpc/kvm/book3s_xive_template.c b/arch/powerpc/kvm/book3s_xive_template.c
>> index c7a5dea..3191961 100644
>> --- a/arch/powerpc/kvm/book3s_xive_template.c
>> +++ b/arch/powerpc/kvm/book3s_xive_template.c
>> @@ -22,18 +22,6 @@ static void GLUE(X_PFX,ack_pending)(struct kvmppc_xive_vcpu *xc)
>>   */
>>   eieio();
>>  
>> - /*
>> - * DD1 bug workaround: If PIPR is less favored than CPPR
>> - * ignore the interrupt or we might incorrectly lose an IPB
>> - * bit.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - __be64 qw1 = __x_readq(__x_tima + TM_QW1_OS);
>> - u8 pipr = be64_to_cpu(qw1) & 0xff;
>> - if (pipr >= xc->hw_cppr)
>> - return;
>> - }
>> -
>>   /* Perform the acknowledge OS to register cycle. */
>>   ack = be16_to_cpu(__x_readw(__x_tima + TM_SPC_ACK_OS_REG));
>>  
>> @@ -86,8 +74,15 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
>>   /* If the XIVE supports the new "store EOI" facility, use it */
>>   if (xd->flags & XIVE_IRQ_FLAG_STORE_EOI)
>>   __x_writeq(0, __x_eoi_page(xd) + XIVE_ESB_STORE_EOI);
>> - else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW) {
>> + else if (hw_irq && xd->flags & XIVE_IRQ_FLAG_EOI_FW)
>>   opal_int_eoi(hw_irq);
>> + else if (xd->flags & XIVE_IRQ_FLAG_LSI) {
>> + /*
>> + * For LSIs the HW EOI cycle is used rather than PQ bits,
>> + * as they are automatically re-triggered in HW when still
>> + * pending.
>> + */
>> + __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
>>   } else {
>>   uint64_t eoi_val;
>>  
>> @@ -99,20 +94,12 @@ static void GLUE(X_PFX,source_eoi)(u32 hw_irq, struct xive_irq_data *xd)
>>   *
>>   * This allows us to then do a re-trigger if Q was set
>>   * rather than synthesizing an interrupt in software
>> - *
>> - * For LSIs, using the HW EOI cycle works around a problem
>> - * on P9 DD1 PHBs where the other ESB accesses don't work
>> - * properly.
>>   */
>> - if (xd->flags & XIVE_IRQ_FLAG_LSI)
>> - __x_readq(__x_eoi_page(xd) + XIVE_ESB_LOAD_EOI);
>> - else {
>> - eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
>> -
>> - /* Re-trigger if needed */
>> - if ((eoi_val & 1) && __x_trig_page(xd))
>> - __x_writeq(0, __x_trig_page(xd));
>> - }
>> + eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
>> +
>> + /* Re-trigger if needed */
>> + if ((eoi_val & 1) && __x_trig_page(xd))
>> + __x_writeq(0, __x_trig_page(xd));
>>   }
>>  }
>>  
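
[For anyone following the PQ logic that survives this cleanup: a load
from the "set PQ to 00" ESB page atomically re-enables the source and
returns the previous P/Q state in the low two bits, so a set Q bit
means an event arrived while the source was masked and must be
re-fired. Sketch of the check, with the bit names as I recall them
from asm/xive-regs.h:

    #define XIVE_ESB_VAL_P  0x2     /* previous P (pending) bit */
    #define XIVE_ESB_VAL_Q  0x1     /* previous Q (queued) bit  */

    eoi_val = GLUE(X_PFX,esb_load)(xd, XIVE_ESB_SET_PQ_00);
    if ((eoi_val & XIVE_ESB_VAL_Q) && __x_trig_page(xd))
            __x_writeq(0, __x_trig_page(xd));  /* replay the lost event */

i.e. exactly the open-coded '& 1' test in the hunk above.]
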
>> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
>> index db84680..06574b4 100644
>> --- a/arch/powerpc/mm/hash_utils_64.c
>> +++ b/arch/powerpc/mm/hash_utils_64.c
>> @@ -802,31 +802,6 @@ int hash__remove_section_mapping(unsigned long start, unsigned long end)
>>  }
>>  #endif /* CONFIG_MEMORY_HOTPLUG */
>>  
>> -static void update_hid_for_hash(void)
>> -{
>> - unsigned long hid0;
>> - unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
>> -
>> - asm volatile("ptesync": : :"memory");
>> - /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
>> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
>> -     : : "r"(rb), "i"(0), "i"(0), "i"(2), "r"(0) : "memory");
>> - asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
>> - trace_tlbie(0, 0, rb, 0, 2, 0, 0);
>> -
>> - /*
>> - * now switch the HID
>> - */
>> - hid0  = mfspr(SPRN_HID0);
>> - hid0 &= ~HID0_POWER9_RADIX;
>> - mtspr(SPRN_HID0, hid0);
>> - asm volatile("isync": : :"memory");
>> -
>> - /* Wait for it to happen */
>> - while ((mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
>> - cpu_relax();
>> -}
>> -
>>  static void __init hash_init_partition_table(phys_addr_t hash_table,
>>       unsigned long htab_size)
>>  {
>> @@ -839,8 +814,6 @@ static void __init hash_init_partition_table(phys_addr_t hash_table,
>>   htab_size =  __ilog2(htab_size) - 18;
>>   mmu_partition_table_set_entry(0, hash_table | htab_size, 0);
>>   pr_info("Partition table %p\n", partition_tb);
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_hash();
>>  }
>>  
>>  static void __init htab_initialize(void)
>> @@ -1063,9 +1036,6 @@ void hash__early_init_mmu_secondary(void)
>>   /* Initialize hash table for that CPU */
>>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>>  
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_hash();
>> -
>>   if (!cpu_has_feature(CPU_FTR_ARCH_300))
>>   mtspr(SPRN_SDR1, _SDR1);
>>   else
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index 79e1378..6f7b831 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -609,15 +609,12 @@ static int __init add_huge_page_size(unsigned long long size)
>>   * firmware we only add hugetlb support for page sizes that can be
>>   * supported by linux page table layout.
>>   * For now we have
>> - * Radix: 2M
>> + * Radix: 2M and 1G
>>   * Hash: 16M and 16G
>>   */
>>   if (radix_enabled()) {
>> - if (mmu_psize != MMU_PAGE_2M) {
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
>> -    (mmu_psize != MMU_PAGE_1G))
>> - return -EINVAL;
>> - }
>> + if (mmu_psize != MMU_PAGE_2M && mmu_psize != MMU_PAGE_1G)
>> + return -EINVAL;
>>   } else {
>>   if (mmu_psize != MMU_PAGE_16M && mmu_psize != MMU_PAGE_16G)
>>   return -EINVAL;
>> diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
>> index 5066276..208f687 100644
>> --- a/arch/powerpc/mm/mmu_context_book3s64.c
>> +++ b/arch/powerpc/mm/mmu_context_book3s64.c
>> @@ -250,15 +250,7 @@ void arch_exit_mmap(struct mm_struct *mm)
>>  #ifdef CONFIG_PPC_RADIX_MMU
>>  void radix__switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
>>  {
>> -
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - isync();
>> - mtspr(SPRN_PID, next->context.id);
>> - isync();
>> - asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
>> - } else {
>> - mtspr(SPRN_PID, next->context.id);
>> - isync();
>> - }
>> + mtspr(SPRN_PID, next->context.id);
>> + isync();
>>  }
>>  #endif
>> diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
>> index a778560..704362d 100644
>> --- a/arch/powerpc/mm/pgtable-radix.c
>> +++ b/arch/powerpc/mm/pgtable-radix.c
>> @@ -171,16 +171,6 @@ void radix__mark_rodata_ro(void)
>>  {
>>   unsigned long start, end;
>>  
>> - /*
>> - * mark_rodata_ro() will mark itself as !writable at some point.
>> - * Due to DD1 workaround in radix__pte_update(), we'll end up with
>> - * an invalid pte and the system will crash quite severly.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - pr_warn("Warning: Unable to mark rodata read only on P9 DD1\n");
>> - return;
>> - }
>> -
>>   start = (unsigned long)_stext;
>>   end = (unsigned long)__init_begin;
>>  
>> @@ -470,35 +460,6 @@ void __init radix__early_init_devtree(void)
>>   return;
>>  }
>>  
>> -static void update_hid_for_radix(void)
>> -{
>> - unsigned long hid0;
>> - unsigned long rb = 3UL << PPC_BITLSHIFT(53); /* IS = 3 */
>> -
>> - asm volatile("ptesync": : :"memory");
>> - /* prs = 0, ric = 2, rs = 0, r = 1 is = 3 */
>> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
>> -     : : "r"(rb), "i"(1), "i"(0), "i"(2), "r"(0) : "memory");
>> - /* prs = 1, ric = 2, rs = 0, r = 1 is = 3 */
>> - asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
>> -     : : "r"(rb), "i"(1), "i"(1), "i"(2), "r"(0) : "memory");
>> - asm volatile("eieio; tlbsync; ptesync; isync; slbia": : :"memory");
>> - trace_tlbie(0, 0, rb, 0, 2, 0, 1);
>> - trace_tlbie(0, 0, rb, 0, 2, 1, 1);
>> -
>> - /*
>> - * now switch the HID
>> - */
>> - hid0  = mfspr(SPRN_HID0);
>> - hid0 |= HID0_POWER9_RADIX;
>> - mtspr(SPRN_HID0, hid0);
>> - asm volatile("isync": : :"memory");
>> -
>> - /* Wait for it to happen */
>> - while (!(mfspr(SPRN_HID0) & HID0_POWER9_RADIX))
>> - cpu_relax();
>> -}
>> -
>>  static void radix_init_amor(void)
>>  {
>>   /*
>> @@ -513,22 +474,12 @@ static void radix_init_amor(void)
>>  
>>  static void radix_init_iamr(void)
>>  {
>> - unsigned long iamr;
>> -
>> - /*
>> - * The IAMR should set to 0 on DD1.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - iamr = 0;
>> - else
>> - iamr = (1ul << 62);
>> -
>>   /*
>>   * Radix always uses key0 of the IAMR to determine if an access is
>>   * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
>>   * fetch.
>>   */
>> - mtspr(SPRN_IAMR, iamr);
>> + mtspr(SPRN_IAMR, (1ul << 62));
>>  }
>>  
>>  void __init radix__early_init_mmu(void)
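
[On the bare (1ul << 62): IBM numbering counts from the most
significant bit, so IBM bit n of a 64-bit SPR is 1UL << (63 - n), and
key 0's instruction-fetch bit, IBM bit 1, comes out as 1UL << 62.
Illustrative helper (the kernel's PPC_BIT() macro encodes the same
convention; IBM_BIT here is a hypothetical name):

    #define IBM_BIT(n)  (1UL << (63 - (n)))

    unsigned long iamr = IBM_BIT(1);  /* == 1UL << 62: no ifetch, key 0 */
]
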
>> @@ -583,8 +534,6 @@ void __init radix__early_init_mmu(void)
>>  
>>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>>   radix_init_native();
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_radix();
>>   lpcr = mfspr(SPRN_LPCR);
>>   mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
>>   radix_init_partition_table();
>> @@ -608,10 +557,6 @@ void radix__early_init_mmu_secondary(void)
>>   * update partition table control register and UPRT
>>   */
>>   if (!firmware_has_feature(FW_FEATURE_LPAR)) {
>> -
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - update_hid_for_radix();
>> -
>>   lpcr = mfspr(SPRN_LPCR);
>>   mtspr(SPRN_LPCR, lpcr | LPCR_UPRT | LPCR_HR);
>>  
>> @@ -1029,8 +974,7 @@ void radix__ptep_set_access_flags(struct vm_area_struct *vma, pte_t *ptep,
>>   * To avoid NMMU hang while relaxing access, we need mark
>>   * the pte invalid in between.
>>   */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) ||
>> -    atomic_read(&mm->context.copros) > 0) {
>> + if (atomic_read(&mm->context.copros) > 0) {
>>   unsigned long old_pte, new_pte;
>>  
>>   old_pte = __radix_pte_update(ptep, ~0, 0);
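
[This is the hunk the rest of the series builds on. With a coprocessor
context attached, the nest MMU cannot tolerate the PTE changing
underneath it, so the update is staged as tear-down, flush, re-install.
The hunk is truncated above; the full sequence as I read it, simplified
(and note patches 2 and 3 of this series rework exactly this window so
a concurrent fault does not observe a vanished PTE):

    if (atomic_read(&mm->context.copros) > 0) {
            unsigned long old_pte, new_pte;

            old_pte = __radix_pte_update(ptep, ~0ul, 0);      /* clear   */
            new_pte = old_pte | set;
            radix__flush_tlb_page_psize(mm, address, psize);  /* purge   */
            __radix_pte_update(ptep, 0, new_pte);             /* install */
    } else {
            __radix_pte_update(ptep, 0, set);  /* plain in-place relax */
    }
]
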
>> diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
>> index c07c2f0..b0cad4f 100644
>> --- a/arch/powerpc/mm/tlb-radix.c
>> +++ b/arch/powerpc/mm/tlb-radix.c
>> @@ -658,24 +658,6 @@ void radix__flush_tlb_all(void)
>>   asm volatile("eieio; tlbsync; ptesync": : :"memory");
>>  }
>>  
>> -void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
>> - unsigned long address)
>> -{
>> - /*
>> - * We track page size in pte only for DD1, So we can
>> - * call this only on DD1.
>> - */
>> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - VM_WARN_ON(1);
>> - return;
>> - }
>> -
>> - if (old_pte & R_PAGE_LARGE)
>> - radix__flush_tlb_page_psize(mm, address, MMU_PAGE_2M);
>> - else
>> - radix__flush_tlb_page_psize(mm, address, mmu_virtual_psize);
>> -}
>> -
>>  #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
>>  extern void radix_kvm_prefetch_workaround(struct mm_struct *mm)
>>  {
>> diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
>> index b7a6044..8ce6673 100644
>> --- a/arch/powerpc/perf/core-book3s.c
>> +++ b/arch/powerpc/perf/core-book3s.c
>> @@ -128,10 +128,6 @@ static inline void power_pmu_bhrb_disable(struct perf_event *event) {}
>>  static void power_pmu_sched_task(struct perf_event_context *ctx, bool sched_in) {}
>>  static inline void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw) {}
>>  static void pmao_restore_workaround(bool ebb) { }
>> -static bool use_ic(u64 event)
>> -{
>> - return false;
>> -}
>>  #endif /* CONFIG_PPC32 */
>>  
>>  static bool regs_use_siar(struct pt_regs *regs)
>> @@ -710,14 +706,6 @@ static void pmao_restore_workaround(bool ebb)
>>   mtspr(SPRN_PMC6, pmcs[5]);
>>  }
>>  
>> -static bool use_ic(u64 event)
>> -{
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1) &&
>> - (event == 0x200f2 || event == 0x300f2))
>> - return true;
>> -
>> - return false;
>> -}
>>  #endif /* CONFIG_PPC64 */
>>  
>>  static void perf_event_interrupt(struct pt_regs *regs);
>> @@ -1042,7 +1030,6 @@ static u64 check_and_compute_delta(u64 prev, u64 val)
>>  static void power_pmu_read(struct perf_event *event)
>>  {
>>   s64 val, delta, prev;
>> - struct cpu_hw_events *cpuhw = this_cpu_ptr(&cpu_hw_events);
>>  
>>   if (event->hw.state & PERF_HES_STOPPED)
>>   return;
>> @@ -1052,13 +1039,6 @@ static void power_pmu_read(struct perf_event *event)
>>  
>>   if (is_ebb_event(event)) {
>>   val = read_pmc(event->hw.idx);
>> - if (use_ic(event->attr.config)) {
>> - val = mfspr(SPRN_IC);
>> - if (val > cpuhw->ic_init)
>> - val = val - cpuhw->ic_init;
>> - else
>> - val = val + (0 - cpuhw->ic_init);
>> - }
>>   local64_set(&event->hw.prev_count, val);
>>   return;
>>   }
>> @@ -1072,13 +1052,6 @@ static void power_pmu_read(struct perf_event *event)
>>   prev = local64_read(&event->hw.prev_count);
>>   barrier();
>>   val = read_pmc(event->hw.idx);
>> - if (use_ic(event->attr.config)) {
>> - val = mfspr(SPRN_IC);
>> - if (val > cpuhw->ic_init)
>> - val = val - cpuhw->ic_init;
>> - else
>> - val = val + (0 - cpuhw->ic_init);
>> - }
>>   delta = check_and_compute_delta(prev, val);
>>   if (!delta)
>>   return;
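
[Unrelated to the removal itself, but for anyone auditing what remains
of the read path: the PMCs are 32-bit counters, so the surviving
check_and_compute_delta() takes the difference modulo 2^32, roughly:

    u64 delta = (val - prev) & 0xffffffffUL;  /* wraparound-safe */

The SPRN_IC workaround substituted its own 64-bit count for val before
this point, which is why it needed the extra ic_init bookkeeping that
can now go.]
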
>> @@ -1531,13 +1504,6 @@ static int power_pmu_add(struct perf_event *event, int ef_flags)
>>   event->attr.branch_sample_type);
>>   }
>>  
>> - /*
>> - * Workaround for POWER9 DD1 to use the Instruction Counter
>> - * register value for instruction counting
>> - */
>> - if (use_ic(event->attr.config))
>> - cpuhw->ic_init = mfspr(SPRN_IC);
>> -
>>   perf_pmu_enable(event->pmu);
>>   local_irq_restore(flags);
>>   return ret;
>> diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
>> index 2efee3f..177de81 100644
>> --- a/arch/powerpc/perf/isa207-common.c
>> +++ b/arch/powerpc/perf/isa207-common.c
>> @@ -59,7 +59,7 @@ static bool is_event_valid(u64 event)
>>  {
>>   u64 valid_mask = EVENT_VALID_MASK;
>>  
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   valid_mask = p9_EVENT_VALID_MASK;
>>  
>>   return !(event & ~valid_mask);
>> @@ -86,8 +86,6 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>>   * In case of Power9:
>>   * Marked event: MMCRA[SDAR_MODE] will be set to 0b00 ('No Updates'),
>>   *               or if the group already has any marked events.
>> - * Non-Marked events (for DD1):
>> - * MMCRA[SDAR_MODE] will be set to 0b01
>>   * For rest
>>   * MMCRA[SDAR_MODE] will be set from event code.
>>   *      If sdar_mode from event is zero, default to 0b01. Hardware
>> @@ -96,7 +94,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>>   if (cpu_has_feature(CPU_FTR_ARCH_300)) {
>>   if (is_event_marked(event) || (*mmcra & MMCRA_SAMPLE_ENABLE))
>>   *mmcra &= MMCRA_SDAR_MODE_NO_UPDATES;
>> - else if (!cpu_has_feature(CPU_FTR_POWER9_DD1) && p9_SDAR_MODE(event))
>> + else if (p9_SDAR_MODE(event))
>>   *mmcra |=  p9_SDAR_MODE(event) << MMCRA_SDAR_MODE_SHIFT;
>>   else
>>   *mmcra |= MMCRA_SDAR_MODE_DCACHE;
>> @@ -106,7 +104,7 @@ static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
>>  
>>  static u64 thresh_cmp_val(u64 value)
>>  {
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   return value << p9_MMCRA_THR_CMP_SHIFT;
>>  
>>   return value << MMCRA_THR_CMP_SHIFT;
>> @@ -114,7 +112,7 @@ static u64 thresh_cmp_val(u64 value)
>>  
>>  static unsigned long combine_from_event(u64 event)
>>  {
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   return p9_EVENT_COMBINE(event);
>>  
>>   return EVENT_COMBINE(event);
>> @@ -122,7 +120,7 @@ static unsigned long combine_from_event(u64 event)
>>  
>>  static unsigned long combine_shift(unsigned long pmc)
>>  {
>> - if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>>   return p9_MMCR1_COMBINE_SHIFT(pmc);
>>  
>>   return MMCR1_COMBINE_SHIFT(pmc);
>> diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
>> index 6c737d6..479dec2 100644
>> --- a/arch/powerpc/perf/isa207-common.h
>> +++ b/arch/powerpc/perf/isa207-common.h
>> @@ -222,11 +222,6 @@
>>   CNST_PMC_VAL(1) | CNST_PMC_VAL(2) | CNST_PMC_VAL(3) | \
>>   CNST_PMC_VAL(4) | CNST_PMC_VAL(5) | CNST_PMC_VAL(6) | CNST_NC_VAL
>>  
>> -/*
>> - * Lets restrict use of PMC5 for instruction counting.
>> - */
>> -#define P9_DD1_TEST_ADDER (ISA207_TEST_ADDER | CNST_PMC_VAL(5))
>> -
>>  /* Bits in MMCR1 for PowerISA v2.07 */
>>  #define MMCR1_UNIT_SHIFT(pmc) (60 - (4 * ((pmc) - 1)))
>>  #define MMCR1_COMBINE_SHIFT(pmc) (35 - ((pmc) - 1))
>> diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
>> index 24b5b5b..3d055c8 100644
>> --- a/arch/powerpc/perf/power9-pmu.c
>> +++ b/arch/powerpc/perf/power9-pmu.c
>> @@ -183,12 +183,6 @@ static struct attribute_group power9_pmu_events_group = {
>>   .attrs = power9_events_attr,
>>  };
>>  
>> -static const struct attribute_group *power9_isa207_pmu_attr_groups[] = {
>> - &isa207_pmu_format_group,
>> - &power9_pmu_events_group,
>> - NULL,
>> -};
>> -
>>  PMU_FORMAT_ATTR(event, "config:0-51");
>>  PMU_FORMAT_ATTR(pmcxsel, "config:0-7");
>>  PMU_FORMAT_ATTR(mark, "config:8");
>> @@ -231,17 +225,6 @@ static const struct attribute_group *power9_pmu_attr_groups[] = {
>>   NULL,
>>  };
>>  
>> -static int power9_generic_events_dd1[] = {
>> - [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
>> - [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
>> - [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = PM_CMPLU_STALL,
>> - [PERF_COUNT_HW_INSTRUCTIONS] = PM_INST_DISP,
>> - [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = PM_BR_CMPL_ALT,
>> - [PERF_COUNT_HW_BRANCH_MISSES] = PM_BR_MPRED_CMPL,
>> - [PERF_COUNT_HW_CACHE_REFERENCES] = PM_LD_REF_L1,
>> - [PERF_COUNT_HW_CACHE_MISSES] = PM_LD_MISS_L1_FIN,
>> -};
>> -
>>  static int power9_generic_events[] = {
>>   [PERF_COUNT_HW_CPU_CYCLES] = PM_CYC,
>>   [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = PM_ICT_NOSLOT_CYC,
>> @@ -403,25 +386,6 @@ static int power9_cache_events[C(MAX)][C(OP_MAX)][C(RESULT_MAX)] = {
>>  
>>  #undef C
>>  
>> -static struct power_pmu power9_isa207_pmu = {
>> - .name = "POWER9",
>> - .n_counter = MAX_PMU_COUNTERS,
>> - .add_fields = ISA207_ADD_FIELDS,
>> - .test_adder = P9_DD1_TEST_ADDER,
>> - .compute_mmcr = isa207_compute_mmcr,
>> - .config_bhrb = power9_config_bhrb,
>> - .bhrb_filter_map = power9_bhrb_filter_map,
>> - .get_constraint = isa207_get_constraint,
>> - .get_alternatives = power9_get_alternatives,
>> - .disable_pmc = isa207_disable_pmc,
>> - .flags = PPMU_NO_SIAR | PPMU_ARCH_207S,
>> - .n_generic = ARRAY_SIZE(power9_generic_events_dd1),
>> - .generic_events = power9_generic_events_dd1,
>> - .cache_events = &power9_cache_events,
>> - .attr_groups = power9_isa207_pmu_attr_groups,
>> - .bhrb_nr = 32,
>> -};
>> -
>>  static struct power_pmu power9_pmu = {
>>   .name = "POWER9",
>>   .n_counter = MAX_PMU_COUNTERS,
>> @@ -452,23 +416,7 @@ static int __init init_power9_pmu(void)
>>      strcmp(cur_cpu_spec->oprofile_cpu_type, "ppc64/power9"))
>>   return -ENODEV;
>>  
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - /*
>> - * Since PM_INST_CMPL may not provide right counts in all
>> - * sampling scenarios in power9 DD1, instead use PM_INST_DISP.
>> - */
>> - EVENT_VAR(PM_INST_CMPL, _g).id = PM_INST_DISP;
>> - /*
>> - * Power9 DD1 should use PM_BR_CMPL_ALT event code for
>> - * "branches" to provide correct counter value.
>> - */
>> - EVENT_VAR(PM_BR_CMPL, _g).id = PM_BR_CMPL_ALT;
>> - EVENT_VAR(PM_BR_CMPL, _c).id = PM_BR_CMPL_ALT;
>> - rc = register_power_pmu(&power9_isa207_pmu);
>> - } else {
>> - rc = register_power_pmu(&power9_pmu);
>> - }
>> -
>> + rc = register_power_pmu(&power9_pmu);
>>   if (rc)
>>   return rc;
>>  
>> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
>> index 3776a58..113d647 100644
>> --- a/arch/powerpc/platforms/powernv/idle.c
>> +++ b/arch/powerpc/platforms/powernv/idle.c
>> @@ -177,11 +177,6 @@ static void pnv_alloc_idle_core_states(void)
>>   paca[cpu].core_idle_state_ptr = core_idle_state;
>>   paca[cpu].thread_idle_state = PNV_THREAD_RUNNING;
>>   paca[cpu].thread_mask = 1 << j;
>> - if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - continue;
>> - paca[cpu].thread_sibling_pacas =
>> - kmalloc_node(paca_ptr_array_size,
>> -     GFP_KERNEL, node);
>>   }
>>   }
>>  
>> @@ -813,28 +808,6 @@ static int __init pnv_init_idle_states(void)
>>  
>>   pnv_alloc_idle_core_states();
>>  
>> - /*
>> - * For each CPU, record its PACA address in each of it's
>> - * sibling thread's PACA at the slot corresponding to this
>> - * CPU's index in the core.
>> - */
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - int cpu;
>> -
>> - pr_info("powernv: idle: Saving PACA pointers of all CPUs in their thread sibling PACA\n");
>> - for_each_present_cpu(cpu) {
>> - int base_cpu = cpu_first_thread_sibling(cpu);
>> - int idx = cpu_thread_in_core(cpu);
>> - int i;
>> -
>> - for (i = 0; i < threads_per_core; i++) {
>> - int j = base_cpu + i;
>> -
>> - paca[j].thread_sibling_pacas[idx] = &paca[cpu];
>> - }
>> - }
>> - }
>> -
>>   if (supported_cpuidle_states & OPAL_PM_NAP_ENABLED)
>>   ppc_md.power_save = power7_idle;
>>  
>> diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c
>> index 9664c84..f7dec55 100644
>> --- a/arch/powerpc/platforms/powernv/smp.c
>> +++ b/arch/powerpc/platforms/powernv/smp.c
>> @@ -283,23 +283,6 @@ static void pnv_cause_ipi(int cpu)
>>   ic_cause_ipi(cpu);
>>  }
>>  
>> -static void pnv_p9_dd1_cause_ipi(int cpu)
>> -{
>> - int this_cpu = get_cpu();
>> -
>> - /*
>> - * POWER9 DD1 has a global addressed msgsnd, but for now we restrict
>> - * IPIs to same core, because it requires additional synchronization
>> - * for inter-core doorbells which we do not implement.
>> - */
>> - if (cpumask_test_cpu(cpu, cpu_sibling_mask(this_cpu)))
>> - doorbell_global_ipi(cpu);
>> - else
>> - ic_cause_ipi(cpu);
>> -
>> - put_cpu();
>> -}
>> -
>>  static void __init pnv_smp_probe(void)
>>  {
>>   if (xive_enabled())
>> @@ -311,14 +294,10 @@ static void __init pnv_smp_probe(void)
>>   ic_cause_ipi = smp_ops->cause_ipi;
>>   WARN_ON(!ic_cause_ipi);
>>  
>> - if (cpu_has_feature(CPU_FTR_ARCH_300)) {
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - smp_ops->cause_ipi = pnv_p9_dd1_cause_ipi;
>> - else
>> - smp_ops->cause_ipi = doorbell_global_ipi;
>> - } else {
>> + if (cpu_has_feature(CPU_FTR_ARCH_300))
>> + smp_ops->cause_ipi = doorbell_global_ipi;
>> + else
>>   smp_ops->cause_ipi = pnv_cause_ipi;
>> - }
>>   }
>>  }
>>  
>> diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
>> index a3b8d7d..82cc999 100644
>> --- a/arch/powerpc/sysdev/xive/common.c
>> +++ b/arch/powerpc/sysdev/xive/common.c
>> @@ -319,7 +319,7 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
>>   * The FW told us to call it. This happens for some
>>   * interrupt sources that need additional HW whacking
>>   * beyond the ESB manipulation. For example LPC interrupts
>> - * on P9 DD1.0 need a latch to be cleared in the LPC bridge
>> + * on P9 DD1.0 needed a latch to be cleared in the LPC bridge
>>   * itself. The Firmware will take care of it.
>>   */
>>   if (WARN_ON_ONCE(!xive_ops->eoi))
>> @@ -337,9 +337,9 @@ void xive_do_source_eoi(u32 hw_irq, struct xive_irq_data *xd)
>>   * This allows us to then do a re-trigger if Q was set
>>   * rather than synthesizing an interrupt in software
>>   *
>> - * For LSIs, using the HW EOI cycle works around a problem
>> - * on P9 DD1 PHBs where the other ESB accesses don't work
>> - * properly.
>> + * For LSIs the HW EOI cycle is used rather than PQ bits,
>> + * as they are automatically re-triggered in HW when still
>> + * pending.
>>   */
>>   if (xd->flags & XIVE_IRQ_FLAG_LSI)
>>   xive_esb_read(xd, XIVE_ESB_LOAD_EOI);
>> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
>> index 8a57ff1..c6156b6 100644
>> --- a/drivers/misc/cxl/cxl.h
>> +++ b/drivers/misc/cxl/cxl.h
>> @@ -865,14 +865,6 @@ static inline bool cxl_is_power9(void)
>>   return false;
>>  }
>>  
>> -static inline bool cxl_is_power9_dd1(void)
>> -{
>> - if ((pvr_version_is(PVR_POWER9)) &&
>> -    cpu_has_feature(CPU_FTR_POWER9_DD1))
>> - return true;
>> - return false;
>> -}
>> -
>>  ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
>>   loff_t off, size_t count);
>>  
>> diff --git a/drivers/misc/cxl/cxllib.c b/drivers/misc/cxl/cxllib.c
>> index 0bc7c31..5a3f912 100644
>> --- a/drivers/misc/cxl/cxllib.c
>> +++ b/drivers/misc/cxl/cxllib.c
>> @@ -102,10 +102,6 @@ int cxllib_get_xsl_config(struct pci_dev *dev, struct cxllib_xsl_config *cfg)
>>   rc = cxl_get_xsl9_dsnctl(dev, capp_unit_id, &cfg->dsnctl);
>>   if (rc)
>>   return rc;
>> - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
>> - /* workaround for DD1 - nbwind = capiind */
>> - cfg->dsnctl |= ((u64)0x02 << (63-47));
>> - }
>>  
>>   cfg->version  = CXL_XSL_CONFIG_CURRENT_VERSION;
>>   cfg->log_bar_size = CXL_CAPI_WINDOW_LOG_SIZE;
>> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
>> index 429d6de..2af0d4c 100644
>> --- a/drivers/misc/cxl/pci.c
>> +++ b/drivers/misc/cxl/pci.c
>> @@ -465,23 +465,21 @@ int cxl_get_xsl9_dsnctl(struct pci_dev *dev, u64 capp_unit_id, u64 *reg)
>>   /* nMMU_ID Defaults to: b’000001001’*/
>>   xsl_dsnctl |= ((u64)0x09 << (63-28));
>>  
>> - if (!(cxl_is_power9_dd1())) {
>> - /*
>> - * Used to identify CAPI packets which should be sorted into
>> - * the Non-Blocking queues by the PHB. This field should match
>> - * the PHB PBL_NBW_CMPM register
>> - * nbwind=0x03, bits [57:58], must include capi indicator.
>> - * Not supported on P9 DD1.
>> - */
>> - xsl_dsnctl |= (nbwind << (63-55));
>> + /*
>> + * Used to identify CAPI packets which should be sorted into
>> + * the Non-Blocking queues by the PHB. This field should match
>> + * the PHB PBL_NBW_CMPM register
>> + * nbwind=0x03, bits [57:58], must include capi indicator.
>> + * Not supported on P9 DD1.
>> + */
>> + xsl_dsnctl |= (nbwind << (63-55));
>>  
>> - /*
>> - * Upper 16b address bits of ASB_Notify messages sent to the
>> - * system. Need to match the PHB’s ASN Compare/Mask Register.
>> - * Not supported on P9 DD1.
>> - */
>> - xsl_dsnctl |= asnind;
>> - }
>> + /*
>> + * Upper 16b address bits of ASB_Notify messages sent to the
>> + * system. Need to match the PHB’s ASN Compare/Mask Register.
>> + * Not supported on P9 DD1.
>> + */
>> + xsl_dsnctl |= asnind;
>>  
>>   *reg = xsl_dsnctl;
>>   return 0;
>> @@ -539,15 +537,8 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter,
>>   /* Snoop machines */
>>   cxl_p1_write(adapter, CXL_PSL9_APCDEDALLOC, 0x800F000200000000ULL);
>>  
>> - if (cxl_is_power9_dd1()) {
>> - /* Disabling deadlock counter CAR */
>> - cxl_p1_write(adapter, CXL_PSL9_GP_CT, 0x0020000000000001ULL);
>> - /* Enable NORST */
>> - cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x8000000000000000ULL);
>> - } else {
>> - /* Enable NORST and DD2 features */
>> - cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
>> - }
>> + /* Enable NORST and DD2 features */
>> + cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0xC000000000000000ULL);
>>  
>>   /*
>>   * Check if PSL has data-cache. We need to flush adapter datacache
>>
>
>
>

