1. 12 Oct, 2018 1 commit
  2. 10 Jul, 2018 4 commits
  3. 21 May, 2018 1 commit
  4. 16 Mar, 2018 2 commits
  5. 12 Mar, 2018 1 commit
  6. 28 Feb, 2018 1 commit
    • arm_pmu: Use disable_irq_nosync when disabling SPI in CPU teardown hook · b08e5fd9
      Will Deacon authored
      Commit 6de3f791 ("arm_pmu: explicitly enable/disable SPIs at hotplug")
      moved all of the arm_pmu IRQ enable/disable calls to the CPU hotplug hooks,
      regardless of whether they are implemented as PPIs or SPIs. This can
      lead to us sleeping from atomic context due to disable_irq blocking:
       | BUG: sleeping function called from invalid context at kernel/irq/manage.c:112
       | in_atomic(): 1, irqs_disabled(): 128, pid: 15, name: migration/1
       | no locks held by migration/1/15.
       | irq event stamp: 192
       | hardirqs last  enabled at (191): [<00000000803c2507>] _raw_spin_unlock_irq+0x2c/0x4c
       | hardirqs last disabled at (192): [<000000007f57ad28>] multi_cpu_stop+0x9c/0x140
       | softirqs last  enabled at (0): [<0000000004ee1b58>] copy_process.isra.77.part.78+0x43c/0x1504
       | softirqs last disabled at (0): [<          (null)>]           (null)
       | CPU: 1 PID: 15 Comm: migration/1 Not tainted 4.16.0-rc3-salvator-x #1651
       | Hardware name: Renesas Salvator-X board based on r8a7796 (DT)
       | Call trace:
       |  dump_backtrace+0x0/0x140
       |  show_stack+0x14/0x1c
       |  dump_stack+0xb4/0xf0
       |  ___might_sleep+0x1fc/0x218
       |  __might_sleep+0x70/0x80
       |  synchronize_irq+0x40/0xa8
       |  disable_irq+0x20/0x2c
       |  arm_perf_teardown_cpu+0x80/0xac
      Since the interrupt is always CPU-affine and this code is running with
      interrupts disabled, we can just use disable_irq_nosync as we know there
      isn't a concurrent invocation of the handler to worry about.
      Fixes: 6de3f791 ("arm_pmu: explicitly enable/disable SPIs at hotplug")
      Reported-by: Geert Uytterhoeven <[email protected]>
      Tested-by: Geert Uytterhoeven <[email protected]>
      Acked-by: Mark Rutland <[email protected]>
      Signed-off-by: Will Deacon <[email protected]>
      Signed-off-by: Catalin Marinas <[email protected]>
  7. 20 Feb, 2018 7 commits
  8. 24 Oct, 2017 1 commit
  9. 08 Aug, 2017 1 commit
  10. 27 Jul, 2017 1 commit
    • drivers/perf: arm_pmu: Request PMU SPIs with IRQF_PER_CPU · a3287c41
      Will Deacon authored
      Since the PMU register interface is banked per CPU, CPU PMU interrupts
      cannot be handled by a CPU other than the one with the PMU asserting the
      interrupt. This means that migrating PMU SPIs, as we do during a CPU
      hotplug operation, doesn't make any sense and can lead to the IRQ being
      disabled entirely if we route a spurious IRQ to the new affinity target.
      This has been observed in practice on AMD Seattle, where CPUs on the
      non-boot cluster appear to take a spurious PMU IRQ when coming online,
      which is routed to CPU0 where it cannot be handled.
      This patch passes IRQF_PERCPU for PMU SPIs and forcefully sets their
      affinity prior to requesting them, ensuring that they cannot
      be migrated during hotplug events. This interacts badly with the DB8500
      erratum workaround that ping-pongs the interrupt affinity from the handler,
      so we avoid passing IRQF_PERCPU in that case by allowing the IRQ flags
      to be overridden in the platdata.
      Fixes: 3cf7ee98 ("drivers/perf: arm_pmu: move irq request/free into probe")
      Cc: Mark Rutland <[email protected]>
      Cc: Linus Walleij <[email protected]>
      Signed-off-by: Will Deacon <[email protected]>
  11. 11 Apr, 2017 11 commits
  12. 31 Mar, 2017 3 commits
    • drivers/perf: arm_pmu: split irq request from enable · c09adab0
      Mark Rutland authored
      For historical reasons, we lazily request and free interrupts in the
      arm pmu driver. This requires us to refcount use of the pmu (by way of
      counting the active events) in order to request/free interrupts at the
      correct times, which complicates the driver somewhat.
      The existing logic is flawed, as it only considers currently online CPUs
      when requesting, freeing, or managing the affinity of interrupts.
      Intervening hotplug events can result in erroneous IRQ affinity, online
      CPUs for which interrupts have not been requested, or offline CPUs whose
      interrupts are still requested.
      To fix this, this patch splits the requesting of interrupts from any
      per-cpu management (i.e. per-cpu enable/disable, and configuration of
      cpu affinity). We now request all interrupts up-front at probe time (and
      never free them, since we never unregister PMUs).
      The management of affinity, and per-cpu enable/disable now happens in
      our cpu hotplug callback, ensuring it occurs consistently. This means
      that we must now invoke the CPU hotplug callback at boot time in order
      to configure IRQs, and since the callback also resets the PMU hardware,
      we can remove the duplicate reset in the probe path.
      This rework renders our event refcounting unnecessary, and as such it is removed.
      Signed-off-by: Mark Rutland <[email protected]>
      [will: make armpmu_get_cpu_irq static]
      Signed-off-by: Will Deacon <[email protected]>
    • drivers/perf: arm_pmu: manage interrupts per-cpu · 7ed98e01
      Mark Rutland authored
      When requesting or freeing interrupts, we use platform_get_irq() to find
      relevant irqs, backing this up with additional information in an
      optional irq_affinity table.
      This means that our irq request and free paths are tied to a
      platform_device, and our request path must jump through a number of
      hoops in order to determine the required affinity of each interrupt.
      Given that the affinity must be static, we can compute the affinity once
      up-front at probe time, simplifying the irq request and free paths. By
      recording interrupts in a per-cpu data structure, we simplify a few
      paths, and permit a subsequent rework of the request and free paths.
      Signed-off-by: Mark Rutland <[email protected]>
      [will: rename local nr_irqs variable to avoid conflict with global]
      Signed-off-by: Will Deacon <[email protected]>
    • drivers/perf: arm_pmu: rework per-cpu allocation · 2681f018
      Mark Rutland authored
      For historical reasons, we allocate per-cpu data associated with a PMU
      rather late, in cpu_pmu_init, after we've parsed whatever hardware
      information we were provided with.
      In order to allow us to store some per-cpu data early in the probe
      path, we need to allocate (and initialise) the per-cpu data earlier.
      This patch reworks the way we allocate the pmu and associated per-cpu
      data in order to make that possible.
      Signed-off-by: Mark Rutland <[email protected]>
      [will: make armpmu_{alloc,free} static]
      Signed-off-by: Will Deacon <[email protected]>
  13. 02 Mar, 2017 1 commit
  14. 25 Dec, 2016 1 commit
  15. 16 Sep, 2016 1 commit
  16. 09 Sep, 2016 3 commits