1. 08 Jun, 2018 4 commits
  2. 30 May, 2018 2 commits
  3. 29 May, 2018 1 commit
  4. 25 May, 2018 2 commits
  5. 21 May, 2018 1 commit
    • nvme-pci: fix race between poll and IRQ completions · 68fa9dbe
      Jens Axboe authored
      If polling completions are racing with the IRQ triggered by a
      completion, the IRQ handler will find no work and return IRQ_NONE.
      This can trigger complaints about spurious interrupts:
      
      [  560.169153] irq 630: nobody cared (try booting with the "irqpoll" option)
      [  560.175988] CPU: 40 PID: 0 Comm: swapper/40 Not tainted 4.17.0-rc2+ #65
      [  560.175990] Hardware name: Intel Corporation S2600STB/S2600STB, BIOS SE5C620.86B.00.01.0010.010920180151 01/09/2018
      [  560.175991] Call Trace:
      [  560.175994]  <IRQ>
      [  560.176005]  dump_stack+0x5c/0x7b
      [  560.176010]  __report_bad_irq+0x30/0xc0
      [  560.176013]  note_interrupt+0x235/0x280
      [  560.176020]  handle_irq_event_percpu+0x51/0x70
      [  560.176023]  handle_irq_event+0x27/0x50
      [  560.176026]  handle_edge_irq+0x6d/0x180
      [  560.176031]  handle_irq+0xa5/0x110
      [  560.176036]  do_IRQ+0x41/0xc0
      [  560.176042]  common_interrupt+0xf/0xf
      [  560.176043]  </IRQ>
      [  560.176050] RIP: 0010:cpuidle_enter_state+0x9b/0x2b0
      [  560.176052] RSP: 0018:ffffa0ed4659fe98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdd
      [  560.176055] RAX: ffff9527beb20a80 RBX: 000000826caee491 RCX: 000000000000001f
      [  560.176056] RDX: 000000826caee491 RSI: 00000000335206ee RDI: 0000000000000000
      [  560.176057] RBP: 0000000000000001 R08: 00000000ffffffff R09: 0000000000000008
      [  560.176059] R10: ffffa0ed4659fe78 R11: 0000000000000001 R12: ffff9527beb29358
      [  560.176060] R13: ffffffffa235d4b8 R14: 0000000000000000 R15: 000000826caed593
      [  560.176065]  ? cpuidle_enter_state+0x8b/0x2b0
      [  560.176071]  do_idle+0x1f4/0x260
      [  560.176075]  cpu_startup_entry+0x6f/0x80
      [  560.176080]  start_secondary+0x184/0x1d0
      [  560.176085]  secondary_startup_64+0xa5/0xb0
      [  560.176088] handlers:
      [  560.178387] [<00000000efb612be>] nvme_irq [nvme]
      [  560.183019] Disabling IRQ #630
      
      A previous commit removed ->cqe_seen, which handled this case, but
      completions now run outside the queue lock, so it has to be handled
      a bit differently: return IRQ_HANDLED from the IRQ handler if the
      completion queue head has moved since we last saw it.
      
      Fixes: 5cb525c8 ("nvme-pci: handle completions outside of the queue lock")
      Reported-by: Keith Busch <[email protected]>
      Reviewed-by: Keith Busch <[email protected]>
      Tested-by: Keith Busch <[email protected]>
      Signed-off-by: Jens Axboe <[email protected]>
  6. 18 May, 2018 6 commits
  7. 11 May, 2018 2 commits
  8. 07 May, 2018 1 commit
  9. 02 May, 2018 1 commit
  10. 26 Apr, 2018 2 commits
  11. 12 Apr, 2018 3 commits
  12. 28 Mar, 2018 1 commit
  13. 26 Mar, 2018 3 commits
  14. 01 Mar, 2018 2 commits
    • nvme: pci: pass max vectors as num_possible_cpus() to pci_alloc_irq_vectors · 16ccfff2
      Ming Lei authored
      Commit 84676c1f ("genirq/affinity: assign vectors to all possible CPUs")
      switched to spreading irq vectors among all possible CPUs, so pass
      num_possible_cpus() as the maximum number of vectors to be assigned.
      
      For example, in an 8-core system with CPUs 0-3 online and 4-7
      offline/not present, 'lscpu' shows:
      
              [[email protected]]$lscpu
              Architecture:          x86_64
              CPU op-mode(s):        32-bit, 64-bit
              Byte Order:            Little Endian
              CPU(s):                4
              On-line CPU(s) list:   0-3
              Thread(s) per core:    1
              Core(s) per socket:    2
              Socket(s):             2
              NUMA node(s):          2
              ...
              NUMA node0 CPU(s):     0-3
              NUMA node1 CPU(s):
              ...
      
      1) Before this patch, the allocated vectors and their affinities:
      	irq 47, cpu list 0,4
      	irq 48, cpu list 1,6
      	irq 49, cpu list 2,5
      	irq 50, cpu list 3,7
      
      2) After this patch, the allocated vectors and their affinities:
      	irq 43, cpu list 0
      	irq 44, cpu list 1
      	irq 45, cpu list 2
      	irq 46, cpu list 3
      	irq 47, cpu list 4
      	irq 48, cpu list 6
      	irq 49, cpu list 5
      	irq 50, cpu list 7
      
      Cc: Keith Busch <[email protected]>
      Cc: Sagi Grimberg <[email protected]>
      Cc: Thomas Gleixner <[email protected]>
      Cc: Christoph Hellwig <[email protected]>
      Signed-off-by: Ming Lei <[email protected]>
      Reviewed-by: Christoph Hellwig <[email protected]>
      Signed-off-by: Keith Busch <[email protected]>
    • nvme-pci: Fix EEH failure on ppc · 651438bb
      Wen Xiong authored
      Triggering PPC EEH detection and handling requires a memory mapped read
      failure. The NVMe driver removed the periodic health check MMIO, so
      there's no early detection mechanism to trigger the recovery. Instead,
      the detection now happens when the nvme driver handles an IO timeout
      event. This takes the pci channel offline, so we do not want the driver
      to proceed with escalating its own recovery efforts that may conflict
      with the EEH handler.
      
      This patch ensures the driver observes that the channel was set
      offline after a failed MMIO read, and resets the IO timer so the
      EEH handler has a chance to recover the device.
      Signed-off-by: Wen Xiong <[email protected]>
      [updated change log]
      Signed-off-by: Keith Busch <[email protected]>
  15. 26 Feb, 2018 1 commit
  16. 14 Feb, 2018 2 commits
  17. 08 Feb, 2018 1 commit
  18. 26 Jan, 2018 1 commit
  19. 25 Jan, 2018 1 commit
  20. 23 Jan, 2018 1 commit
  21. 17 Jan, 2018 2 commits