1. 30 Aug, 2018 8 commits
    • rcu: Define rcu_all_qs() only in !PREEMPT builds · 395a2f09
      Paul E. McKenney authored
      Now that rcu_all_qs() is used only in !PREEMPT builds, move it to
      tree_plugin.h so that it is defined only in those builds.  This in
      turn means that rcu_momentary_dyntick_idle() is only used in !PREEMPT
      builds, but it is simply marked __maybe_unused in order to keep it
      near the rest of the dyntick-idle code.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Consolidate RCU-sched update-side function definitions · a8bb74ac
      Paul E. McKenney authored
      This commit saves a few lines by consolidating the RCU-sched function
      definitions at the end of include/linux/rcupdate.h.  This consolidation
      also makes it easier to remove them all when the time comes.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Consolidate RCU-bh update-side function definitions · 4c7e9c14
      Paul E. McKenney authored
      This commit saves a few lines by consolidating the RCU-bh function
      definitions at the end of include/linux/rcupdate.h.  This consolidation
      also makes it easier to remove them all when the time comes.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Express Tiny RCU updates in terms of RCU rather than RCU-sched · 709fdce7
      Paul E. McKenney authored
      This commit renames Tiny RCU functions so that the lowest level of
      functionality is RCU (e.g., synchronize_rcu()) rather than RCU-sched
      (e.g., synchronize_sched()).  This provides greater naming compatibility
      with Tree RCU, which will in turn permit more LoC removal once
      the RCU-sched and RCU-bh update-side API is removed.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Fix Tiny call_rcu()'s EXPORT_SYMBOL() in response to a bug
        report from kbuild test robot. ]
    • rcu: Define RCU-sched API in terms of RCU for Tree RCU PREEMPT builds · 45975c7d
      Paul E. McKenney authored
      Now that RCU-preempt knows about preemption disabling, its implementation
      of synchronize_rcu() works for synchronize_sched(), and likewise for the
      other RCU-sched update-side API members.  This commit therefore confines
      the RCU-sched update-side code to CONFIG_PREEMPT=n builds, and defines
      RCU-sched's update-side API members in terms of those of RCU-preempt.
      
      This means that any given build of the Linux kernel has only one
      update-side flavor of RCU, namely RCU-preempt for CONFIG_PREEMPT=y builds
      and RCU-sched for CONFIG_PREEMPT=n builds.  This in turn means that kernels
      built with CONFIG_RCU_NOCB_CPU=y have only one rcuo kthread per CPU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Andi Kleen <ak@linux.intel.com>
    • rcu: Update comments and help text for no more RCU-bh updaters · 82fcecfa
      Paul E. McKenney authored
      This commit updates comments and help text to account for the fact that
      RCU-bh update-side functions are now simple wrappers for their RCU or
      RCU-sched counterparts.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Define RCU-bh update API in terms of RCU · 65cfe358
      Paul E. McKenney authored
      Now that the main RCU API knows about softirq disabling and softirq's
      quiescent states, the RCU-bh update code can be dispensed with.
      This commit therefore removes the RCU-bh update-side implementation and
      defines RCU-bh's update-side API in terms of that of either RCU-preempt or
      RCU-sched, depending on the setting of the CONFIG_PREEMPT Kconfig option.
      
      In kernels built with CONFIG_RCU_NOCB_CPU=y this has the knock-on effect
      of reducing by one the number of rcuo kthreads per CPU.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Apply RCU-bh QSes to RCU-sched and RCU-preempt when safe · d28139c4
      Paul E. McKenney authored
      One necessary step towards consolidating the three flavors of RCU is to
      make sure that the resulting consolidated "one flavor to rule them all"
      RCU correctly handles networking denial-of-service attacks.  One thing that
      allows RCU-bh to do so is that __do_softirq() invokes rcu_bh_qs() every
      so often, and so something similar has to happen for consolidated RCU.
      
      This must be done carefully.  For example, if a preemption-disabled
      region of code takes an interrupt which does softirq processing before
      returning, consolidated RCU must ignore the resulting rcu_bh_qs()
      invocations -- preemption is still disabled, and that means an RCU
      reader for the consolidated flavor.
      
      This commit therefore creates a new rcu_softirq_qs() that is called only
      from the ksoftirqd task, thus avoiding the interrupted-a-preempted-region
      problem.  This new rcu_softirq_qs() function invokes rcu_sched_qs(),
      rcu_preempt_qs(), and rcu_preempt_deferred_qs().  The latter call handles
      any deferred quiescent states.
      
      Note that __do_softirq() still invokes rcu_bh_qs().  It will continue to
      do so until a later stage of cleanup when the RCU-bh flavor is removed.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Fix !SMP issue located by kbuild test robot. ]
  2. 22 May, 2018 1 commit
  3. 15 May, 2018 1 commit
  4. 27 Nov, 2017 1 commit
  5. 09 Jun, 2017 4 commits
  6. 21 Apr, 2017 1 commit
    • rcu: Make non-preemptive schedule be Tasks RCU quiescent state · bcbfdd01
      Paul E. McKenney authored
      Currently, a call to schedule() acts as a Tasks RCU quiescent state
      only if a context switch actually takes place.  However, just the
      call to schedule() guarantees that the calling task has moved off of
      whatever tracing trampoline it might have been on previously.
      This commit therefore plumbs schedule()'s "preempt" parameter into
      rcu_note_context_switch(), which then records the Tasks RCU quiescent
      state, but only if this call to schedule() was -not- due to a preemption.
      
      To avoid adding overhead to the common-case context-switch path,
      this commit hides the rcu_note_context_switch() check under an existing
      non-common-case check.
      Suggested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  7. 15 Jul, 2016 1 commit
  8. 31 Mar, 2016 1 commit
  9. 08 Dec, 2015 1 commit
    • rcu: Don't redundantly disable irqs in rcu_irq_{enter,exit}() · 7c9906ca
      Paul E. McKenney authored
      This commit replaces a local_irq_save()/local_irq_restore() pair with
      a lockdep assertion that interrupts are already disabled.  This should
      remove the corresponding overhead from the interrupt entry/exit fastpaths.
      
      This change was inspired by the fact that Iftekhar Ahmed's mutation
      testing showed that removing rcu_irq_enter()'s call to local_irq_restore()
      had no effect, which might indicate that interrupts were always enabled
      anyway.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  10. 04 Dec, 2015 1 commit
    • rcu: Stop disabling interrupts in scheduler fastpaths · 46a5d164
      Paul E. McKenney authored
      We need the scheduler's fastpaths to be, well, fast, and unnecessarily
      disabling and re-enabling interrupts is not consistent with this goal,
      especially given that there are regions of the scheduler that already
      have interrupts disabled.
      
      This commit therefore moves the call to rcu_note_context_switch()
      to one of the interrupts-disabled regions of the scheduler, and
      removes the now-redundant disabling and re-enabling of interrupts from
      rcu_note_context_switch() and the functions it calls.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Shift rcu_note_context_switch() to avoid deadlock, as suggested
        by Peter Zijlstra. ]
  11. 06 Oct, 2015 1 commit
  12. 22 Jul, 2015 1 commit
  13. 27 May, 2015 2 commits
  14. 22 Apr, 2015 1 commit
    • tick: Nohz: Rework next timer evaluation · c1ad348b
      Thomas Gleixner authored
      The evaluation of the next timer in the nohz code is based on jiffies
      while all the tick internals are nanosecond based. We also have to
      convert hrtimer nanoseconds to jiffies in the !highres case. That's
      just wrong and introduces interesting corner cases.
      
      Turn it around and convert the next timer wheel expiry and the
      rcu event to clock monotonic and base all calculations on
      nanoseconds. That clearly identifies the case where no timer is
      pending with an absolute expiry value of KTIME_MAX.
      
      This makes the code more readable and gets rid of the jiffies magic
      in the nohz code.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Link: http://lkml.kernel.org/r/20150414203502.184198593@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  15. 16 Jan, 2015 1 commit
    • rcu: Make cond_resched_rcu_qs() apply to normal RCU flavors · 5cd37193
      Paul E. McKenney authored
      Although cond_resched_rcu_qs() only applies to TASKS_RCU, it is used
      in places where it would be useful for it to apply to the normal RCU
      flavors, rcu_preempt, rcu_sched, and rcu_bh.  This is especially the
      case for workloads that aggressively overload the system, particularly
      those that generate large numbers of RCU updates on systems running
      NO_HZ_FULL CPUs.  This commit therefore communicates quiescent states
      from cond_resched_rcu_qs() to the normal RCU flavors.
      
      Note that it is unfortunately necessary to leave the old ->passed_quiesce
      mechanism in place to allow quiescent states that apply to only one
      flavor to be recorded.  (Yes, we could decrement ->rcu_qs_ctr_snap in
      that case, but that is not so good for debugging of RCU internals.)
      In addition, if one of the RCU flavor's grace period has stalled, this
      will invoke rcu_momentary_dyntick_idle(), resulting in a heavy-weight
      quiescent state visible from other CPUs.
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      [ paulmck: Merge commit from Sasha Levin fixing a bug where __this_cpu()
        was used in preemptible code. ]
  16. 11 Jan, 2015 2 commits
    • rcutorture: Check from beginning to end of grace period · 917963d0
      Paul E. McKenney authored
      Currently, rcutorture's Reader Batch checks measure from the end of
      the previous grace period to the end of the current one.  This commit
      tightens up these checks by measuring from the start and end of the same
      grace period.  This involves adding rcu_batches_started() and friends
      corresponding to the existing rcu_batches_completed() and friends.
      
      We leave SRCU alone for the moment, as it does not yet have a way of
      tracking both ends of its grace periods.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    • rcu: Make _batches_completed() functions return unsigned long · 9733e4f0
      Paul E. McKenney authored
      Long ago, the various ->completed fields were of type long, but now are
      unsigned long due to signed-integer-overflow concerns.  However, the
      various _batches_completed() functions remained of type long, even though
      their only purpose in life is to return the corresponding ->completed
      field.  This patch cleans this up by changing these functions' return
      types to unsigned long.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  17. 04 Nov, 2014 2 commits
  18. 14 May, 2014 1 commit
  19. 21 Mar, 2014 1 commit
    • rcu: Provide grace-period piggybacking API · 765a3f4f
      Paul E. McKenney authored
      The following pattern is currently not well supported by RCU:
      
      1.	Make data element inaccessible to RCU readers.
      
      2.	Do work that probably lasts for more than one grace period.
      
      3.	Do something to make sure RCU readers in flight before #1 above
      	have completed.
      
      Here are some things that could currently be done:
      
      a.	Do a synchronize_rcu() unconditionally at either #1 or #3 above.
      	This works, but imposes needless work and latency.
      
      b.	Post an RCU callback at #1 above that does a wakeup, then
      	wait for the wakeup at #3.  This works well, but likely results
      	in an extra unneeded grace period.  Open-coding this is also
      	trickier than would be good.
      
      This commit therefore adds get_state_synchronize_rcu() and
      cond_synchronize_rcu() APIs.  Call get_state_synchronize_rcu() at #1
      above and pass its return value to cond_synchronize_rcu() at #3 above.
      This results in a call to synchronize_rcu() if no grace period has
      elapsed between #1 and #3, but requires only a load, comparison, and
      memory barrier if a full grace period did elapse.
      Requested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
  20. 18 Feb, 2014 1 commit
  21. 17 Feb, 2014 1 commit
  22. 12 Dec, 2013 1 commit
  23. 25 Sep, 2013 2 commits
  24. 10 Jun, 2013 2 commits
  25. 07 Jun, 2012 1 commit
    • rcu: Precompute RCU_FAST_NO_HZ timer offsets · aa9b1630
      Paul E. McKenney authored
      When a CPU is entering dyntick-idle mode, tick_nohz_stop_sched_tick()
      calls rcu_needs_cpu() to see if RCU needs that CPU, and, if not, computes the
      next wakeup time based on the timer wheels.  Only later, when actually
      entering the idle loop, rcu_prepare_for_idle() will be invoked.  In some
      cases, rcu_prepare_for_idle() will post timers to wake the CPU back up.
      But all for naught: The next wakeup time for the CPU has already been
      computed, and posting a timer afterwards does not force that wakeup
      time to be recomputed.  This means that rcu_prepare_for_idle()'s timers
      have no effect.
      
      This is not a problem on a busy system because something else will wake
      up the CPU soon enough.  However, on lightly loaded systems, the CPU
      might stay asleep for a considerable length of time.  If that CPU has
      a callback that the rest of the system is waiting on, the system might
      run very slowly or (in theory) even hang.
      
      This commit avoids this problem by having rcu_needs_cpu() give
      tick_nohz_stop_sched_tick() an estimate of when RCU will need the CPU
      to wake back up, which tick_nohz_stop_sched_tick() takes into account
      when programming the CPU's wakeup time.  An alternative approach is
      for rcu_prepare_for_idle() to use hrtimers instead of normal timers,
      but timers are much more efficient than are hrtimers for frequently
      and repeatedly posting and cancelling a given timer, which is exactly
      what RCU_FAST_NO_HZ does.
      Reported-by: Pascal Chapperon <pascal.chapperon@wanadoo.fr>
      Reported-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Tested-by: Pascal Chapperon <pascal.chapperon@wanadoo.fr>