1. 09 Oct, 2018 1 commit
    • locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y · 8ca2b56c
      Waiman Long authored
      A sizable portion of the CPU cycles spent in __lock_acquire() is used
      up by the atomic increment of the class->ops stat counter. By taking it out
      from the lock_class structure and changing it to a per-cpu per-lock-class
      counter, we can reduce the amount of cacheline contention on the class
      structure when multiple CPUs are trying to acquire locks of the same
      class simultaneously.
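
      As a rough illustration of the technique (a sketch only; the names, array
      size and layout below are made up and are not the actual lockdep data
      structures), a shared atomic counter is replaced by per-cpu slots that are
      only summed up when the statistics are read:

        #include <linux/percpu.h>
        #include <linux/cpumask.h>

        /* Illustrative only: names and sizes are not the real lockdep ones. */
        #define EXAMPLE_MAX_CLASSES     8192

        struct class_ops_stats {
                unsigned long counts[EXAMPLE_MAX_CLASSES];
        };
        static DEFINE_PER_CPU(struct class_ops_stats, class_ops_stats);

        /* Hot path: purely local increment, no cross-CPU cacheline bouncing. */
        static inline void count_class_op(unsigned int class_idx)
        {
                this_cpu_inc(class_ops_stats.counts[class_idx]);
        }

        /* Slow path (e.g. /proc output): fold the per-cpu slots into one total. */
        static unsigned long read_class_ops(unsigned int class_idx)
        {
                unsigned long sum = 0;
                int cpu;

                for_each_possible_cpu(cpu)
                        sum += per_cpu(class_ops_stats, cpu).counts[class_idx];
                return sum;
        }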
      
      To limit the increase in memory consumption because of the percpu nature
      of that counter, it is now put back under the CONFIG_DEBUG_LOCKDEP
      config option. So the memory consumption increase will only occur if
      CONFIG_DEBUG_LOCKDEP is defined. The lock_class structure, however,
      is reduced in size by 16 bytes on 64-bit archs after ops removal and
      a minor restructuring of the fields.
      
      This patch also fixes a bug in the increment code as the counter is of
      the 'unsigned long' type, but atomic_inc() was used to increment it.
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/d66681f3-8781-9793-1dcf-2436a284550b@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8ca2b56c
  2. 10 Aug, 2018 1 commit
  3. 31 Jul, 2018 1 commit
    • tracing: Centralize preemptirq tracepoints and unify their usage · c3bc8fd6
      Joel Fernandes (Google) authored
      This patch detaches the preemptirq tracepoints from the tracers and
      keeps them separate.
      
      Advantages:
      * Lockdep and irqsoff event can now run in parallel since they no longer
      have their own calls.
      
      * This unifies the use case of adding hooks to an irqsoff and irqson
      event, and a preemptoff and preempton event.
        3 users of the events exist:
        - Lockdep
        - irqsoff and preemptoff tracers
        - irqs and preempt trace events
      
      The unification cleans up several ifdefs and makes the code in the preempt
      and irqsoff tracers simpler. It gets rid of all the horrific
      ifdeferry around PROVE_LOCKING and makes configuration of the different
      users of the tracepoints easier and more understandable. It also gets rid
      of the time_* function calls from the lockdep hooks used to call into
      the preemptirq tracer, which are not needed anymore. The negative delta in
      lines of code in this patch is quite large too.
      
      This patch introduces a new CONFIG option, PREEMPTIRQ_TRACEPOINTS,
      as a single point for registering probes onto the tracepoints. With
      this, the web of config options for preempt/irq toggle tracepoints and
      their users becomes:
      
       PREEMPT_TRACER   PREEMPTIRQ_EVENTS  IRQSOFF_TRACER PROVE_LOCKING
             |                 |     \         |           |
             \    (selects)    /      \        \ (selects) /
            TRACE_PREEMPT_TOGGLE       ----> TRACE_IRQFLAGS
                            \                  /
                             \ (depends on)   /
                           PREEMPTIRQ_TRACEPOINTS
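
      As a rough sketch of what "registering probes onto the tracepoints" looks
      like for a user such as lockdep or the irqsoff tracer (the (ip, parent_ip)
      probe prototype is assumed here to match the preemptirq tracepoints; the
      probe body and names are made up):

        #include <linux/tracepoint.h>
        #include <trace/events/preemptirq.h>

        /* Probe prototype assumed: caller ip + parent ip, as for preemptirq events. */
        static void my_irq_disable_probe(void *data, unsigned long ip,
                                         unsigned long parent_ip)
        {
                /* react to interrupts being disabled at 'ip' */
        }

        static int __init my_probes_init(void)
        {
                return register_trace_irq_disable(my_irq_disable_probe, NULL);
        }

        static void my_probes_exit(void)
        {
                unregister_trace_irq_disable(my_irq_disable_probe, NULL);
        }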
      
      Other than the performance tests mentioned in the previous patch, I also
      ran the locking API test suite and verified that all test cases pass.
      
      I also injected issues by not registering the lockdep probes onto the
      tracepoints, and saw the expected failures, confirming that the probes
      are indeed doing the work.
      
      This series + lockdep probes not registered (just to inject errors):
      [    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/12:FAILED|FAILED|  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/21:FAILED|FAILED|  ok  |
      [    0.000000]          hard-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
      [    0.000000]          soft-safe-A + irqs-on/12:FAILED|FAILED|  ok  |
      [    0.000000]          hard-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
      [    0.000000]          soft-safe-A + irqs-on/21:FAILED|FAILED|  ok  |
      [    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      [    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      
      With this series + lockdep probes registered, all locking tests pass:
      
      [    0.000000]      hard-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]      soft-irqs-on + irq-safe-A/21:  ok  |  ok  |  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/12:  ok  |  ok  |  ok  |
      [    0.000000]        sirq-safe-A => hirqs-on/21:  ok  |  ok  |  ok  |
      [    0.000000]          hard-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
      [    0.000000]          soft-safe-A + irqs-on/12:  ok  |  ok  |  ok  |
      [    0.000000]          hard-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
      [    0.000000]          soft-safe-A + irqs-on/21:  ok  |  ok  |  ok  |
      [    0.000000]     hard-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      [    0.000000]     soft-safe-A + unsafe-B #1/123:  ok  |  ok  |  ok  |
      
      Link: http://lkml.kernel.org/r/20180730222423.196630-4-joel@joelfernandes.org
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      c3bc8fd6
  4. 18 Jan, 2018 1 commit
  5. 17 Jan, 2018 1 commit
  6. 08 Jan, 2018 1 commit
    • locking/lockdep: Remove cross-release leftovers · 527187d2
      Ingo Molnar authored
      There are two cross-release leftover facilities:
      
       - the crossrelease_hist_*() irq-tracing callbacks (NOPs currently)
       - the complete_release_commit() callback (NOP as well)
      
      Remove them.
      
      Cc: David Sterba <dsterba@suse.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      527187d2
  7. 12 Dec, 2017 1 commit
    • locking/lockdep: Remove the cross-release locking checks · e966eaee
      Ingo Molnar authored
      This code (CONFIG_LOCKDEP_CROSSRELEASE=y and CONFIG_LOCKDEP_COMPLETIONS=y),
      while it found a number of old bugs initially, was also causing too many
      false positives that caused people to disable lockdep - which is arguably
      a worse overall outcome.
      
      If we disable cross-release by default but keep the code upstream then
      in practice the most likely outcome is that we'll allow the situation
      to degrade gradually, by allowing entropy to introduce more and more
      false positives, until it overwhelms maintenance capacity.
      
      Another bad side effect was that people were trying to work around
      the false positives by uglifying/complicating unrelated code. There's
      a marked difference between annotating locking operations and
      uglifying good code just due to bad lock debugging code ...
      
      This gradual decrease in quality happened to a number of debugging
      facilities in the kernel, and lockdep is pretty complex already,
      so we cannot risk this outcome.
      
      Either cross-release checking can be done right with no false positives,
      or it should not be included in the upstream kernel.
      
      ( Note that it might make sense to maintain it out of tree and go through
        the false positives every now and then and see whether new bugs were
        introduced. )
      
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e966eaee
  8. 08 Nov, 2017 1 commit
  9. 02 Nov, 2017 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman authored
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to be applied to
      a file was done in a spreadsheet of side-by-side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis, with 60,537 files
      assessed.  Kate Stewart did a file-by-file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      should be applied to each file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      The criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based in part on an older version of FOSSology, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifiers in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
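
      For reference, the identifier added by this series is a single
      machine-readable comment on the first line of each file; .c sources and
      headers use different comment styles, as noted above:

        // SPDX-License-Identifier: GPL-2.0
        /* ^ form used on the first line of a .c source file */

        /* SPDX-License-Identifier: GPL-2.0 */
        /* ^ form used on the first line of a .h header */

        /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
        /* ^ form used for exported uapi headers */
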
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b2441318
  10. 25 Oct, 2017 1 commit
  11. 29 Aug, 2017 1 commit
  12. 25 Aug, 2017 1 commit
    • locking/lockdep: Fix workqueue crossrelease annotation · e6f3faa7
      Peter Zijlstra authored
      The new completion/crossrelease annotations interact unfavourably with
      the extant flush_work()/flush_workqueue() annotations.
      
      The problem is that when a single work class does:
      
        wait_for_completion(&C)
      
      and
      
        complete(&C)
      
      in different executions, we'll build dependencies like:
      
        lock_map_acquire(W)
        complete_acquire(C)
      
      and
      
        lock_map_acquire(W)
        complete_release(C)
      
      which results in the dependency chain W->C->W, which lockdep thinks
      spells deadlock, even though there is no deadlock potential since
      works are run concurrently.
      
      One possibility would be to change the work 'lock' to recursive-read,
      but that would mean hitting a lockdep limitation on recursive locks.
      Also, unconditionally switching to recursive-read here would fail to
      detect the actual deadlock on single-threaded workqueues, which do
      have a problem with this.
      
      For now, forcefully disregard these locks for crossrelease.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: boqun.feng@gmail.com
      Cc: byungchul.park@lge.com
      Cc: david@fromorbit.com
      Cc: johannes@sipsolutions.net
      Cc: oleg@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e6f3faa7
  13. 17 Aug, 2017 1 commit
    • locking/lockdep: Explicitly initialize wq_barrier::done::map · 52fa5bc5
      Boqun Feng authored
      With the new lockdep crossrelease feature, which checks completions usage,
      a false positive is reported in the workqueue code:
      
      > Worker A : acquired of wfc.work -> wait for cpu_hotplug_lock to be released
      > Task   B : acquired of cpu_hotplug_lock -> wait for lock#3 to be released
      > Task   C : acquired of lock#3 -> wait for completion of barr->done
      > (Task C is in lru_add_drain_all_cpuslocked())
      > Worker D : wait for wfc.work to be released -> will complete barr->done
      
      Such a deadlock cannot happen because Task C's barr->done and Worker D's
      barr->done cannot be the same instance.
      
      The reason for this false positive is that we initialize all wq_barrier::done
      instances at insert_wq_barrier() via init_completion(), which makes them belong
      to the same lock class; therefore, impossible circles are reported.
      
      To fix this, explicitly initialize the lockdep map for wq_barrier::done
      in insert_wq_barrier(), so that the lock class key of wq_barrier::done
      is a subkey of the corresponding work_struct. As a result we won't build
      a dependency between a wq_barrier and an unrelated work, and we can
      distinguish wq barriers based on their related works, so the false positive
      above is avoided.
      
      Also define an empty lockdep_init_map_crosslock() for !CROSSRELEASE
      to keep the code simple and free of unnecessary #ifdefs.
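
      A rough sketch of the idea, not the exact upstream hunk (the signature of
      lockdep_init_map_crosslock() is assumed to mirror lockdep_init_map(), and
      the completion's 'map' member only exists in the crossrelease
      configuration; wq_barrier is workqueue-internal):

        static void insert_wq_barrier_sketch(struct wq_barrier *barr,
                                             struct work_struct *target)
        {
                init_completion(&barr->done);

                /* Key the barrier completion off the work it flushes, so two
                 * unrelated wq_barriers never share a lock class. */
                lockdep_init_map_crosslock((struct lockdep_map *)&barr->done.map,
                                           "(complete)wq_barr::done",
                                           target->lockdep_map.key, 1);

                /* ... queue barr->work behind 'target' as before ... */
        }
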
      Reported-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Lai Jiangshan <jiangshanlai@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170817094622.12915-1-boqun.feng@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      52fa5bc5
  14. 10 Aug, 2017 4 commits
    • locking/lockdep: Handle non(or multi)-acquisition of a crosslock · 28a903f6
      Byungchul Park authored
      No acquisition might be in progress when a crosslock is committed. Completion
      operations, for which crossrelease is enabled, are one such case, for example:
      
         CONTEXT X                         CONTEXT Y
         ---------                         ---------
         trigger completion context
                                           complete AX
                                              commit AX
         wait_for_complete AX
            acquire AX
            wait
      
         where AX is a crosslock.
      
      When no acquisition is in progress, we should not perform the commit because
      the lock does not exist, and doing so might cause an incorrect memory access.
      So we have to track the number of acquisitions of a crosslock to handle it.
      
      Moreover, in the case where more than one acquisition of a crosslock
      overlap, like:
      
         CONTEXT W        CONTEXT X        CONTEXT Y        CONTEXT Z
         ---------        ---------        ---------        ---------
         acquire AX (gen_id: 1)
                                           acquire A
                          acquire AX (gen_id: 10)
                                           acquire B
                                           commit AX
                                                            acquire C
                                                            commit AX
      
         where A, B and C are typical locks and AX is a crosslock.
      
      Current crossrelease code performs commits in Y and Z with gen_id = 10.
      However, we can use gen_id = 1 to do it, since not only 'acquire AX in X'
      but 'acquire AX in W' also depends on each acquisition in Y and Z until
      their commits. So make it use gen_id = 1 instead of 10 on their commits,
      which adds an additional dependency 'AX -> A' in the example above.
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: akpm@linux-foundation.org
      Cc: boqun.feng@gmail.com
      Cc: kernel-team@lge.com
      Cc: kirill@shutemov.name
      Cc: npiggin@gmail.com
      Cc: walken@google.com
      Cc: willy@infradead.org
      Link: http://lkml.kernel.org/r/1502089981-21272-8-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      28a903f6
    • locking/lockdep: Detect and handle hist_lock ring buffer overwrite · 23f873d8
      Byungchul Park authored
      The ring buffer can be overwritten by hardirq/softirq/work contexts.
      Those cases must be considered on rollback or commit. For example,
      
                |<------ hist_lock ring buffer size ----->|
                ppppppppppppiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii
      wrapped > iiiiiiiiiiiiiiiiiiiiiii....................
      
                where 'p' represents an acquisition in process context,
                'i' represents an acquisition in irq context.
      
      On irq exit, crossrelease tries to roll back idx to its original position,
      but it should not, because the entry has already been invalidated by the
      overwriting 'i'. Avoid rollback or commit for overwritten entries.
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: akpm@linux-foundation.org
      Cc: boqun.feng@gmail.com
      Cc: kernel-team@lge.com
      Cc: kirill@shutemov.name
      Cc: npiggin@gmail.com
      Cc: walken@google.com
      Cc: willy@infradead.org
      Link: http://lkml.kernel.org/r/1502089981-21272-7-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      23f873d8
    • locking/lockdep: Implement the 'crossrelease' feature · b09be676
      Byungchul Park authored
      Lockdep is a runtime locking correctness validator that detects and
      reports a deadlock or its possibility by checking dependencies between
      locks. It's useful since it does not report just an actual deadlock but
      also the possibility of a deadlock that has not actually happened yet.
      That enables problems to be fixed before they affect real systems.
      
      However, this facility is only applicable to typical locks, such as
      spinlocks and mutexes, which are normally released within the context in
      which they were acquired. Synchronization primitives like page
      locks or completions, which are allowed to be released in any context,
      also create dependencies and can cause a deadlock.
      
      So lockdep should track these locks to do a better job. The 'crossrelease'
      implementation makes these primitives be tracked as well.
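
      A minimal example of the kind of dependency this is about (the names below
      are made up; the point is that the completion is "released", i.e. completed,
      in a different context than the one that waits for it):

        #include <linux/mutex.h>
        #include <linux/completion.h>

        static DEFINE_MUTEX(m);
        static DECLARE_COMPLETION(done);

        /* Context A: waits for 'done' while holding 'm'. */
        static void context_a(void)
        {
                mutex_lock(&m);
                wait_for_completion(&done);     /* never returns, see below */
                mutex_unlock(&m);
        }

        /* Context B: must take 'm' before it ever reaches complete(). */
        static void context_b(void)
        {
                mutex_lock(&m);                 /* blocks: A holds 'm' and waits on B */
                complete(&done);
                mutex_unlock(&m);
        }
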
      Signed-off-by: Byungchul Park <byungchul.park@lge.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: akpm@linux-foundation.org
      Cc: boqun.feng@gmail.com
      Cc: kernel-team@lge.com
      Cc: kirill@shutemov.name
      Cc: npiggin@gmail.com
      Cc: walken@google.com
      Cc: willy@infradead.org
      Link: http://lkml.kernel.org/r/1502089981-21272-6-git-send-email-byungchul.park@lge.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b09be676
    • locking/lockdep: Rework FS_RECLAIM annotation · d92a8cfc
      Peter Zijlstra authored
      A while ago someone, and I cannot find the email just now, asked if we
      could not implement the RECLAIM_FS inversion stuff with a 'fake' lock
      like we use for other things such as workqueues. I think this should
      be possible, which allows reducing the 'irq' states and will reduce the
      amount of __bfs() lookups we do.
      
      Removing the 1 IRQ state results in 4 fewer __bfs() walks per
      dependency, improving lockdep performance. And by moving this
      annotation out of the lockdep code it becomes easier for the mm people
      to extend.
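
      A rough sketch of the 'fake lock' approach (the names here are approximate,
      not necessarily the exact upstream helpers): one global lockdep map stands
      in for "we are inside the reclaim path", and allocation sites that may
      recurse into reclaim simply acquire and release it, so lockdep learns the
      dependency without needing an extra irq-like state:

        #include <linux/lockdep.h>

        static struct lockdep_map __fs_reclaim_map =
                STATIC_LOCKDEP_MAP_INIT("fs_reclaim", &__fs_reclaim_map);

        static void fs_reclaim_acquire_sketch(void)
        {
                lock_map_acquire(&__fs_reclaim_map);    /* entering reclaim */
        }

        static void fs_reclaim_release_sketch(void)
        {
                lock_map_release(&__fs_reclaim_map);    /* leaving reclaim */
        }
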
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Byungchul Park <byungchul.park@lge.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Nikolay Borisov <nborisov@suse.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: akpm@linux-foundation.org
      Cc: boqun.feng@gmail.com
      Cc: iamjoonsoo.kim@lge.com
      Cc: kernel-team@lge.com
      Cc: kirill@shutemov.name
      Cc: npiggin@gmail.com
      Cc: walken@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d92a8cfc
  15. 16 Mar, 2017 1 commit
  16. 30 Nov, 2016 1 commit
  17. 24 Sep, 2016 1 commit
  18. 05 May, 2016 1 commit
    • locking/lockdep, sched/core: Implement a better lock pinning scheme · e7904a28
      Peter Zijlstra authored
      The problem with the existing lock pinning is that each pin is of
      value 1; this means you can simply unpin if you know it's pinned,
      without having any extra information.
      
      This scheme generates a random (16 bit) cookie for each pin and
      requires this same cookie to unpin. This means you have to keep the
      cookie in context.
      
      No objsize difference for !LOCKDEP kernels.
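
      A usage sketch of the resulting flow (struct rq and the pin_cookie type
      name are assumptions here, not quoted from the patch):

        static void pinned_section(struct rq *rq)
        {
                struct pin_cookie cookie;

                raw_spin_lock(&rq->lock);
                cookie = lockdep_pin_lock(&rq->lock);   /* random 16-bit cookie */

                /* ... code that must not drop rq->lock, even temporarily ... */

                lockdep_unpin_lock(&rq->lock, cookie);  /* WARNs on cookie mismatch */
                raw_spin_unlock(&rq->lock);
        }
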
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e7904a28
  19. 23 Apr, 2016 1 commit
    • lockdep: Fix lock_chain::base size · 75dd602a
      Peter Zijlstra authored
      lock_chain::base is used to store an index into the chain_hlocks[]
      array; however, that array contains more elements than can be indexed
      using a u16.
      
      Change the lock_chain structure to use a bitfield to encode the data
      it needs and add BUILD_BUG_ON() assertions to check the fields are
      wide enough.
      
      Also, for DEBUG_LOCKDEP, assert that we don't run out of elements of
      that array, as that would wreck the collision detection.
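
      A sketch of what such an encoding looks like (the exact field widths and
      the stand-in constants below are assumptions; the point is that 'base'
      gets more than 16 bits and the widths are checked at build time):

        #include <linux/types.h>
        #include <linux/list.h>
        #include <linux/bug.h>

        struct lock_chain {
                unsigned int      irq_context :  2,
                                  depth       :  6,
                                  base        : 24;
                struct hlist_node entry;
                u64               chain_key;
        };

        #define EXAMPLE_CHAIN_HLOCKS    (5 * 1024 * 3)  /* stand-in for the array size */
        #define EXAMPLE_MAX_DEPTH       48              /* stand-in for MAX_LOCK_DEPTH */

        static void check_chain_field_widths(void)
        {
                /* Fail the build if the fields can no longer index the arrays. */
                BUILD_BUG_ON((1UL << 24) <= EXAMPLE_CHAIN_HLOCKS);
                BUILD_BUG_ON((1UL << 6)  <= EXAMPLE_MAX_DEPTH);
        }
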
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alfredo Alvarez Fernandez <alfredoalvarezfernandez@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sedat Dilek <sedat.dilek@gmail.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160330093659.GS3408@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      75dd602a
  20. 22 Apr, 2016 1 commit
    • locking/rwsem: Provide down_write_killable() · 916633a4
      Michal Hocko authored
      Now that all the architectures implement the necessary glue code
      we can introduce down_write_killable(). The only difference wrt. regular
      down_write() is that the slow path waits in TASK_KILLABLE state and the
      interruption by the fatal signal is reported as -EINTR to the caller.
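
      A typical caller pattern, following the convention described above (0 on
      success, -EINTR when the wait is interrupted by a fatal signal; the names
      below are made up):

        #include <linux/rwsem.h>
        #include <linux/errno.h>

        static DECLARE_RWSEM(my_sem);

        static int do_write_side_work(void)
        {
                /* Sleeps in TASK_KILLABLE; a fatal signal aborts the wait. */
                if (down_write_killable(&my_sem))
                        return -EINTR;

                /* ... critical section ... */

                up_write(&my_sem);
                return 0;
        }
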
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Davidlohr Bueso <dbueso@suse.de>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-arch@vger.kernel.org
      Cc: linux-ia64@vger.kernel.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: linux-xtensa@linux-xtensa.org
      Cc: sparclinux@vger.kernel.org
      Link: http://lkml.kernel.org/r/1460041951-22347-12-git-send-email-mhocko@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      916633a4
  21. 12 Feb, 2016 1 commit
    • kernel/locking/lockdep.c: convert hash tables to hlists · 4a389810
      Andrew Morton authored
      Mike said:
      
      : CONFIG_UBSAN_ALIGNMENT breaks x86-64 kernel with lockdep enabled, i.e.
      : kernel with CONFIG_UBSAN_ALIGNMENT fails to load without even any error
      : message.
      :
      : The problem is that ubsan callbacks use spinlocks and might be called
      : before lockdep is initialized.  Particularly this line in the
      : reserve_ebda_region function causes problem:
      :
      : lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
      :
      : If I put lockdep_init() before reserve_ebda_region call in
      : x86_64_start_reservations kernel loads well.
      
      Fix this ordering issue permanently: change lockdep so that it uses
      hlists for the hash tables.  Unlike a list_head, an hlist_head is in its
      initialized state when it is all-zeroes, so lockdep is ready for
      operation immediately upon boot - lockdep_init() need not have run.
      
      The patch will also save some memory.
      
      lockdep_init() and lockdep_initialized can be done away with now - a 4.6
      patch has been prepared to do this.
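
      The reason this works from the very first instruction, as a small sketch
      (names and size are illustrative): an all-zero hlist_head is already a
      valid empty list, whereas a list_head must first be initialized to point
      at itself.

        #include <linux/list.h>

        #define EXAMPLE_HASH_SIZE 4096

        /* .first == NULL for every bucket at boot: already a valid empty list. */
        static struct hlist_head example_hash[EXAMPLE_HASH_SIZE];

        struct example_entry {
                struct hlist_node hash_entry;
                unsigned long     key;
        };

        static void example_insert(struct example_entry *e)
        {
                hlist_add_head(&e->hash_entry,
                               &example_hash[e->key % EXAMPLE_HASH_SIZE]);
        }
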
      Reported-by: Mike Krinkin <krinkin.m.u@gmail.com>
      Suggested-by: Mike Krinkin <krinkin.m.u@gmail.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4a389810
  22. 09 Feb, 2016 2 commits
    • locking/lockdep: Eliminate lockdep_init() · 06bea3db
      Andrey Ryabinin authored
      Lockdep is initialized at compile time now.  Get rid of lockdep_init().
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Krinkin <krinkin.m.u@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Cc: mm-commits@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      06bea3db
    • locking/lockdep: Convert hash tables to hlists · a63f38cc
      Andrew Morton authored
      Mike said:
      
      : CONFIG_UBSAN_ALIGNMENT breaks x86-64 kernel with lockdep enabled, i.e.
      : kernel with CONFIG_UBSAN_ALIGNMENT=y fails to load without even any error
      : message.
      :
      : The problem is that ubsan callbacks use spinlocks and might be called
      : before lockdep is initialized.  Particularly this line in the
      : reserve_ebda_region function causes problem:
      :
      : lowmem = *(unsigned short *)__va(BIOS_LOWMEM_KILOBYTES);
      :
      : If I put lockdep_init() before reserve_ebda_region call in
      : x86_64_start_reservations kernel loads well.
      
      Fix this ordering issue permanently: change lockdep so that it uses hlists
      for the hash tables.  Unlike a list_head, an hlist_head is in its
      initialized state when it is all-zeroes, so lockdep is ready for operation
      immediately upon boot - lockdep_init() need not have run.
      
      The patch will also save some memory.
      
      Probably lockdep_init() and lockdep_initialized can be done away with now.
      Suggested-by: Mike Krinkin <krinkin.m.u@gmail.com>
      Reported-by: Mike Krinkin <krinkin.m.u@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Cc: mm-commits@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a63f38cc
  23. 23 Nov, 2015 1 commit
    • treewide: Remove old email address · 90eec103
      Peter Zijlstra authored
      There were still a number of references to my old Red Hat email
      address in the kernel source. Remove these while keeping the
      Red Hat copyright notices intact.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      90eec103
  24. 19 Jun, 2015 1 commit
    • locking/lockdep: Remove hard coded array size dependency · 68722101
      George Beshers authored
      An apparent oversight left a hardcoded '4' in place when
      LOCKSTAT_POINTS was introduced.
      
      The contention_point[] and contending_point[] arrays in the
      structs lock_class and lock_class_stats need to be the same
      size for the loops in lock_stats() to be correct.
      
      This patch allows LOCKSTAT_POINTS to be changed without
      affecting the correctness of the code.
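
      A sketch of the invariant this relies on (struct names abbreviated): both
      structures size their arrays from LOCKSTAT_POINTS, so a loop like the one
      in lock_stats() can use the same bound for both.

        #define LOCKSTAT_POINTS 4   /* now changeable in exactly one place */

        struct lock_class_sketch {
                unsigned long contention_point[LOCKSTAT_POINTS];
                unsigned long contending_point[LOCKSTAT_POINTS];
                /* ... */
        };

        struct lock_class_stats_sketch {
                unsigned long contention_point[LOCKSTAT_POINTS];
                unsigned long contending_point[LOCKSTAT_POINTS];
                /* ... */
        };

        static void add_points(struct lock_class_stats_sketch *dst,
                               const struct lock_class_sketch *src)
        {
                int i;

                /* The same constant bounds both arrays, so this stays in range. */
                for (i = 0; i < LOCKSTAT_POINTS; i++) {
                        dst->contention_point[i] += src->contention_point[i];
                        dst->contending_point[i] += src->contending_point[i];
                }
        }
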
      Signed-off-by: George Beshers <gbeshers@sgi.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      68722101
  25. 18 Jun, 2015 1 commit
    • lockdep: Implement lock pinning · a24fc60d
      Peter Zijlstra authored
      Add a lockdep annotation that WARNs if you 'accidentally' unlock a
      lock.
      
      This is especially helpful for code with callbacks, where the upper
      layer assumes a lock remains taken but a lower layer thinks it can
      maybe drop and reacquire the lock.
      
      By unwittingly breaking up the lock, races can be introduced.
      
      Lock pinning is a lockdep annotation that helps with this: when you
      lockdep_pin_lock() a held lock, any unlock without a
      lockdep_unpin_lock() will produce a WARN. Think of this as a relative
      of lockdep_assert_held(), except you don't only assert it's held now,
      but ensure it stays held until you release your assertion.
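
      A quick usage sketch of the annotation as introduced here (do_callbacks()
      is a made-up stand-in for the lower layer):

        static void callback_heavy_path(spinlock_t *lock)
        {
                spin_lock(lock);
                lockdep_pin_lock(lock);         /* from here on, unlocking WARNs */

                do_callbacks();                 /* must not drop 'lock' behind our back */

                lockdep_unpin_lock(lock);       /* assertion released, unlock is fine */
                spin_unlock(lock);
        }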
      
      RFC: a possible alternative API would be something like:
      
        int cookie = lockdep_pin_lock(&foo);
        ...
        lockdep_unpin_lock(&foo, cookie);
      
      Where we pick a random number for the pin_count; this makes it
      impossible to sneak a lock break in without also passing the right
      cookie along.
      
      I've not done this because it ends up generating code for !LOCKDEP,
      esp. if you need to pass the cookie around for some reason.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: ktkhai@parallels.com
      Cc: rostedt@goodmis.org
      Cc: juri.lelli@gmail.com
      Cc: pang.xunlei@linaro.org
      Cc: oleg@redhat.com
      Cc: wanpeng.li@linux.intel.com
      Cc: umgwanakikbuti@gmail.com
      Link: http://lkml.kernel.org/r/20150611124743.906731065@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a24fc60d
  26. 03 Mar, 2015 1 commit
    • rcu: Improve diagnostics for blocked critical sections in irq · d24209bb
      Paul E. McKenney authored
      If an RCU read-side critical section occurs within an interrupt handler
      or a softirq handler, it cannot have been preempted.  Therefore, there is
      a check in rcu_read_unlock_special() for this error.  However,
      when this check triggers, it lacks diagnostic information.  This commit
      therefore moves rcu_read_unlock()'s lockdep annotation to follow the
      call to __rcu_read_unlock() and changes rcu_read_unlock_special()'s
      WARN_ON_ONCE() to a lockdep_rcu_suspicious() in order to locate where
      the offending RCU read-side critical section began.  In addition, the
      value of the ->rcu_read_unlock_special field is printed.
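
      For illustration, the kind of report this enables (the message text and
      wrapper below are made up; lockdep_rcu_suspicious() itself takes a file,
      a line and a message string):

        #include <linux/lockdep.h>
        #include <linux/printk.h>

        static void report_blocked_in_irq(unsigned int rrus_value)
        {
                lockdep_rcu_suspicious(__FILE__, __LINE__,
                        "blocked RCU read-side critical section inside irq/softirq");
                pr_alert("->rcu_read_unlock_special: %#x\n", rrus_value);
        }
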
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      d24209bb
  27. 03 Oct, 2014 1 commit
    • locking/lockdep: Revert qrwlock recusive stuff · 8acd91e8
      Peter Zijlstra authored
      Commit f0bab73c ("locking/lockdep: Restrict the use of recursive
      read_lock() with qrwlock") changed lockdep to try and conform to the
      qrwlock semantics which differ from the traditional rwlock semantics.
      
      In particular qrwlock is fair outside of interrupt context, but in
      interrupt context readers will ignore all fairness.
      
      The problem modeling this is that read and write side have different
      lock state (interrupts) semantics but we only have a single
      representation of these. Therefore lockdep will get confused, thinking
      the lock can cause interrupt lock inversions.
      
      So revert it for now; the old rwlock semantics were already imperfectly
      modeled and the qrwlock extra won't fit either.
      
      If we want to properly fix this, I think we need to resurrect the work
      Gautham did a few years ago that split the read and write state of
      locks:
      
         http://lwn.net/Articles/332801/
      
      FWIW the locking selftest that would've failed (and was reported by
      Borislav earlier) is something like:
      
        RL(X1);	/* IRQ-ON */
        LOCK(A);
        UNLOCK(A);
        RU(X1);
      
        IRQ_ENTER();
        RL(X1);	/* IN-IRQ */
        RU(X1);
        IRQ_EXIT();
      
      At which point it would report that because A is an IRQ-unsafe lock we
      can suffer the following inversion:
      
      	CPU0		CPU1
      
      	lock(A)
      			lock(X1)
      			lock(A)
      	<IRQ>
      	 lock(X1)
      
      And this is 'wrong' because X1 can recurse (assuming the above locks are
      in fact read-locks) but lockdep doesn't know about this.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: ego@linux.vnet.ibm.com
      Cc: bp@alien8.de
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/20140930132600.GA7444@worktop.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8acd91e8
  28. 24 Sep, 2014 1 commit
  29. 18 Sep, 2014 1 commit
    • rcu: Eliminate deadlock between CPU hotplug and expedited grace periods · dd56af42
      Paul E. McKenney authored
      Currently, the expedited grace-period primitives do get_online_cpus().
      This greatly simplifies their implementation, but means that calls
      to them holding locks that are acquired by CPU-hotplug notifiers (to
      say nothing of calls to these primitives from CPU-hotplug notifiers)
      can deadlock.  But this is starting to become inconvenient, as can be
      seen here: https://lkml.org/lkml/2014/8/5/754.  The problem in this
      case is that some developers need to acquire a mutex from a CPU-hotplug
      notifier, but also need to hold it across a synchronize_rcu_expedited().
      As noted above, this currently results in deadlock.
      
      This commit avoids the deadlock and retains the simplicity by creating
      a try_get_online_cpus(), which returns false if the get_online_cpus()
      reference count could not immediately be incremented.  If a call to
      try_get_online_cpus() returns true, the expedited primitives operate as
      before.  If a call returns false, the expedited primitives fall back to
      normal grace-period operations.  This falling back of course results in
      increased grace-period latency, but only during times when CPU hotplug
      operations are actually in flight.  The effect should therefore be
      negligible during normal operation.
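
      A sketch of the fallback pattern described above (the expedited machinery
      itself is elided; falling back simply means using the normal primitive):

        static void synchronize_rcu_expedited_sketch(void)
        {
                if (!try_get_online_cpus()) {
                        /* A CPU-hotplug operation is in flight: fall back. */
                        synchronize_rcu();      /* normal grace period */
                        return;
                }

                /* ... force an expedited grace period here ... */

                put_online_cpus();
        }
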
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Tested-by: Lan Tianyu <tianyu.lan@intel.com>
      dd56af42
  30. 13 Aug, 2014 2 commits
    • locking/lockdep: Restrict the use of recursive read_lock() with qrwlock · f0bab73c
      Waiman Long authored
      Unlike the original unfair rwlock implementation, a queued rwlock
      grants the lock according to the chronological sequence of the lock
      requests, except when the lock requester is in interrupt context.
      Consequently, recursive read_lock calls will now hang the process if
      there is a write_lock call somewhere in between the read_lock calls.
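
      A minimal illustration of the pattern that now deadlocks in process
      context (the lock name is made up):

        #include <linux/spinlock.h>

        static DEFINE_RWLOCK(my_rwlock);

        static void cpu0_path(void)
        {
                read_lock(&my_rwlock);
                /* CPU1 calls write_lock(&my_rwlock) here and starts queueing... */
                read_lock(&my_rwlock);  /* queued behind the writer -> deadlock */
                read_unlock(&my_rwlock);
                read_unlock(&my_rwlock);
        }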
      
      This patch updates the lockdep implementation to look for recursive
      read_lock calls. A new read state (3) is used to mark those read_lock
      calls that cannot be recursive except in interrupt
      context. The new read state exhausts the 2 bits available in the
      held_lock::read bit field. The addition of any new read state in the
      future may require a redesign of how all those bits are squeezed
      together in the held_lock structure.
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1407345722-61615-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f0bab73c
    • locking/Documentation: Move locking related docs into Documentation/locking/ · 214e0aed
      Davidlohr Bueso authored
      Specifically:
        Documentation/locking/lockdep-design.txt
        Documentation/locking/lockstat.txt
        Documentation/locking/mutex-design.txt
        Documentation/locking/rt-mutex-design.txt
        Documentation/locking/rt-mutex.txt
        Documentation/locking/spinlocks.txt
        Documentation/locking/ww-mutex-design.txt
      Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: jason.low2@hp.com
      Cc: aswin@hp.com
      Cc: Alexei Starovoitov <ast@plumgrid.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Davidlohr Bueso <davidlohr@hp.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jason Low <jason.low2@hp.com>
      Cc: Josef Bacik <jbacik@fusionio.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Lubomir Rintel <lkundrak@v3.sk>
      Cc: Masanari Iida <standby24x7@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: fengguang.wu@intel.com
      Link: http://lkml.kernel.org/r/1406752916-3341-6-git-send-email-davidlohr@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      214e0aed
  31. 14 Feb, 2014 1 commit
  32. 09 Feb, 2014 2 commits
    • lockdep: Change lockdep_set_novalidate_class() to use _and_name · 47be1c1a
      Oleg Nesterov authored
      Cosmetic. This doesn't really matter because a) device->mutex is
      the only user of __lockdep_no_validate__ and b) this class should
      never be reported as the source of a problem, but if something goes
      wrong "&dev->mutex" looks better than "&__lockdep_no_validate__"
      as the name of the lock.
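
      A sketch of the change the title implies (the 'before' form is an
      assumption; the point is switching to the _and_name variant so the lock
      is reported under its own name):

        #include <linux/lockdep.h>

        /*
         * Before (assumed): reported under the shared novalidate class name.
         *
         *   #define lockdep_set_novalidate_class(lock) \
         *           lockdep_set_class(lock, &__lockdep_no_validate__)
         */

        /* After: same class, but the report shows e.g. "&dev->mutex". */
        #define lockdep_set_novalidate_class(lock) \
                lockdep_set_class_and_name(lock, &__lockdep_no_validate__, #lock)
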
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182016.GA26512@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      47be1c1a
    • lockdep: Make held_lock->check and "int check" argument bool · fb9edbe9
      Oleg Nesterov authored
      The "int check" argument of lock_acquire() and held_lock->check are
      misleading. This is actually a boolean: 2 means "true", everything
      else is "false".
      
      And there is no need to pass 1 or 0 to lock_acquire() depending on
      CONFIG_PROVE_LOCKING: __lock_acquire() checks prove_locking at the
      start and clears "check" if !CONFIG_PROVE_LOCKING.
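
      For reference, roughly the acquire hook being discussed, with the 'check'
      argument in question (the wrapper below is a made-up example of a caller
      passing a boolean-style value):

        #include <linux/lockdep.h>
        #include <linux/kernel.h>

        extern void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
                                 int trylock, int read, int check,
                                 struct lockdep_map *nest_lock, unsigned long ip);

        /* Example caller: 'check' is effectively a boolean. */
        static inline void my_map_acquire(struct lockdep_map *map)
        {
                lock_acquire(map, 0, 0, 0, 1, NULL, _RET_IP_);  /* check == true */
        }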
      
      Note: probably we can simply kill this member/arg. The only explicit
      user of check => 0 is rcu_lock_acquire(), perhaps we can change it to
      use lock_acquire(trylock =>, read => 2). __lockdep_no_validate means
      check => 0 implicitly, but we can change validate_chain() to check
      hlock->instance->key instead. Not to mention it would be nice to get
      rid of lockdep_set_novalidate_class().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140120182006.GA26495@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fb9edbe9
  33. 06 Nov, 2013 1 commit
  34. 12 Jul, 2013 1 commit