1. 05 Apr, 2019 2 commits
  2. 03 Apr, 2019 1 commit
  3. 01 Mar, 2019 1 commit
    • kasan: turn off asan-stack for clang-8 and earlier · 6baec880
      Arnd Bergmann authored
      Building an arm64 allmodconfig kernel with clang results in over 140
      warnings about overly large stack frames, the worst ones being:
      
        drivers/gpu/drm/panel/panel-sitronix-st7789v.c:196:12: error: stack frame size of 20224 bytes in function 'st7789v_prepare'
        drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td028ttec1.c:196:12: error: stack frame size of 13120 bytes in function 'td028ttec1_panel_enable'
        drivers/usb/host/max3421-hcd.c:1395:1: error: stack frame size of 10048 bytes in function 'max3421_spi_thread'
        drivers/net/wan/slic_ds26522.c:209:12: error: stack frame size of 9664 bytes in function 'slic_ds26522_probe'
        drivers/crypto/ccp/ccp-ops.c:2434:5: error: stack frame size of 8832 bytes in function 'ccp_run_cmd'
        drivers/media/dvb-frontends/stv0367.c:1005:12: error: stack frame size of 7840 bytes in function 'stv0367ter_algo'
      
      None of these happen with gcc today, and almost all of these are the
      result of a single known issue in llvm.  Hopefully it will eventually
      get fixed with the clang-9 release.
      
      In the meantime, the best idea I have is to turn off asan-stack for
      clang-8 and earlier, so we can produce a kernel that is safe to run.
      
      I have posted three patches that address the frame overflow warnings
      that are not addressed by turning off asan-stack, so in combination with
      this change, we get much closer to a clean allmodconfig build, which in
      turn is necessary to do meaningful build regression testing.
      
      It is still possible to turn on the CONFIG_ASAN_STACK option on all
      versions of clang, and it's always enabled for gcc, but when
      CONFIG_COMPILE_TEST is set, the option remains invisible, so
      allmodconfig and randconfig builds (which are normally done with a
      forced CONFIG_COMPILE_TEST) will still result in a mostly clean build.
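      A hedged sketch of what such Kconfig gating can look like (illustrative
      only; the exact symbol names and version check are assumptions, not the
      upstream change):

```
# Sketch: keep stack instrumentation selectable for gcc and new clang,
# but hide the prompt under COMPILE_TEST so that allmodconfig and
# randconfig builds stay mostly free of frame-size warnings.
config ASAN_STACK
	bool "Enable stack instrumentation (unsafe stack)" if CC_IS_CLANG && !COMPILE_TEST
	depends on KASAN
	default y if CC_IS_GCC || CLANG_VERSION >= 90000
```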
      
      Link: http://lkml.kernel.org/r/20190222222950.3997333-1-arnd@arndb.de
      Link: https://bugs.llvm.org/show_bug.cgi?id=38809
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Qian Cai <cai@lca.pw>
      Reviewed-by: Mark Brown <broonie@kernel.org>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6baec880
  4. 15 Feb, 2019 2 commits
    • assoc_array: Fix shortcut creation · bb2ba2d7
      David Howells authored
      Fix the creation of shortcuts for which the length of the index key value
      is an exact multiple of the machine word size.  The problem is that the
      code that blanks off the unused bits of the shortcut value malfunctions if
      the number of bits in the last word equals the machine word size.  This is due
      to the "<<" operator being given a shift of zero in this case, and so the
      mask that should be all zeros is all ones instead.  This causes the
      subsequent masking operation to clear everything rather than clearing
      nothing.
      
      Ordinarily, the presence of the hash at the beginning of the tree index key
      makes the issue very hard to test for, but in this case, it was encountered
      due to a development mistake that caused the hash output to be either 0
      (keyring) or 1 (non-keyring) only.  This made it susceptible to the
      keyctl/unlink/valid test in the keyutils package.
      
      The fix is simply to skip the blanking if the shift would be 0.  For
      example, an index key that is 64 bits long would produce a 0 shift and thus
      a 'blank' of all 1s.  This would then be inverted and AND'd onto the
      index_key, incorrectly clearing the entire last word.
      
      Fixes: 3cb98950 ("Add a generic associative array implementation.")
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: James Morris <james.morris@microsoft.com>
      bb2ba2d7
    • lib/crc32.c: mark crc32_le_base/__crc32c_le_base aliases as __pure · ff98e20e
      Miguel Ojeda authored
      The upcoming GCC 9 release extends the -Wmissing-attributes warnings
      (enabled by -Wall) to C and aliases: it warns when particular function
      attributes are missing in the aliases but not in their target.
      
      In particular, it triggers here because crc32_le_base/__crc32c_le_base
      aren't __pure while their target crc32_le/__crc32c_le are.
      
      These aliases are used by architectures as a fallback in accelerated
      versions of CRC32. See commit 9784d82d ("lib/crc32: make core crc32()
      routines weak so they can be overridden").
      
      Therefore, being fallbacks, it is likely that even if the aliases
      were called from C, there wouldn't be any optimizations possible.
      Currently, the only user is arm64, which calls this from asm.
      
      Still, marking the aliases as __pure makes sense, is a good idea
      for documentation purposes and possible future optimizations,
      and also silences the warning.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
      ff98e20e
  5. 01 Feb, 2019 1 commit
  6. 31 Jan, 2019 1 commit
    • lib/test_rhashtable: Make test_insert_dup() allocate its hash table dynamically · fc42a689
      Bart Van Assche authored
      The test_insert_dup() function from lib/test_rhashtable.c passes a
      pointer to a stack object to rhltable_init().  Allocate the hash table
      dynamically so that the following is no longer reported with object
      debugging enabled:
      
      ODEBUG: object (ptrval) is on stack (ptrval), but NOT annotated.
      WARNING: CPU: 0 PID: 1 at lib/debugobjects.c:368 __debug_object_init+0x312/0x480
      Modules linked in:
      EIP: __debug_object_init+0x312/0x480
      Call Trace:
       ? debug_object_init+0x1a/0x20
       ? __init_work+0x16/0x30
       ? rhashtable_init+0x1e1/0x460
       ? sched_clock_cpu+0x57/0xe0
       ? rhltable_init+0xb/0x20
       ? test_insert_dup+0x32/0x20f
       ? trace_hardirqs_on+0x38/0xf0
       ? ida_dump+0x10/0x10
       ? jhash+0x130/0x130
       ? my_hashfn+0x30/0x30
       ? test_rht_init+0x6aa/0xab4
       ? ida_dump+0x10/0x10
       ? test_rhltable+0xc5c/0xc5c
       ? do_one_initcall+0x67/0x28e
       ? trace_hardirqs_off+0x22/0xe0
       ? restore_all_kernel+0xf/0x70
       ? trace_hardirqs_on_thunk+0xc/0x10
       ? restore_all_kernel+0xf/0x70
       ? kernel_init_freeable+0x142/0x213
       ? rest_init+0x230/0x230
       ? kernel_init+0x10/0x110
       ? schedule_tail_wrapper+0x9/0xc
       ? ret_from_fork+0x19/0x24
      
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fc42a689
  7. 20 Jan, 2019 1 commit
    • fix int_sqrt64() for very large numbers · fbfaf851
      Florian La Roche authored
      If an input number x for int_sqrt64() has the highest bit set, then
      fls64(x) is 64.  (1UL << 64) is an overflow and breaks the algorithm.
      
      Subtracting 1 is a better guess for the initial value of m anyway, and
      that is also what int_sqrt() does implicitly [*].
      
      [*] Note how int_sqrt() uses __fls() with two underscores, which already
          returns the proper raw bit number.
      
          In contrast, int_sqrt64() used fls64(), and that returns bit numbers
          illogically starting at 1, because of error handling for the "no
          bits set" case.  Will points out that the bug is probably due to a
          copy-and-paste error from the regular int_sqrt() case.
      Signed-off-by: Florian La Roche <Florian.LaRoche@googlemail.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fbfaf851
  8. 15 Jan, 2019 1 commit
    • sbitmap: Protect swap_lock from hardirq · fe76fc6a
      Ming Lei authored
      Because we may call blk_mq_get_driver_tag() directly from
      blk_mq_dispatch_rq_list() without holding any lock, a hardirq may
      come in between and trigger the deadlock described above.
      
      Commit ab53dcfb3e7b ("sbitmap: Protect swap_lock from hardirq") tried to
      fix this issue by using 'spin_lock_bh', which isn't enough because we
      complete requests from hardirq context directly in the multiqueue case.
      
      Cc: Clark Williams <williams@redhat.com>
      Fixes: ab53dcfb3e7b ("sbitmap: Protect swap_lock from hardirq")
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Ming Lei <ming.lei@redhat.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe76fc6a
  9. 14 Jan, 2019 2 commits
  10. 07 Jan, 2019 5 commits
    • XArray: Honour reserved entries in xa_insert · b0606fed
      Matthew Wilcox authored
      xa_insert() should treat reserved entries as occupied, not as available.
      Also, it should treat requests to insert a NULL pointer as a request
      to reserve the slot.  Add xa_insert_bh() and xa_insert_irq() for
      completeness.
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      b0606fed
    • XArray: Permit storing 2-byte-aligned pointers · 76b4e529
      Matthew Wilcox authored
      On m68k, statically allocated pointers may only be two-byte aligned.
      This clashes with the XArray's method for tagging internal pointers.
      Permit storing these pointers in single slots (ie not in multislots).
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      76b4e529
    • XArray: Change xa_for_each iterator · 4a31896c
      Matthew Wilcox authored
      There were three problems with this API:
      1. It took too many arguments; almost all users wanted to iterate over
      every element in the array rather than a subset.
      2. It required that 'index' be initialised before use, and there's no
      realistic way to make GCC catch that.
      3. 'index' and 'entry' were the opposite way round from every other
      member of the XArray APIs.
      
      So split it into three different APIs:
      
      xa_for_each(xa, index, entry)
      xa_for_each_start(xa, index, entry, start)
      xa_for_each_marked(xa, index, entry, filter)
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      4a31896c
    • XArray: Turn xa_init_flags into a static inline · 02669b17
      Matthew Wilcox authored
      A regular xa_init_flags() put all dynamically-initialised XArrays into
      the same locking class.  That leads to lockdep believing that taking
      one XArray lock while holding another is a deadlock.  It's possible to
      work around some of these situations with separate locking classes for
      irq/bh/regular XArrays, and SINGLE_DEPTH_NESTING, but that's ugly, and
      it doesn't work for all situations (where we have completely unrelated
      XArrays).
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      02669b17
    • XArray tests: Add RCU locking · 490fd30f
      Matthew Wilcox authored
      0day picked up that I'd forgotten to add locking to this new test.
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      490fd30f
  11. 06 Jan, 2019 2 commits
  12. 05 Jan, 2019 1 commit
    • lib/genalloc.c: include vmalloc.h · 35004f2e
      Olof Johansson authored
      Fixes build break on most ARM/ARM64 defconfigs:
      
        lib/genalloc.c: In function 'gen_pool_add_virt':
        lib/genalloc.c:190:10: error: implicit declaration of function 'vzalloc_node'; did you mean 'kzalloc_node'?
        lib/genalloc.c:190:8: warning: assignment to 'struct gen_pool_chunk *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
        lib/genalloc.c: In function 'gen_pool_destroy':
        lib/genalloc.c:254:3: error: implicit declaration of function 'vfree'; did you mean 'kfree'?
      
      Fixes: 6862d2fc ('lib/genalloc.c: use vzalloc_node() to allocate the bitmap')
      Cc: Huang Shijie <sjhuang@iluvatar.ai>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Alexey Skidanov <alexey.skidanov@intel.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      35004f2e
  13. 04 Jan, 2019 5 commits
    • lib/genalloc.c: use vzalloc_node() to allocate the bitmap · 6862d2fc
      Huang Shijie authored
      Some devices may have a big on-chip memory, such as over 1G.  In some
      cases, nbytes may be bigger than 4M, which is the boundary of the
      memory buddy system (with the default 4K page size).
      
      So use vzalloc_node() to allocate the bitmap.  Also use vfree to free
      it.
      
      Link: http://lkml.kernel.org/r/20181225015701.6289-1-sjhuang@iluvatar.ai
      Signed-off-by: Huang Shijie <sjhuang@iluvatar.ai>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Alexey Skidanov <alexey.skidanov@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6862d2fc
    • lib/find_bit_benchmark.c: align test_find_next_and_bit with others · 439e00b7
      Yury Norov authored
      Contrary to the other tests, the test_find_next_and_bit() test uses tab
      formatting in its output and get_cycles() instead of ktime_get().
      get_cycles() is not supported on some arches, so ktime_get() fits better
      in generic code.
      
      Fix it and minor style issues, so the output looks like this:
      
      Start testing find_bit() with random-filled bitmap
      find_next_bit:                 7142816 ns, 163282 iterations
      find_next_zero_bit:            8545712 ns, 164399 iterations
      find_last_bit:                 6332032 ns, 163282 iterations
      find_first_bit:               20509424 ns,  16606 iterations
      find_next_and_bit:             4060016 ns,  73424 iterations
      
      Start testing find_bit() with sparse bitmap
      find_next_bit:                   55984 ns,    656 iterations
      find_next_zero_bit:           19197536 ns, 327025 iterations
      find_last_bit:                   65088 ns,    656 iterations
      find_first_bit:                5923712 ns,    656 iterations
      find_next_and_bit:               29088 ns,      1 iterations
      
      Link: http://lkml.kernel.org/r/20181123174803.10916-1-ynorov@caviumnetworks.com
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: "Norov, Yuri" <Yuri.Norov@cavium.com>
      Cc: Clement Courbet <courbet@google.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      439e00b7
    • lib/genalloc.c: fix allocation of aligned buffer from non-aligned chunk · 52fbf113
      Alexey Skidanov authored
      gen_pool_alloc_algo() uses different allocation functions implementing
      different allocation algorithms.  With gen_pool_first_fit_align()
      allocation function, the returned address should be aligned on the
      requested boundary.
      
      If the chunk start address isn't aligned on the requested boundary, the
      returned address isn't aligned either.  The only way to get a properly
      aligned address is to initialize the pool with chunks aligned on the
      requested boundary.  If we want to be able to allocate buffers aligned
      on different boundaries (for example, 4K, 1MB, ...), the chunk start
      address should be aligned on the maximum possible alignment.
      
      This happens because gen_pool_first_fit_align() looks for properly
      aligned memory block without taking into account the chunk start address
      alignment.
      
      To fix this, we provide chunk start address to
      gen_pool_first_fit_align() and change its implementation such that it
      starts looking for properly aligned block with appropriate offset
      (exactly as is done in CMA).
      
      Link: https://lkml.kernel.org/lkml/a170cf65-6884-3592-1de9-4c235888cc8a@intel.com
      Link: http://lkml.kernel.org/r/1541690953-4623-1-git-send-email-alexey.skidanov@intel.com
      Signed-off-by: Alexey Skidanov <alexey.skidanov@intel.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Logan Gunthorpe <logang@deltatee.com>
      Cc: Daniel Mentz <danielmentz@google.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      52fbf113
    • make 'user_access_begin()' do 'access_ok()' · 594cc251
      Linus Torvalds authored
      Originally, the rule used to be that you'd have to do access_ok()
      separately, and then user_access_begin() before actually doing the
      direct (optimized) user access.
      
      But experience has shown that people then decide not to do access_ok()
      at all, and instead rely on it being implied by other operations or
      similar.  Which makes it very hard to verify that the access has
      actually been range-checked.
      
      If you use the unsafe direct user accesses, hardware features (either
      SMAP - Supervisor Mode Access Protection - on x86, or PAN - Privileged
      Access Never - on ARM) do force you to use user_access_begin().  But
      nothing really forces the range check.
      
      By putting the range check into user_access_begin(), we actually force
      people to do the right thing (tm), and the range check will be visible
      near the actual accesses.  We have way too long a history of people
      trying to avoid them.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      594cc251
    • Remove 'type' argument from access_ok() function · 96d4f267
      Linus Torvalds authored
      Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
      of the user address range verification function since we got rid of the
      old racy i386-only code to walk page tables by hand.
      
      It existed because the original 80386 would not honor the write protect
      bit when in kernel mode, so you had to do COW by hand before doing any
      user access.  But we haven't supported that in a long time, and these
      days the 'type' argument is a purely historical artifact.
      
      A discussion about extending 'user_access_begin()' to do the range
      checking resulted in this patch, because there is no way we're going to
      move the old VERIFY_xyz interface to that model.  And it's best done at
      the end of the merge window when I've done most of my merges, so let's
      just get this done once and for all.
      
      This patch was mostly done with a sed-script, with manual fix-ups for
      the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.
      
      There were a couple of notable cases:
      
       - csky still had the old "verify_area()" name as an alias.
      
       - the iter_iov code had magical hardcoded knowledge of the actual
         values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
         really used it)
      
       - microblaze used the type argument for a debug printout
      
      but other than those oddities this should be a total no-op patch.
      
      I tried to fix up all architectures, did fairly extensive grepping for
      access_ok() uses, and the changes are trivial, but I may have missed
      something.  Any missed conversion should be trivially fixable, though.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      96d4f267
  14. 29 Dec, 2018 1 commit
  15. 28 Dec, 2018 8 commits
  16. 22 Dec, 2018 2 commits
  17. 21 Dec, 2018 1 commit
  18. 20 Dec, 2018 3 commits