1. 30 Dec, 2018 5 commits
    • kdb: use bool for binary state indicators · 7faedcd4
      Nicholas Mc Guire authored
      defcmd_in_progress is the state trace for command group processing
      (within a command group or not), and usable indicates whether a command
      set is valid (allocated/non-empty), so use a bool for these binary
      indicators here.
      Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
      Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
    • kdb: Don't back trace on a cpu that didn't round up · 162bc7f5
      Douglas Anderson authored
      If you have a CPU that fails to round up and then run 'btc' you'll end
      up crashing in kdb because we dereferenced NULL.  Let's add a check.
      It's wise to also set the task to NULL when leaving the debugger so
      that if we fail to round up on a later entry into the debugger we
      won't backtrace a stale task.
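
      A minimal sketch of the two pieces (structure and placement are
      illustrative, not the exact diff):

              /* in the 'btc' loop: skip a CPU that never checked in */
              if (!KDB_TSK(cpu)) {
                      kdb_printf("WARNING: no task for cpu %ld\n", cpu);
                      continue;
              }

              /* when leaving the debugger: don't keep a stale task around */
              kgdb_info[cpu].task = NULL;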
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Acked-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
    • kgdb: Don't round up a CPU that failed rounding up before · 87b09592
      Douglas Anderson authored
      If we're using the default implementation of kgdb_roundup_cpus() that
      uses smp_call_function_single_async() we can end up hanging
      kgdb_roundup_cpus() if we try to round up a CPU that failed to round
      up before.
      
      Specifically smp_call_function_single_async() will try to wait on the
      csd lock for the CPU that we're trying to round up.  If the previous
      round up never finished then that lock could still be held and we'll
      just sit there hanging.
      
      There's not a lot of use trying to round up a CPU that failed to round
      up before.  Let's keep a flag that indicates whether the CPU started
      but didn't finish rounding up before.  If we see that flag set then
      we'll skip the next round up.
      
      In general we have a few goals here:
      - We never want to end up calling smp_call_function_single_async()
        when the csd is still locked.  This is accomplished because
        flush_smp_call_function_queue() unlocks the csd _before_ invoking
        the callback.  That means that when kgdb_nmicallback() runs we know
      for sure that the csd is no longer locked.  Thus when we set
        "rounding_up = false" we know for sure that the csd is unlocked.
      - If there are no timeouts rounding up we should never skip a round
        up.
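
      A minimal sketch of the flag handling (the rounding_up field and its
      placement are illustrative):

              /* when asking a CPU to round up */
              if (kgdb_info[cpu].rounding_up)
                      continue;       /* previous round up never finished;
                                         its csd may still be locked */
              kgdb_info[cpu].rounding_up = true;
              smp_call_function_single_async(cpu, csd);

              /* in kgdb_nmicallback(), i.e. after the csd has been unlocked */
              kgdb_info[cpu].rounding_up = false;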
      
      NOTE #1: In general trying to continue running after failing to round
      up CPUs doesn't appear to be supported in the debugger.  When I
      simulate this I find that kdb reports "Catastrophic error detected"
      when I try to continue.  I can overrule and continue anyway, but it
      should be noted that we may be entering the land of dragons here.
      Possibly the "Catastrophic error detected" was added _because_ of the
      future failure to round up, but even so this is an area of the code
      that hasn't been strongly tested.
      
      NOTE #2: I did a bit of testing before and after this change.  I
      introduced a 10 second hang in the kernel while holding a spinlock
      that I could invoke on a certain CPU with 'taskset -c 3 cat /sys/...'.
      
      Before this change if I did:
      - Invoke hang
      - Enter debugger
      - g (which warns about Catastrophic error, g again to go anyway)
      - g
      - Enter debugger
      
      ...I'd hang the rest of the 10 seconds without getting a debugger
      prompt.  After this change I end up in the debugger the 2nd time after
      only 1 second with the standard warning about 'Timed out waiting for
      secondary CPUs.'
      
      I'll also note that once the CPU finished waiting I could actually
      debug it (aka "btc" worked).
      
      I won't promise that everything works perfectly if the errant CPU
      comes back at just the wrong time (like as we're entering or exiting
      the debugger) but it certainly seems like an improvement.
      
      NOTE #3: setting 'kgdb_info[cpu].rounding_up = false' is in
      kgdb_nmicallback() instead of kgdb_call_nmi_hook() because some
      implementations override kgdb_call_nmi_hook().  It shouldn't hurt to
      have it in kgdb_nmicallback() in any case.
      
      NOTE #4: this logic is really only needed because there is no API call
      like "smp_try_call_function_single_async()" or "smp_csd_is_locked()".
      If such an API existed then we'd use it instead, but it seemed a bit
      much to add an API like this just for kgdb.
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Acked-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
    • kgdb: Fix kgdb_roundup_cpus() for arches who used smp_call_function() · 3cd99ac3
      Douglas Anderson authored
      When I had lockdep turned on and dropped into kgdb I got a nice splat
      on my system.  Specifically it hit:
        DEBUG_LOCKS_WARN_ON(current->hardirq_context)
      
      Specifically it looked like this:
        sysrq: SysRq : DEBUG
        ------------[ cut here ]------------
        DEBUG_LOCKS_WARN_ON(current->hardirq_context)
        WARNING: CPU: 0 PID: 0 at .../kernel/locking/lockdep.c:2875 lockdep_hardirqs_on+0xf0/0x160
        CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.19.0 #27
        pstate: 604003c9 (nZCv DAIF +PAN -UAO)
        pc : lockdep_hardirqs_on+0xf0/0x160
        ...
        Call trace:
         lockdep_hardirqs_on+0xf0/0x160
         trace_hardirqs_on+0x188/0x1ac
         kgdb_roundup_cpus+0x14/0x3c
         kgdb_cpu_enter+0x53c/0x5cc
         kgdb_handle_exception+0x180/0x1d4
         kgdb_compiled_brk_fn+0x30/0x3c
         brk_handler+0x134/0x178
         do_debug_exception+0xfc/0x178
         el1_dbg+0x18/0x78
         kgdb_breakpoint+0x34/0x58
         sysrq_handle_dbg+0x54/0x5c
         __handle_sysrq+0x114/0x21c
         handle_sysrq+0x30/0x3c
         qcom_geni_serial_isr+0x2dc/0x30c
        ...
        ...
        irq event stamp: ...45
        hardirqs last  enabled at (...44): [...] __do_softirq+0xd8/0x4e4
        hardirqs last disabled at (...45): [...] el1_irq+0x74/0x130
        softirqs last  enabled at (...42): [...] _local_bh_enable+0x2c/0x34
        softirqs last disabled at (...43): [...] irq_exit+0xa8/0x100
        ---[ end trace adf21f830c46e638 ]---
      
      Looking closely at it, it seems like a really bad idea to be calling
      local_irq_enable() in kgdb_roundup_cpus().  If nothing else that seems
      like it could violate spinlock semantics and cause a deadlock.
      
      Instead, let's use a private csd alongside
      smp_call_function_single_async() to round up the other CPUs.  Using
      smp_call_function_single_async() doesn't require interrupts to be
      enabled so we can remove the offending bit of code.
      
      In order to avoid duplicating this across all the architectures that
      use the default kgdb_roundup_cpus(), we'll add a "weak" implementation
      to debug_core.c.
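
      A minimal sketch of that weak default, assuming a per-cpu csd (details
      are illustrative):

              static DEFINE_PER_CPU(call_single_data_t, kgdb_roundup_csd);

              void __weak kgdb_roundup_cpus(void)
              {
                      call_single_data_t *csd;
                      int this_cpu = raw_smp_processor_id();
                      int cpu;

                      for_each_online_cpu(cpu) {
                              if (cpu == this_cpu)
                                      continue;
                              csd = &per_cpu(kgdb_roundup_csd, cpu);
                              csd->func = kgdb_call_nmi_hook;
                              smp_call_function_single_async(cpu, csd);
                      }
              }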
      
      Looking at all the people who previously had copies of this code,
      there were a few variants.  I've attempted to keep the variants
      working like they used to.  Specifically:
      * For arch/arc we passed NULL to kgdb_nmicallback() instead of
        get_irq_regs().
      * For arch/mips there was a bit of extra code around
        kgdb_nmicallback()
      
      NOTE: In this patch we will still get into trouble if we try to round
      up a CPU that failed to round up before.  We'll try to round it up
      again and potentially hang when we try to grab the csd lock.  That's
      not new behavior but we'll still try to do better in a future patch.
      Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
    • kgdb: Remove irq flags from roundup · 9ef7fa50
      Douglas Anderson authored
      The function kgdb_roundup_cpus() was passed a parameter that was
      documented as:
      
      > the flags that will be used when restoring the interrupts. There is
      > local_irq_save() call before kgdb_roundup_cpus().
      
      Nobody used those flags.  Anyone who wanted to temporarily turn on
      interrupts just did local_irq_enable() and local_irq_disable() without
      looking at them.  So we can definitely remove the flags.
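
      In effect, the prototype (shown here for illustration) goes from:

              extern void kgdb_roundup_cpus(unsigned long flags);

      to:

              extern void kgdb_roundup_cpus(void);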
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
  2. 13 Nov, 2018 6 commits
  3. 26 Oct, 2018 1 commit
  4. 12 Jun, 2018 2 commits
    • treewide: kzalloc() -> kcalloc() · 6396bb22
      Kees Cook authored
      The kzalloc() function has a 2-factor argument form, kcalloc(). This
      patch replaces cases of:
      
              kzalloc(a * b, gfp)
      
      with:
              kcalloc(a, b, gfp)
      
      as well as handling cases of:
      
              kzalloc(a * b * c, gfp)
      
      with:
      
              kzalloc(array3_size(a, b, c), gfp)
      
      as it's slightly less ugly than:
      
              kzalloc_array(array_size(a, b), c, gfp)
      
      This does, however, attempt to ignore constant size factors like:
      
              kzalloc(4 * 1024, gfp)
      
      though any constants defined via macros get caught up in the conversion.
      
      Any factors with a sizeof() of "unsigned char", "char", and "u8" were
      dropped, since they're redundant.
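
      A typical conversion therefore looks like this (illustrative, not a
      specific hunk from this patch):

              before: buf = kzalloc(count * sizeof(*buf), GFP_KERNEL);
              after:  buf = kcalloc(count, sizeof(*buf), GFP_KERNEL);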
      
      The Coccinelle script used for this was:
      
      // Fix redundant parens around sizeof().
      @@
      type TYPE;
      expression THING, E;
      @@
      
      (
        kzalloc(
      -	(sizeof(TYPE)) * E
      +	sizeof(TYPE) * E
        , ...)
      |
        kzalloc(
      -	(sizeof(THING)) * E
      +	sizeof(THING) * E
        , ...)
      )
      
      // Drop single-byte sizes and redundant parens.
      @@
      expression COUNT;
      typedef u8;
      typedef __u8;
      @@
      
      (
        kzalloc(
      -	sizeof(u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * (COUNT)
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(__u8) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(char) * COUNT
      +	COUNT
        , ...)
      |
        kzalloc(
      -	sizeof(unsigned char) * COUNT
      +	COUNT
        , ...)
      )
      
      // 2-factor product with sizeof(type/expression) and identifier or constant.
      @@
      type TYPE;
      expression THING;
      identifier COUNT_ID;
      constant COUNT_CONST;
      @@
      
      (
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_ID)
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_ID
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * COUNT_CONST
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_ID)
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_ID
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * COUNT_CONST
      +	COUNT_CONST, sizeof(THING)
        , ...)
      )
      
      // 2-factor product, only identifiers.
      @@
      identifier SIZE, COUNT;
      @@
      
      - kzalloc
      + kcalloc
        (
      -	SIZE * COUNT
      +	COUNT, SIZE
        , ...)
      
      // 3-factor product with 1 sizeof(type) or sizeof(expression), with
      // redundant parens removed.
      @@
      expression THING;
      identifier STRIDE, COUNT;
      type TYPE;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kzalloc(
      -	sizeof(THING) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      )
      
      // 3-factor product with 2 sizeof(variable), with redundant parens removed.
      @@
      expression THING1, THING2;
      identifier COUNT;
      type TYPE1, TYPE2;
      @@
      
      (
        kzalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(THING1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      |
        kzalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      )
      
      // 3-factor product, only identifiers, with redundant parens removed.
      @@
      identifier STRIDE, SIZE, COUNT;
      @@
      
      (
        kzalloc(
      -	(COUNT) * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	(COUNT) * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kzalloc(
      -	COUNT * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      )
      
      // Any remaining multi-factor products, first at least 3-factor products,
      // when they're not all constants...
      @@
      expression E1, E2, E3;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(
      -	(E1) * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	(E1) * (E2) * (E3)
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kzalloc(
      -	E1 * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      )
      
      // And then all remaining 2 factors products when they're not all constants,
      // keeping sizeof() as the second factor argument.
      @@
      expression THING, E1, E2;
      type TYPE;
      constant C1, C2, C3;
      @@
      
      (
        kzalloc(sizeof(THING) * C2, ...)
      |
        kzalloc(sizeof(TYPE) * C2, ...)
      |
        kzalloc(C1 * C2 * C3, ...)
      |
        kzalloc(C1 * C2, ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * (E2)
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(TYPE) * E2
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * (E2)
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	sizeof(THING) * E2
      +	E2, sizeof(THING)
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * E2
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	(E1) * (E2)
      +	E1, E2
        , ...)
      |
      - kzalloc
      + kcalloc
        (
      -	E1 * E2
      +	E1, E2
        , ...)
      )
      Signed-off-by: Kees Cook <keescook@chromium.org>
    • treewide: kmalloc() -> kmalloc_array() · 6da2ec56
      Kees Cook authored
      The kmalloc() function has a 2-factor argument form, kmalloc_array(). This
      patch replaces cases of:
      
              kmalloc(a * b, gfp)
      
      with:
              kmalloc_array(a, b, gfp)
      
      as well as handling cases of:
      
              kmalloc(a * b * c, gfp)
      
      with:
      
              kmalloc(array3_size(a, b, c), gfp)
      
      as it's slightly less ugly than:
      
              kmalloc_array(array_size(a, b), c, gfp)
      
      This does, however, attempt to ignore constant size factors like:
      
              kmalloc(4 * 1024, gfp)
      
      though any constants defined via macros get caught up in the conversion.
      
      Any factors with a sizeof() of "unsigned char", "char", and "u8" were
      dropped, since they're redundant.
      
      The tools/ directory was manually excluded, since it has its own
      implementation of kmalloc().
      
      The Coccinelle script used for this was:
      
      // Fix redundant parens around sizeof().
      @@
      type TYPE;
      expression THING, E;
      @@
      
      (
        kmalloc(
      -	(sizeof(TYPE)) * E
      +	sizeof(TYPE) * E
        , ...)
      |
        kmalloc(
      -	(sizeof(THING)) * E
      +	sizeof(THING) * E
        , ...)
      )
      
      // Drop single-byte sizes and redundant parens.
      @@
      expression COUNT;
      typedef u8;
      typedef __u8;
      @@
      
      (
        kmalloc(
      -	sizeof(u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(__u8) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(char) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(unsigned char) * (COUNT)
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(u8) * COUNT
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(__u8) * COUNT
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(char) * COUNT
      +	COUNT
        , ...)
      |
        kmalloc(
      -	sizeof(unsigned char) * COUNT
      +	COUNT
        , ...)
      )
      
      // 2-factor product with sizeof(type/expression) and identifier or constant.
      @@
      type TYPE;
      expression THING;
      identifier COUNT_ID;
      constant COUNT_CONST;
      @@
      
      (
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * (COUNT_ID)
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * COUNT_ID
      +	COUNT_ID, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * COUNT_CONST
      +	COUNT_CONST, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * (COUNT_ID)
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * COUNT_ID
      +	COUNT_ID, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * (COUNT_CONST)
      +	COUNT_CONST, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * COUNT_CONST
      +	COUNT_CONST, sizeof(THING)
        , ...)
      )
      
      // 2-factor product, only identifiers.
      @@
      identifier SIZE, COUNT;
      @@
      
      - kmalloc
      + kmalloc_array
        (
      -	SIZE * COUNT
      +	COUNT, SIZE
        , ...)
      
      // 3-factor product with 1 sizeof(type) or sizeof(expression), with
      // redundant parens removed.
      @@
      expression THING;
      identifier STRIDE, COUNT;
      type TYPE;
      @@
      
      (
        kmalloc(
      -	sizeof(TYPE) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(TYPE))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * (COUNT) * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * (COUNT) * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * COUNT * (STRIDE)
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      |
        kmalloc(
      -	sizeof(THING) * COUNT * STRIDE
      +	array3_size(COUNT, STRIDE, sizeof(THING))
        , ...)
      )
      
      // 3-factor product with 2 sizeof(variable), with redundant parens removed.
      @@
      expression THING1, THING2;
      identifier COUNT;
      type TYPE1, TYPE2;
      @@
      
      (
        kmalloc(
      -	sizeof(TYPE1) * sizeof(TYPE2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
        , ...)
      |
        kmalloc(
      -	sizeof(THING1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kmalloc(
      -	sizeof(THING1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(THING1), sizeof(THING2))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * COUNT
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      |
        kmalloc(
      -	sizeof(TYPE1) * sizeof(THING2) * (COUNT)
      +	array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
        , ...)
      )
      
      // 3-factor product, only identifiers, with redundant parens removed.
      @@
      identifier STRIDE, SIZE, COUNT;
      @@
      
      (
        kmalloc(
      -	(COUNT) * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	(COUNT) * (STRIDE) * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	(COUNT) * STRIDE * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	(COUNT) * (STRIDE) * (SIZE)
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      |
        kmalloc(
      -	COUNT * STRIDE * SIZE
      +	array3_size(COUNT, STRIDE, SIZE)
        , ...)
      )
      
      // Any remaining multi-factor products, first at least 3-factor products,
      // when they're not all constants...
      @@
      expression E1, E2, E3;
      constant C1, C2, C3;
      @@
      
      (
        kmalloc(C1 * C2 * C3, ...)
      |
        kmalloc(
      -	(E1) * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kmalloc(
      -	(E1) * (E2) * E3
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kmalloc(
      -	(E1) * (E2) * (E3)
      +	array3_size(E1, E2, E3)
        , ...)
      |
        kmalloc(
      -	E1 * E2 * E3
      +	array3_size(E1, E2, E3)
        , ...)
      )
      
      // And then all remaining 2 factors products when they're not all constants,
      // keeping sizeof() as the second factor argument.
      @@
      expression THING, E1, E2;
      type TYPE;
      constant C1, C2, C3;
      @@
      
      (
        kmalloc(sizeof(THING) * C2, ...)
      |
        kmalloc(sizeof(TYPE) * C2, ...)
      |
        kmalloc(C1 * C2 * C3, ...)
      |
        kmalloc(C1 * C2, ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * (E2)
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(TYPE) * E2
      +	E2, sizeof(TYPE)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * (E2)
      +	E2, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	sizeof(THING) * E2
      +	E2, sizeof(THING)
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	(E1) * E2
      +	E1, E2
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	(E1) * (E2)
      +	E1, E2
        , ...)
      |
      - kmalloc
      + kmalloc_array
        (
      -	E1 * E2
      +	E1, E2
        , ...)
      )
      Signed-off-by: Kees Cook <keescook@chromium.org>
  5. 05 Feb, 2018 1 commit
    • kdb: use memmove instead of overlapping memcpy · 2cf2f0d5
      Arnd Bergmann authored
      gcc discovered that the memcpy() arguments in kdbnearsym() overlap, so
      we should really use memmove(), which is defined to handle that correctly:
      
      In function 'memcpy',
          inlined from 'kdbnearsym' at /git/arm-soc/kernel/debug/kdb/kdb_support.c:132:4:
      /git/arm-soc/include/linux/string.h:353:9: error: '__builtin_memcpy' accessing 792 bytes at offsets 0 and 8 overlaps 784 bytes at offset 8 [-Werror=restrict]
        return __builtin_memcpy(p, q, size);
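
      The fix amounts to switching the overlapping copy over to memmove(),
      e.g. (illustrative, not the exact hunk):

              before: memcpy(dst, dst + 1, (n - 1) * sizeof(*dst));
              after:  memmove(dst, dst + 1, (n - 1) * sizeof(*dst));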
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
  6. 01 Feb, 2018 1 commit
  7. 25 Jan, 2018 4 commits
    • kdb: bl: don't use tab character in output · 33f765f6
      Randy Dunlap authored
      The "bl" (list breakpoints) command prints a '\t' (tab) character
      in its output, but on a console (video device), that just prints
      some odd graphics character. Instead of printing a tab character,
      just align the output with spaces.
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kdb: drop newline in unknown command output · b0f73bc7
      Randy Dunlap authored
      When an unknown command is entered, kdb prints "Unknown kdb command:"
      and then the unknown text, including the newline character. This
      causes the ending single-quote mark to be printed on the next line
      by itself, so just change the ending newline character to a null
      character (end of string) so that it won't be "printed."
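
      A minimal sketch of the idea (variable names are illustrative):

              cp = strchr(cmdstr, '\n');
              if (cp)
                      *cp = '\0';     /* terminate here instead of printing '\n' */
              kdb_printf("Unknown kdb command: '%s'\n", cmdstr);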
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kdb: make "mdr" command repeat · 1e0ce03b
      Randy Dunlap authored
      The "mdr" command should repeat (continue) when only Enter/Return
      is pressed, so make it do so.
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kdb: use __ktime_get_real_seconds instead of __current_kernel_time · 6909e29f
      Arnd Bergmann authored
      kdb is the only user of the __current_kernel_time() interface, which is
      not y2038 safe and should be removed at some point.
      
      The kdb code also goes to great lengths to print the time in a
      human-readable format from 'struct timespec', again using a non-y2038-safe
      re-implementation of the generic time_to_tm() code.
      
      Using __current_kernel_time() here is necessary since the regular
      accessors that require a sequence lock might hang when called during the
      xtime update. However, this is safe in the particular case since kdb is
      only interested in the tv_sec field that is updated atomically.
      
      In order to make this y2038-safe, I'm converting the code to the generic
      time64_to_tm helper, but that introduces the problem that we have no
      interface like __current_kernel_time() that provides a 64-bit timestamp
      in a lockless, safe and architecture-independent way. I have multiple
      ideas for how to solve that:
      
      - __ktime_get_real_seconds() is lockless, but can return
        incorrect results on 32-bit architectures in the special case that
        we are in the process of changing the time across the epoch, either
        during the timer tick that overflows the seconds in 2038, or while
        calling settimeofday.
      
      - ktime_get_real_fast_ns() would work in this context, but does
        require a call into the clocksource driver to return a high-resolution
        timestamp. This may have undesired side-effects in the debugger,
        since we want to limit the interactions with the rest of the kernel.
      
      - Adding a ktime_get_real_fast_seconds() based on tk_fast_mono
        plus tkr->base_real without the tk_clock_read() delta. Not sure about
        the value of adding yet another interface here.
      
      - Changing the existing ktime_get_real_seconds() to use
        tk_fast_mono on 32-bit architectures rather than xtime_sec.  I think
        this could work, but am not entirely sure if this is an improvement.
      
      I picked the first of those for simplicity here. It's technically
      not correct but probably good enough as the time is only used for the
      debugging output and the race will likely never be hit in practice.
      Another downside is having to move the declaration into a public header
      file.
      
      Let me know if anyone has a different preference.
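
      A minimal sketch of the resulting conversion (illustrative):

              struct tm tm;
              time64_t now = __ktime_get_real_seconds();

              time64_to_tm(now, 0, &tm);
              /* tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday, ... get printed */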
      
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Link: https://patchwork.kernel.org/patch/9775309/
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
  8. 04 Jan, 2018 1 commit
    • signal: Simplify and fix kdb_send_sig · 0b44bf9a
      Eric W. Biederman authored
      - Rename from kdb_send_sig_info to kdb_send_sig
        As there is no meaningful siginfo sent
      
      - Use SEND_SIG_PRIV instead of generating a siginfo for a kdb
        signal.  The generated siginfo had a bogus rationale and was
        not correct in the face of pid namespaces.  SEND_SIG_PRIV
        is simpler and actually correct.
      
      - As the code grabs siglock just send the signal with siglock
        held instead of dropping siglock and attempting to grab it again.
      
      - Move the sig_valid test into kdb_kill where it can generate
        a good error message.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
  9. 06 Dec, 2017 1 commit
  10. 02 Mar, 2017 6 commits
  11. 15 Dec, 2016 4 commits
    • kdb: call vkdb_printf() from vprintk_default() only when wanted · 34aaff40
      Petr Mladek authored
      kdb_trap_printk allows passing normal printk() messages to kdb via
      vkdb_printf().  For example, it is used to get a backtrace using the
      classic show_stack(), see kdb_show_stack().
      
      vkdb_printf() tries to avoid a potential infinite loop by disabling the
      trap.  But this approach is racy, for example:
      
      CPU1					CPU2
      
      vkdb_printf()
        // assume that kdb_trap_printk == 0
        saved_trap_printk = kdb_trap_printk;
        kdb_trap_printk = 0;
      
      					kdb_show_stack()
      					  kdb_trap_printk++;
      
      Problem1: Now, a nested printk() on CPU1 calls vkdb_printf()
      	  even when it should have been disabled. It will not
      	  cause a deadlock but...
      
         // using the outdated saved value: 0
         kdb_trap_printk = saved_trap_printk;
      
      					  kdb_trap_printk--;
      
      Problem2: Now, kdb_trap_printk == -1 and will stay like this.
         It means that all messages will get passed to kdb from
         now on.
      
      This patch removes the racy saved_trap_printk handling.  Instead, the
      recursion is prevented by a check for the locked CPU.
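
      A minimal sketch of the check that replaces the saved_trap_printk
      dance (its placement in the printk path is illustrative):

              if (unlikely(kdb_trap_printk && kdb_printf_cpu < 0))
                      return vkdb_printf(KDB_MSGSRC_PRINTK, fmt, args);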
      
      The solution is still kind of racy.  A non-related printk(), from
      another process, might get trapped by vkdb_printf().  And the wanted
      printk() might not get trapped because kdb_printf_cpu is assigned.  But
      this problem existed even with the original code.
      
      A proper solution would be to get_cpu() before setting kdb_trap_printk
      and trap messages only from this CPU.  I am not sure if it is worth the
      effort, though.
      
      In fact, the race is very theoretical.  When kdb is running any of the
      commands that use kdb_trap_printk there is a single active CPU and the
      other CPUs should be in a holding pen inside kgdb_cpu_enter().
      
      The only time this is violated is when there is a timeout waiting for
      the other CPUs to report to the holding pen.
      
      Finally, note that the situation is a bit schizophrenic.  vkdb_printf()
      explicitly allows recursion but only from KDB code that calls
      kdb_printf() directly.  On the other hand, the generic printk()
      recursion is not allowed because it might cause an infinite loop.  This
      is why we could not hide the decision inside vkdb_printf() easily.
      
      Link: http://lkml.kernel.org/r/1480412276-16690-4-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Cc: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kdb: properly synchronize vkdb_printf() calls with other CPUs · d5d8d3d0
      Petr Mladek authored
      kdb_printf_lock does not prevent other CPUs from entering the critical
      section because it is ignored when KDB_STATE_PRINTF_LOCK is set.
      
      The problematic situation might look like:
      
      CPU0					CPU1
      
      vkdb_printf()
        if (!KDB_STATE(PRINTF_LOCK))
          KDB_STATE_SET(PRINTF_LOCK);
          spin_lock_irqsave(&kdb_printf_lock, flags);
      
      					vkdb_printf()
      					  if (!KDB_STATE(PRINTF_LOCK))
      
      BANG: The PRINTF_LOCK state is set and CPU1 is entering the critical
      section without spinning on the lock.
      
      The problem is that the code tries to implement locking using two state
      variables that are not handled atomically.  Well, we need a custom
      locking because we want to allow reentering the critical section on the
      very same CPU.
      
      Let's use the solution from Peter Zijlstra that was proposed for a similar
      scenario, see
      https://lkml.kernel.org/r/20161018171513.734367391@infradead.org
      
      This patch uses the same trick with cmpxchg().  The only difference is
      that we want to handle only recursion from the same context and
      therefore we disable interrupts.
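
      A minimal sketch of that cmpxchg() loop (illustrative):

              local_irq_save(flags);
              this_cpu = smp_processor_id();
              for (;;) {
                      old_cpu = cmpxchg(&kdb_printf_cpu, -1, this_cpu);
                      if (old_cpu == -1 || old_cpu == this_cpu)
                              break;  /* lock taken, or recursing on the
                                         owning CPU */
                      cpu_relax();
              }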
      
      In addition, KDB_STATE_PRINTF_LOCK is removed.  In fact, we are not able
      to set it in a non-racy way.
      
      Link: http://lkml.kernel.org/r/1480412276-16690-3-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kdb: remove unused kdb_event handling · d1bd8ead
      Petr Mladek authored
      kdb_event state variable is only set but never checked in the kernel
      code.
      
      http://www.spinics.net/lists/kdb/msg01733.html suggests that this
      variable affected WARN_CONSOLE_UNLOCKED() in the original
      implementation.  But this check never went upstream.
      
      The semantic is unclear and racy.  The value is updated after the
      kdb_printf_lock is acquired and after it is released.  It should be
      symmetric at minimum.  The value should be manipulated either inside or
      outside the locked area.
      
      Fortunately, it seems that the original function is gone and we could
      simply remove the state variable.
      
      Link: http://lkml.kernel.org/r/1480412276-16690-2-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel/debug/debug_core.c: more properly delay for secondary CPUs · 2d13bb64
      Douglas Anderson authored
      We've got a delay loop waiting for secondary CPUs.  That loop uses
      loops_per_jiffy.  However, loops_per_jiffy doesn't actually mean how
      many tight loops make up a jiffy on all architectures.  It is quite
      common to see things like this in the boot log:
      
        Calibrating delay loop (skipped), value calculated using timer
        frequency.. 48.00 BogoMIPS (lpj=24000)
      
      In my case I was seeing lots of cases where other CPUs timed out
      entering the debugger only to print their stack crawls shortly after the
      kdb> prompt was written.
      
      Elsewhere in kgdb we already use udelay(), so that should be safe enough
      to use to implement our timeout.  We'll delay 1 ms for 1000 times, which
      should give us a full second of delay (just like the old code wanted)
      but allow us to notice that we're done every 1 ms.
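
      A minimal sketch of the new wait loop (all_cpus_in_kgdb() is a
      hypothetical stand-in for the real bookkeeping):

              int time_left = MSEC_PER_SEC;

              while (--time_left && !all_cpus_in_kgdb())
                      udelay(1000);
              if (!time_left)
                      pr_crit("Timed out waiting for secondary CPUs.\n");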
      
      [akpm@linux-foundation.org: simplifications, per Daniel]
      Link: http://lkml.kernel.org/r/1477091361-2039-1-git-send-email-dianders@chromium.org
      Signed-off-by: Douglas Anderson <dianders@chromium.org>
      Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Cc: Brian Norris <briannorris@chromium.org>
      Cc: <stable@vger.kernel.org>	[4.0+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 13 Dec, 2016 1 commit
  13. 22 Feb, 2016 1 commit
    • mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings · d2aa1aca
      Kees Cook authored
      It may be useful to debug writes to the readonly sections of memory,
      so provide a cmdline "rodata=off" to allow for this. This can be
      expanded in the future to support "log" and "write" modes, but that
      will need to be architecture-specific.
      
      This also makes KDB software breakpoints more usable, as read-only
      mappings can now be disabled on any kernel.
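
      A minimal sketch of the parameter handling (assuming a strtobool-style
      parser; details are illustrative):

              static bool rodata_enabled = true;

              static int __init set_debug_rodata(char *str)
              {
                      return strtobool(str, &rodata_enabled);
              }
              __setup("rodata=", set_debug_rodata);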
      Suggested-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David Brown <david.brown@linaro.org>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Emese Revfy <re.emese@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: PaX Team <pageexec@freemail.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kernel-hardening@lists.openwall.com
      Cc: linux-arch <linux-arch@vger.kernel.org>
      Link: http://lkml.kernel.org/r/1455748879-21872-3-git-send-email-keescook@chromium.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  14. 04 Dec, 2015 1 commit
  15. 19 Feb, 2015 5 commits
    • debug: prevent entering debug mode on panic/exception. · 5516fd7b
      Colin Cross authored
      On non-developer devices, kgdb prevents the device from rebooting
      after a panic.
      
      In case of panics and exceptions, to allow the device to reboot, prevent
      entering debug mode so we don't get stuck waiting for the user to
      interact with the debugger.
      
      To avoid entering the debugger on panic/exception without any extra
      configuration, panic_timeout is being used which can be set via
      /proc/sys/kernel/panic at run time and CONFIG_PANIC_TIMEOUT sets the
      default value.
      
      Setting panic_timeout indicates that the user requested the machine to
      perform an unattended reboot after panic. We don't want to get stuck
      waiting for user input in case of panic.
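
      A minimal sketch of the check on the debugger entry path (placement
      is illustrative):

              /*
               * Don't enter kgdb for a panic or exception when the user has
               * asked for an unattended reboot via panic_timeout.
               */
              if (signo != SIGTRAP && panic_timeout)
                      return 1;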
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: kgdb-bugreport@lists.sourceforge.net
      Cc: linux-kernel@vger.kernel.org
      Cc: Android Kernel Team <kernel-team@android.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Signed-off-by: Colin Cross <ccross@android.com>
      [Kiran: Added context to commit message.
      panic_timeout is used instead of break_on_panic and
      break_on_exception to honor CONFIG_PANIC_TIMEOUT
      Modified the commit as per community feedback]
      Signed-off-by: Kiran Raparthy <kiran.kumar@linaro.org>
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kdb: Const qualifier for kdb_getstr's prompt argument · 32d375f6
      Daniel Thompson authored
      All current callers of kdb_getstr() can pass constant pointers via the
      prompt argument. This patch adds a const qualification to make explicit
      the fact that this is safe.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kdb: Provide forward search at more prompt · fb6daa75
      Daniel Thompson authored
      Currently kdb allows the output of commands to be filtered using the
      | grep feature. This is useful but does not permit the output emitted
      shortly after a string match to be examined without wading through the
      entire unfiltered output of the command. Such a feature is particularly
      useful to navigate function traces because these traces often have a
      useful trigger string *before* the point of interest.
      
      This patch reuses the existing filtering logic to introduce a simple
      forward search to kdb that can be triggered from the more prompt.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kdb: Fix a prompt management bug when using | grep · ab08e464
      Daniel Thompson authored
      Currently when the "| grep" feature is used to filter the output of a
      command then the prompt is not displayed for the subsequent command.
      Likewise any characters typed by the user are also not echoed to the
      display. This rather disconcerting problem eventually corrects itself
      when the user presses Enter and the kdb_grepping_flag is cleared as
      kdb_parse() tries to make sense of whatever they typed.
      
      This patch resolves the problem by moving the clearing of this flag
      from the middle of command processing to the beginning.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
    • kdb: Remove stack dump when entering kgdb due to NMI · 54543881
      Daniel Thompson authored
      Issuing a stack dump feels ergonomically wrong when entering due to NMI.
      
      Entering due to NMI is normally a reaction to a user request, either the
      NMI button on a server or a "magic knock" on a UART. Therefore the
      backtrace behaviour on entry due to NMI should be like SysRq-g (no stack
      dump) rather than like oops.
      
      Note also that the stack dump does not offer any information that
      cannot be trivially retrieved using the 'bt' command.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>