1. 29 Aug, 2019 2 commits
  2. 25 Aug, 2019 2 commits
  3. 06 Aug, 2019 2 commits
    • Documentation: Add swapgs description to the Spectre v1 documentation · 726d427e
      Josh Poimboeuf authored
      commit 4c920576 upstream
      
      Add documentation to the Spectre document about the new swapgs variant of
      Spectre v1.
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • x86/speculation: Enable Spectre v1 swapgs mitigations · 405d06fb
      Josh Poimboeuf authored
      commit a2059825 upstream
      
      The previous commit added macro calls in the entry code which mitigate the
      Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are
      enabled.  Enable those features where applicable.
      
      The mitigations may be disabled with "nospectre_v1" or "mitigations=off".
      
      There are different features which can affect the risk of attack:
      
      - When FSGSBASE is enabled, unprivileged users are able to place any
        value in GS, using the wrgsbase instruction.  This means they can
        write a GS value which points to any value in kernel space, which can
        be useful with the following gadget in an interrupt/exception/NMI
        handler:
      
      	if (coming from user space)
      		swapgs
      	mov %gs:<percpu_offset>, %reg1
      	// dependent load or store based on the value of %reg1
      	// for example: mov (%reg1), %reg2
      
        If an interrupt is coming from user space, and the entry code
        speculatively skips the swapgs (due to user branch mistraining), it
        may speculatively execute the GS-based load and a subsequent dependent
        load or store, exposing the kernel data to an L1 side channel leak.
      
        Note that, on Intel, a similar attack exists in the above gadget when
        coming from kernel space, if the swapgs gets speculatively executed to
        switch back to the user GS.  On AMD, this variant isn't possible
        because swapgs is serializing with respect to future GS-based
        accesses.
      
        NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case
      	doesn't exist quite yet.
      
      - When FSGSBASE is disabled, the issue is mitigated somewhat because
        unprivileged users must use prctl(ARCH_SET_GS) to set GS, which
        restricts GS values to user space addresses only.  That means the
        gadget would need an additional step, since the target kernel address
        needs to be read from user space first.  Something like:
      
      	if (coming from user space)
      		swapgs
      	mov %gs:<percpu_offset>, %reg1
      	mov (%reg1), %reg2
      	// dependent load or store based on the value of %reg2
      	// for example: mov (%reg2), %reg3
      
        It's difficult to audit for this gadget in all the handlers, so while
        there are no known instances of it, it's entirely possible that it
        exists somewhere (or could be introduced in the future).  Without
        tooling to analyze all such code paths, consider it vulnerable.
      
        Effects of SMAP on the !FSGSBASE case:
      
        - If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not
          susceptible to Meltdown), the kernel is prevented from speculatively
          reading user space memory, even L1 cached values.  This effectively
          disables the !FSGSBASE attack vector.
      
        - If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP
          still prevents the kernel from speculatively reading user space
          memory.  But it does *not* prevent the kernel from reading the
          user value from L1, if it has already been cached.  This is probably
          only a small hurdle for an attacker to overcome.
      
      Thanks to Dave Hansen for contributing the speculative_smap() function.
      
      Thanks to Andrew Cooper for providing the inside scoop on whether swapgs
      is serializing on AMD.
      
      [ tglx: Fixed the USER fence decision and polished the comment as suggested
        	by Dave Hansen ]
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  4. 31 Jul, 2019 3 commits
  5. 26 Jul, 2019 3 commits
  6. 24 Jul, 2019 1 commit
  7. 14 Jul, 2019 2 commits
  8. 08 Jul, 2019 3 commits
  9. 07 Jul, 2019 3 commits
  10. 26 Jun, 2019 1 commit
  11. 19 Jun, 2019 4 commits
    • doc: fix documentation about UIO_MEM_LOGICAL using · 6ad805b8
      Yang Yingliang authored
      After commit d4fc5069 ("mm: switch s_mem and slab_cache in struct page"),
      page->mapping is re-used by slab allocations, and page->mapping->host
      is used in balance_dirty_pages_ratelimited() as an inode member, but it
      is not actually an inode, which leads to an oops.
      
      [  159.906493] Unable to handle kernel paging request at virtual address ffff200012d90be8
      [  159.908029] Mem abort info:
      [  159.908552]   ESR = 0x96000007
      [  159.909138]   Exception class = DABT (current EL), IL = 32 bits
      [  159.910155]   SET = 0, FnV = 0
      [  159.910690]   EA = 0, S1PTW = 0
      [  159.911241] Data abort info:
      [  159.911846]   ISV = 0, ISS = 0x00000007
      [  159.912567]   CM = 0, WnR = 0
      [  159.913105] swapper pgtable: 4k pages, 48-bit VAs, pgdp=0000000042acd000
      [  159.914269] [ffff200012d90be8] pgd=000000043ffff003, pud=000000043fffe003, pmd=000000043fffa003, pte=0000000000000000
      [  159.916280] Internal error: Oops: 96000007 [#1] SMP
      [  159.917195] Dumping ftrace buffer:
      [  159.917845]    (ftrace buffer empty)
      [  159.918521] Modules linked in: uio_dev(OE)
      [  159.919276] CPU: 1 PID: 295 Comm: uio_test Tainted: G           OE     5.2.0-rc4+ #46
      [  159.920859] Hardware name: linux,dummy-virt (DT)
      [  159.921815] pstate: 60000005 (nZCv daif -PAN -UAO)
      [  159.922809] pc : balance_dirty_pages_ratelimited+0x68/0xc38
      [  159.923965] lr : fault_dirty_shared_page.isra.8+0xe4/0x100
      [  159.925134] sp : ffff800368a77ae0
      [  159.925824] x29: ffff800368a77ae0 x28: 1ffff0006d14ce1a
      [  159.926906] x27: ffff800368a670d0 x26: ffff800368a67120
      [  159.927985] x25: 1ffff0006d10f5fe x24: ffff200012d90be8
      [  159.929089] x23: ffff200013732000 x22: ffff80036ec03200
      [  159.930172] x21: ffff200012d90bc0 x20: 1fffe400025b217d
      [  159.931253] x19: ffff80036ec03200 x18: 0000000000000000
      [  159.932348] x17: 0000000000000000 x16: 0ffffe0000010208
      [  159.933439] x15: 0000000000000000 x14: 0000000000000000
      [  159.934518] x13: 0000000000000000 x12: 0000000000000000
      [  159.935596] x11: 1fffefc001b452c0 x10: ffff0fc001b452c0
      [  159.936697] x9 : dfff200000000000 x8 : dfff200000000001
      [  159.937781] x7 : ffff7e000da29607 x6 : ffff0fc001b452c1
      [  159.938859] x5 : ffff0fc001b452c1 x4 : ffff0fc001b452c1
      [  159.939944] x3 : ffff200010523ad4 x2 : 1fffe400026e659b
      [  159.941065] x1 : dfff200000000000 x0 : ffff200013732cd8
      [  159.942205] Call trace:
      [  159.942732]  balance_dirty_pages_ratelimited+0x68/0xc38
      [  159.943797]  fault_dirty_shared_page.isra.8+0xe4/0x100
      [  159.944867]  do_fault+0x608/0x1250
      [  159.945571]  __handle_mm_fault+0x93c/0xfb8
      [  159.946412]  handle_mm_fault+0x1c0/0x360
      [  159.947224]  do_page_fault+0x358/0x8d0
      [  159.947993]  do_translation_fault+0xf8/0x124
      [  159.948884]  do_mem_abort+0x70/0x190
      [  159.949624]  el0_da+0x24/0x28
      
      According to another commit, 5e901d0b ("scsi: qedi: Fix bad pte call trace
      when iscsiuio is stopped."), using kmalloc() can also cause other problems.
      
      But the documentation about UIO_MEM_LOGICAL still allows using kmalloc(),
      so remove that and disallow the use of kmalloc() in the documentation.
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • MAINTAINERS / Documentation: Thorsten Scherer is the successor of Gavin Schenk · 75d7627f
      Gavin Schenk authored
      Due to new challenges in my life I can no longer take care of SIOX.
      Thorsten takes over my SIOX tasks.
      Signed-off-by: Gavin Schenk <g.schenk@eckelmann.de>
      Acked-by: Thorsten Scherer <t.scherer@eckelmann.de>
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • docs: fb: Add TER16x32 to the available font names · fce677d7
      Takashi Iwai authored
      The new font has recently become available.
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      Acked-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • KVM: x86: Modify struct kvm_nested_state to have explicit fields for data · 6ca00dfa
      Liran Alon authored
      Improve the KVM_{GET,SET}_NESTED_STATE structs by detailing the format
      of VMX nested state data in a struct.
      
      In order to avoid changing the ioctl values of
      KVM_{GET,SET}_NESTED_STATE, there is a need to preserve
      sizeof(struct kvm_nested_state). This is done by defining the data
      struct as "data.vmx[0]". It was the most elegant way I found to
      preserve the struct size while keeping the struct readable and easy to
      maintain. It does have the unfortunate side effect that the data now has
      to be accessed as "data.vmx[0]" rather than just "data.vmx".
      
      Because we are already modifying these structs, I also modified the
      following:
      * Define the "format" field values as macros.
      * Rename vmcs_pa to vmcs12_pa for better readability.
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      [Remove SVM stubs, add KVM_STATE_NESTED_VMX_VMCS12_SIZE. - Paolo]
      Reviewed-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  12. 18 Jun, 2019 1 commit
  13. 17 Jun, 2019 2 commits
  14. 16 Jun, 2019 1 commit
    • tcp: add tcp_min_snd_mss sysctl · 5f3e2bf0
      Eric Dumazet authored
      Some TCP peers announce a very small MSS option in their SYN and/or
      SYN/ACK messages.
      
      This forces the stack to send packets with a very high network/cpu
      overhead.
      
      Linux has enforced a minimum value of 48. Since this value includes
      the size of TCP options, and the options can consume up to 40 bytes,
      each segment can include as few as 8 bytes of payload.
      
      In some cases, it can be useful to increase this minimum to a saner
      value.
      
      The default remains 48 (TCP_MIN_SND_MSS) for compatibility
      reasons.
      
      Note that the TCP_MAXSEG socket option enforces a minimum value of
      TCP_MIN_MSS. David Miller increased that minimum from 64 to 88
      in commit c39508d6 ("tcp: Make TCP_MAXSEG minimum more correct.").
      
      We might in the future merge TCP_MIN_SND_MSS and TCP_MIN_MSS.
      
      CVE-2019-11479 -- tcp mss hardcoded to 48
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Suggested-by: Jonathan Looney <jtl@netflix.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Tyler Hicks <tyhicks@canonical.com>
      Cc: Bruce Curtis <brucec@netflix.com>
      Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 15 Jun, 2019 2 commits
  16. 13 Jun, 2019 3 commits
    • arm64/sve: Fix missing SVE/FPSIMD endianness conversions · 41040cf7
      Dave Martin authored
      The in-memory representation of SVE and FPSIMD registers is
      different: the FPSIMD V-registers are stored as single 128-bit
      host-endian values, whereas SVE registers are stored in an
      endianness-invariant byte order.
      
      This means that the two representations differ when running on a
      big-endian host.  But we blindly copy data from one representation
      to another when converting between the two, resulting in the
      register contents being unintentionally byteswapped in certain
      situations.  Currently this can be triggered by the first SVE
      instruction after a syscall, for example (though the potential
      trigger points may vary in future).
      
      So, fix the conversion functions fpsimd_to_sve(), sve_to_fpsimd()
      and sve_sync_from_fpsimd_zeropad() to swab where appropriate.
      
      There is no common swahl128() or swab128() that we could use here.
      Maybe it would be worth making this generic, but for now add a
      simple local hack.
      
      Since the byte order differences are exposed in ABI, also clarify
      the documentation.
      
      Cc: Alex Bennée <alex.bennee@linaro.org>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Cc: Alan Hayward <alan.hayward@arm.com>
      Cc: Julien Grall <julien.grall@arm.com>
      Fixes: bc0ee476 ("arm64/sve: Core task context handling")
      Fixes: 8cd969d2 ("arm64/sve: Signal handling support")
      Fixes: 43d4da2c ("arm64/sve: ptrace and ELF coredump support")
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      [will: Fix typos in comments and docs spotted by Julien]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • blkio-controller.txt: Remove references to CFQ · fb5772cb
      Andreas Herrmann authored
      CFQ is gone. There is no need anymore to document its "proportional
      weight time based division of disk policy".
      Signed-off-by: Andreas Herrmann <aherrmann@suse.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block/switching-sched.txt: Update to blk-mq schedulers · 8614b008
      Andreas Herrmann authored
      Remove references to CFQ and legacy block layer which are gone.
      Update example with what's available under blk-mq.
      Signed-off-by: Andreas Herrmann <aherrmann@suse.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 12 Jun, 2019 1 commit
  18. 07 Jun, 2019 1 commit
  19. 01 Jun, 2019 2 commits
  20. 31 May, 2019 1 commit