1. 03 Apr 2016, 1 commit
  2. 30 Mar 2016, 2 commits
    • MIPS: cpu_name_string: Use raw_smp_processor_id(). · e95008a1
      Authored by James Hogan
      If cpu_name_string() is used in non-atomic context when preemption is
      enabled, it can trigger a BUG such as this one:
      
      BUG: using smp_processor_id() in preemptible [00000000] code: unaligned/156
      caller is __show_regs+0x1e4/0x330
      CPU: 2 PID: 156 Comm: unaligned Tainted: G        W       4.3.0-00366-ga3592179816d-dirty #1501
      Stack : ffffffff80900000 ffffffff8019bc18 000000000000005f ffffffff80a20000
               0000000000000000 0000000000000009 ffffffff8019c0e0 ffffffff80835648
               a8000000ff2bdec0 ffffffff80a1e628 000000000000009c 0000000000000002
               ffffffff80840000 a8000000fff2ffb0 0000000000000020 ffffffff8020e43c
               a8000000fff2fcf8 ffffffff80a20000 0000000000000000 ffffffff808f2607
               ffffffff8082b138 ffffffff8019cd1c 0000000000000030 ffffffff8082b138
               0000000000000002 000000000000009c 0000000000000000 0000000000000000
               0000000000000000 a8000000fff2fc40 0000000000000000 ffffffff8044dbf4
               0000000000000000 0000000000000000 0000000000000000 ffffffff8010c400
               ffffffff80855bb0 ffffffff8010d008 0000000000000000 ffffffff8044dbf4
               ...
      Call Trace:
      [<ffffffff8010d008>] show_stack+0x90/0xb0
      [<ffffffff8044dbf4>] dump_stack+0x84/0xe0
      [<ffffffff8046d4ec>] check_preemption_disabled+0x10c/0x110
      [<ffffffff8010c40c>] __show_regs+0x1e4/0x330
      [<ffffffff8010d060>] show_registers+0x28/0xc0
      [<ffffffff80110748>] do_ade+0xcc8/0xce0
      [<ffffffff80105b84>] resume_userspace_check+0x0/0x10
      
      This is possible because cpu_name_string() is used by __show_regs(),
      which is used by both show_regs() and show_registers(). These two
      functions are used by various exception handling functions, only some of
      which ensure that interrupts or preemption is disabled.
      
      However the following have interrupts explicitly enabled or not
      explicitly disabled:
      - do_reserved() (irqs enabled)
      - do_ade() (irqs not disabled)
      
      This can be hit by setting /sys/kernel/debug/mips/unaligned_action to 2,
      and triggering an address error exception, e.g. an unaligned access or
      access to kernel segment from user mode.
      
      To fix the above cases, use raw_smp_processor_id() instead. It is
      unusual for CPU names to be different in the same system, and even if
      they were, it's possible the process has migrated between the exception
      of interest and the cpu_name_string() call anyway.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/12212/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • pcmcia: db1xxx_ss: fix last irq_to_gpio user · e34b6fcf
      Authored by Manuel Lauss
      Remove the usage of the removed irq_to_gpio() function. On pre-DB1200
      boards, pass the actual card-detect GPIO number instead of the IRQ,
      because we need the GPIO to actually test card status (inserted or
      not) and can get the IRQ number with gpio_to_irq() instead.
      
      Tested on DB1300 and DB1500, this patch fixes PCMCIA on the DB1500,
      which used irq_to_gpio().
      
      Fixes: 832f5dac ("MIPS: Remove all the uses of custom gpio.h")
      Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Cc: linux-pcmcia@lists.infradead.org
      Cc: Linux-MIPS <linux-mips@linux-mips.org>
      Cc: stable@vger.kernel.org	# v4.3+
      Patchwork: https://patchwork.linux-mips.org/patch/12747/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  3. 29 Mar 2016, 2 commits
    • MIPS: Fix MSA ld unaligned failure cases · fa8ff601
      Authored by Paul Burton
      Copying the content of an MSA vector from user memory may involve TLB
      faults & mapping in pages. This will fail when preemption is disabled
      due to an inability to acquire mmap_sem from do_page_fault, which meant
      such vector loads to unmapped pages would always fail to be emulated.
      Fix this by disabling preemption later only around the updating of
      vector register state.
      
      This change does however introduce a race between performing the load
      into thread context & the thread being preempted, saving its current
      live context & clobbering the loaded value. This should be a rare
      occurrence, so optimise for the fast path by simply repeating the load if
      we are preempted.
      
      Additionally if the copy failed then the failure path was taken with
      preemption left disabled, leading to the kernel typically encountering
      further issues around sleeping whilst atomic. The change to where
      preemption is disabled avoids this issue.
      
      Fixes: e4aa1f15 ("MIPS: MSA unaligned memory access support")
      Reported-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: James Cowgill <James.Cowgill@imgtec.com>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: stable <stable@vger.kernel.org> # v4.3
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/12345/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
    • MIPS: Fix broken malta qemu · 19fb5818
      Authored by Qais Yousef
      The Malta defconfig compiles with GIC on, so when compiling for SMP the
      new IPI code is activated. But on qemu Malta there is no GIC, causing
      the BUG_ON(!ipidomain) in mips_smp_ipi_init() to be hit.
      
      Since in that configuration one can only run single-core SMP (!), skip
      IPI initialisation if we detect that this is the case. This is sensible
      behaviour to introduce and keeps such configurations running rather
      than dying hard unnecessarily.
      Signed-off-by: Qais Yousef <qsyousef@gmail.com>
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/12892/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  4. 26 Mar 2016, 3 commits
    • mm, kasan: stackdepot implementation. Enable stackdepot for SLAB · cd11016e
      Authored by Alexander Potapenko
      Implement the stack depot and provide CONFIG_STACKDEPOT.  Stack depot
      will allow KASAN to store allocation/deallocation stack traces for memory
      chunks.  The stack traces are stored in a hash table and referenced by
      handles which reside in the kasan_alloc_meta and kasan_free_meta
      structures in the allocated memory chunks.
      
      IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
      duplication.
      
      Right now stackdepot support is only enabled in SLAB allocator.  Once
      KASAN features in SLAB are on par with those in SLUB we can switch SLUB
      to stackdepot as well, thus removing the dependency on SLUB stack
      bookkeeping, which wastes a lot of memory.
      
      This patch is based on the "mm: kasan: stack depots" patch originally
      prepared by Dmitry Chernenkov.
      
      Joonsoo has said that he plans to reuse the stackdepot code for the
      mm/page_owner.c debugging facility.
      
      [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
      [aryabinin@virtuozzo.com: comment style fixes]
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch, ftrace: for KASAN put hard/soft IRQ entries into separate sections · be7635e7
      Authored by Alexander Potapenko
      KASAN needs to know whether the allocation happens in an IRQ handler.
      This lets us strip everything below the IRQ entry point to reduce the
      number of unique stack traces needed to be stored.
      
      Move the definition of __irq_entry to <linux/interrupt.h> so that the
      users don't need to pull in <linux/ftrace.h>.  Also introduce the
      __softirq_entry macro which is similar to __irq_entry, but puts the
      corresponding functions to the .softirqentry.text section.
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • [IA64] Enable preadv2 and pwritev2 syscalls for ia64 · 2d5ae5c2
      Authored by Tony Luck
      New system calls added in:
            f17d8b35
            vfs: vfs: Define new syscalls preadv2,pwritev2
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  5. 25 Mar 2016, 5 commits
    • h8300: switch EARLYCON · 8cad4892
      Authored by Yoshinori Sato
      earlyprintk is an architecture-specific option; earlycon is generic and
      has a small footprint.
      Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
    • h8300: dts: Rename the serial port clock to fck · d8581616
      Authored by Geert Uytterhoeven
      The clock is really the device functional clock, not the interface
      clock. Rename it.
      Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    • arm64: mm: allow preemption in copy_to_user_page · 691b1e2e
      Authored by Mark Rutland
      Currently we disable preemption in copy_to_user_page, a behaviour that
      we inherited from the 32-bit arm code. This was necessary for older
      cores without broadcast data cache maintenance, and ensured that cache
      lines were dirtied and cleaned by the same CPU. On these systems dirty
      cache line migration was not possible, so this was sufficient to
      guarantee coherency.
      
      On contemporary systems, cache coherence protocols permit (dirty) cache
      lines to migrate between CPUs as a result of speculation, prefetching,
      and other behaviours. To account for this, in ARMv8 data cache
      maintenance operations are broadcast and affect all data caches in the
      domain associated with the VA (i.e. ISH for kernel and user mappings).
      
      In __switch_to we ensure that tasks can be safely migrated in the middle
      of a maintenance sequence, using a dsb(ish) to ensure prior explicit
      memory accesses are observed and cache maintenance operations are
      completed before a task can be run on another CPU.
      
      Given the above, it is not necessary to disable preemption in
      copy_to_user_page. This patch removes the preempt_{disable,enable}
      calls, permitting preemption.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: consistently use p?d_set_huge · c661cb1c
      Authored by Mark Rutland
      Commit 324420bf ("arm64: add support for ioremap() block
      mappings") added new p?d_set_huge functions which do the hard work to
      generate and set a correct block entry.
      
      These differ from open-coded huge page creation in the early page table
      code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
      retained by mk_sect_prot() for any valid prot), but are otherwise
      identical (and cannot fail on arm64).
      
      For simplicity and consistency, make use of these in the initial page
      table creation code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kaslr: use callee saved register to preserve SCTLR across C call · d5e57437
      Authored by Ard Biesheuvel
      The KASLR code incorrectly expects the contents of x18 to be preserved
      across a call into C code, and uses it to stash the contents of SCTLR_EL1
      before enabling the MMU. If the MMU needs to be disabled again to create
      the randomized kernel mapping, x18 is written back to SCTLR_EL1, which is
      likely to crash the system if x18 has been clobbered by kasan_early_init()
      or kaslr_early_init(). So use x22 instead, which is not in use so far in
      head.S.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  6. 23 Mar 2016, 18 commits
  7. 22 Mar 2016, 9 commits