1. 04 Apr, 2016 2 commits
  2. 03 Apr, 2016 13 commits
  3. 30 Mar, 2016 2 commits
    • MIPS: cpu_name_string: Use raw_smp_processor_id(). · e95008a1
      Authored by James Hogan
      If cpu_name_string() is used in non-atomic context when preemption is
      enabled, it can trigger a BUG such as this one:
      
      BUG: using smp_processor_id() in preemptible [00000000] code: unaligned/156
      caller is __show_regs+0x1e4/0x330
      CPU: 2 PID: 156 Comm: unaligned Tainted: G        W       4.3.0-00366-ga3592179816d-dirty #1501
      Stack : ffffffff80900000 ffffffff8019bc18 000000000000005f ffffffff80a20000
               0000000000000000 0000000000000009 ffffffff8019c0e0 ffffffff80835648
               a8000000ff2bdec0 ffffffff80a1e628 000000000000009c 0000000000000002
               ffffffff80840000 a8000000fff2ffb0 0000000000000020 ffffffff8020e43c
               a8000000fff2fcf8 ffffffff80a20000 0000000000000000 ffffffff808f2607
               ffffffff8082b138 ffffffff8019cd1c 0000000000000030 ffffffff8082b138
               0000000000000002 000000000000009c 0000000000000000 0000000000000000
               0000000000000000 a8000000fff2fc40 0000000000000000 ffffffff8044dbf4
               0000000000000000 0000000000000000 0000000000000000 ffffffff8010c400
               ffffffff80855bb0 ffffffff8010d008 0000000000000000 ffffffff8044dbf4
               ...
      Call Trace:
      [<ffffffff8010d008>] show_stack+0x90/0xb0
      [<ffffffff8044dbf4>] dump_stack+0x84/0xe0
      [<ffffffff8046d4ec>] check_preemption_disabled+0x10c/0x110
      [<ffffffff8010c40c>] __show_regs+0x1e4/0x330
      [<ffffffff8010d060>] show_registers+0x28/0xc0
      [<ffffffff80110748>] do_ade+0xcc8/0xce0
      [<ffffffff80105b84>] resume_userspace_check+0x0/0x10
      
      This is possible because cpu_name_string() is used by __show_regs(),
      which is used by both show_regs() and show_registers(). These two
      functions are used by various exception handling functions, only some of
      which ensure that interrupts or preemption is disabled.
      
      However the following have interrupts explicitly enabled or not
      explicitly disabled:
      - do_reserved() (irqs enabled)
      - do_ade() (irqs not disabled)
      
      This can be hit by setting /sys/kernel/debug/mips/unaligned_action to 2,
      and triggering an address error exception, e.g. an unaligned access or
      access to kernel segment from user mode.
      
       To fix the above cases, use raw_smp_processor_id() instead. It is
       unusual for CPU names to be different in the same system, and even if
       they were, it's possible the process has migrated between the exception
       of interest and the cpu_name_string() call anyway.
       Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
       Patchwork: https://patchwork.linux-mips.org/patch/12212/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e95008a1
    • pcmcia: db1xxx_ss: fix last irq_to_gpio user · e34b6fcf
      Authored by Manuel Lauss
       Remove the usage of the removed irq_to_gpio() function.  On pre-DB1200
      boards, pass the actual carddetect GPIO number instead of the IRQ,
      because we need the gpio to actually test card status (inserted or
      not) and can get the irq number with gpio_to_irq() instead.
      
      Tested on DB1300 and DB1500, this patch fixes PCMCIA on the DB1500,
      which used irq_to_gpio().
      
      Fixes: 832f5dac ("MIPS: Remove all the uses of custom gpio.h")
       Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
       Acked-by: Arnd Bergmann <arnd@arndb.de>
       Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Cc: linux-pcmcia@lists.infradead.org
      Cc: Linux-MIPS <linux-mips@linux-mips.org>
      Cc: stable@vger.kernel.org	# v4.3+
       Patchwork: https://patchwork.linux-mips.org/patch/12747/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e34b6fcf
  4. 29 Mar, 2016 2 commits
    • MIPS: Fix MSA ld unaligned failure cases · fa8ff601
      Authored by Paul Burton
      Copying the content of an MSA vector from user memory may involve TLB
      faults & mapping in pages. This will fail when preemption is disabled
      due to an inability to acquire mmap_sem from do_page_fault, which meant
      such vector loads to unmapped pages would always fail to be emulated.
      Fix this by disabling preemption later only around the updating of
      vector register state.
      
       This change does however introduce a race between performing the load
       into thread context & the thread being preempted, saving its current
       live context & clobbering the loaded value. This should be a rare
       occurrence, so optimise for the fast path by simply repeating the load
       if we are preempted.
      
      Additionally if the copy failed then the failure path was taken with
      preemption left disabled, leading to the kernel typically encountering
      further issues around sleeping whilst atomic. The change to where
      preemption is disabled avoids this issue.
      
      Fixes: e4aa1f15 "MIPS: MSA unaligned memory access support"
       Reported-by: James Hogan <james.hogan@imgtec.com>
       Signed-off-by: Paul Burton <paul.burton@imgtec.com>
       Reviewed-by: James Hogan <james.hogan@imgtec.com>
      Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
      Cc: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: James Cowgill <James.Cowgill@imgtec.com>
      Cc: Markos Chandras <markos.chandras@imgtec.com>
      Cc: stable <stable@vger.kernel.org> # v4.3
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
       Patchwork: https://patchwork.linux-mips.org/patch/12345/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      fa8ff601
    • MIPS: Fix broken malta qemu · 19fb5818
      Authored by Qais Yousef
      Malta defconfig compiles with GIC on. Hence when compiling for SMP it causes
      the new IPI code to be activated. But on qemu malta there's no GIC causing a
      BUG_ON(!ipidomain) to be hit in mips_smp_ipi_init().
      
       Since in that configuration one can only run single-core SMP (!), skip
       IPI initialisation if we detect that this is the case. That is sensible
       behaviour to introduce, and it keeps such configurations running rather
       than dying hard unnecessarily.
       Signed-off-by: Qais Yousef <qsyousef@gmail.com>
       Reported-by: Guenter Roeck <linux@roeck-us.net>
       Tested-by: Guenter Roeck <linux@roeck-us.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
       Patchwork: https://patchwork.linux-mips.org/patch/12892/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      19fb5818
  5. 26 Mar, 2016 1 commit
  6. 18 Mar, 2016 1 commit
    • mm: introduce page reference manipulation functions · fe896d18
      Authored by Joonsoo Kim
       The success of CMA allocation largely depends on the success of
       migration, and a key factor in that is the page reference count.  Until
       now, the page reference count has been manipulated by directly calling
       atomic functions, so we cannot track who manipulates it and where.
       That makes it hard to find the actual reason for a CMA allocation
       failure.  CMA allocation should be guaranteed to succeed, so finding
       the offending place is really important.
      
       In this patch, call sites where the page reference count is
       manipulated are converted to the newly introduced wrapper functions.
       This is a preparation step for adding a tracepoint to each page
       reference manipulation function.  With this facility, we can easily
       find the reason for a CMA allocation failure.  There is no functional
       change in this patch.
       
       In addition, this patch also converts reference read sites.  It will
       help a second step that renames page._count to something else and
       prevents later attempts to access it directly (suggested by Andrew).
       Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
       Acked-by: Michal Nazarewicz <mina86@mina86.com>
       Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe896d18
  7. 14 Mar, 2016 2 commits
    • ipv6: Pass proto to csum_ipv6_magic as __u8 instead of unsigned short · 1e940829
      Authored by Alexander Duyck
       This patch updates csum_ipv6_magic so that it correctly recognizes
       that the protocol is an unsigned 8 bit value.
      
      This will allow us to better understand what limitations may or may not be
      present in how we handle the data.  For example there are a number of
      places that call htonl on the protocol value.  This is likely not necessary
      and can be replaced with a multiplication by ntohl(1) which will be
      converted to a shift by the compiler.
       Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      1e940829
    • ipv4: Update parameters for csum_tcpudp_magic to their original types · 01cfbad7
      Authored by Alexander Duyck
      This patch updates all instances of csum_tcpudp_magic and
      csum_tcpudp_nofold to reflect the types that are usually used as the source
      inputs.  For example the protocol field is populated based on nexthdr which
      is actually an unsigned 8 bit value.  The length is usually populated based
      on skb->len which is an unsigned integer.
      
      This addresses an issue in which the IPv6 function csum_ipv6_magic was
      generating a checksum using the full 32b of skb->len while
      csum_tcpudp_magic was only using the lower 16 bits.  As a result we could
      run into issues when attempting to adjust the checksum as there was no
      protocol agnostic way to update it.
      
      With this change the value is still truncated as many architectures use
      "(len + proto) << 8", however this truncation only occurs for values
      greater than 16776960 in length and as such is unlikely to occur as we stop
      the inner headers at ~64K in size.
      
      I did have to make a few minor changes in the arm, mn10300, nios2, and
      score versions of the function in order to support these changes as they
      were either using things such as an OR to combine the protocol and length,
      or were using ntohs to convert the length which would have truncated the
      value.
      
      I also updated a few spots in terms of whitespace and type differences for
      the addresses.  Most of this was just to make sure all of the definitions
      were in sync going forward.
       Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      01cfbad7
  8. 13 Mar, 2016 3 commits
  9. 12 Mar, 2016 2 commits
    • MIPS: Loongson 3: Keep CPU physical (not virtual) addresses in shadow ROM resource · 97f47e73
      Authored by Bjorn Helgaas
      Loongson 3 used the IORESOURCE_ROM_COPY flag for its ROM resource.  There
      are two problems with this:
      
        - When IORESOURCE_ROM_COPY is set, pci_map_rom() assumes the resource
          contains virtual addresses, so it doesn't ioremap the resource.  This
          implies loongson_sysconf.vgabios_addr is a virtual address.  That's a
          problem because resources should contain CPU *physical* addresses not
          virtual addresses.
      
        - When IORESOURCE_ROM_COPY is set, pci_cleanup_rom() calls kfree() on the
          resource.  We did not kmalloc() the loongson_sysconf.vgabios_addr area,
          so it is incorrect to kfree() it.
      
      If we're using a shadow copy in RAM for the Loongson 3 VGA BIOS area,
      disable the ROM BAR and release the address space it was consuming.
      
      Use IORESOURCE_ROM_SHADOW instead of IORESOURCE_ROM_COPY.  This means the
      struct resource contains CPU physical addresses, and pci_map_rom() will
      ioremap() it as needed.
       Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      97f47e73
    • MIPS: Loongson 3: Use temporary struct resource * to avoid repetition · 53f0a509
      Authored by Bjorn Helgaas
      Use a temporary struct resource pointer to avoid needless repetition of
      "pdev->resource[PCI_ROM_RESOURCE]".  No functional change intended.
       Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      53f0a509
  10. 09 Mar, 2016 2 commits
    • PCI: Include pci/hotplug/Kconfig directly from pci/Kconfig · e7e127e3
      Authored by Bjorn Helgaas
      Include pci/hotplug/Kconfig directly from pci/Kconfig, so arches don't
      have to source both pci/Kconfig and pci/hotplug/Kconfig.
      
      Note that this effectively adds pci/hotplug/Kconfig to the following
      arches, because they already sourced drivers/pci/Kconfig but they
      previously did not source drivers/pci/hotplug/Kconfig:
      
        alpha
        arm
        avr32
        frv
        m68k
        microblaze
        mn10300
        sparc
        unicore32
      
      Inspired-by-patch-from: Bogicevic Sasa <brutallesale@gmail.com>
       Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      e7e127e3
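The resulting structure is roughly the following (a sketch of the Kconfig layout after the change, not the exact diff):

```kconfig
# drivers/pci/Kconfig now pulls in the hotplug options itself
source "drivers/pci/hotplug/Kconfig"

# arch/*/Kconfig: sourcing drivers/pci/Kconfig alone is now enough;
# the separate "source drivers/pci/hotplug/Kconfig" line can go away
source "drivers/pci/Kconfig"
```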
    • PCI: Include pci/pcie/Kconfig directly from pci/Kconfig · 5f8fc432
      Authored by Bogicevic Sasa
      Include pci/pcie/Kconfig directly from pci/Kconfig, so arches don't
      have to source both pci/Kconfig and pci/pcie/Kconfig.
      
      Note that this effectively adds pci/pcie/Kconfig to the following
      arches, because they already sourced drivers/pci/Kconfig but they
      previously did not source drivers/pci/pcie/Kconfig:
      
        alpha
        avr32
        blackfin
        frv
        m32r
        m68k
        microblaze
        mn10300
        parisc
        sparc
        unicore32
        xtensa
      
      [bhelgaas: changelog, source pci/pcie/Kconfig at top of pci/Kconfig, whitespace]
       Signed-off-by: Sasa Bogicevic <brutallesale@gmail.com>
       Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      5f8fc432
  11. 08 Mar, 2016 1 commit
  12. 05 Mar, 2016 2 commits
    • mm/pkeys: Fix siginfo ABI breakage caused by new u64 field · 49cd53bf
      Authored by Dave Hansen
      Stephen Rothwell reported this linux-next build failure:
      
      	http://lkml.kernel.org/r/20160226164406.065a1ffc@canb.auug.org.au
      
      ... caused by the Memory Protection Keys patches from the tip tree triggering
      a newly introduced build-time sanity check on an ARM build, because they changed
      the ABI of siginfo in an unexpected way.
      
      If u64 has a natural alignment of 8 bytes (which is the case on most mainstream
      platforms, with the notable exception of x86-32), then the leadup to the
      _sifields union matters:
      
      typedef struct siginfo {
              int si_signo;
              int si_errno;
              int si_code;
      
              union {
      	...
              } _sifields;
      } __ARCH_SI_ATTRIBUTES siginfo_t;
      
       Note how the first 3 fields give us 12 bytes, so _sifields is not
       naturally 8-byte aligned.
      
      Before the _pkey field addition the largest element of _sifields (on
      32-bit platforms) was 32 bits. With the u64 added, the minimum alignment
      requirement increased to 8 bytes on those (rare) 32-bit platforms. Thus
      GCC padded the space after si_code with 4 extra bytes, and shifted all
      _sifields offsets by 4 bytes - breaking the ABI of all of those
      remaining fields.
      
      On 64-bit platforms this problem was hidden due to _sifields already
      having numerous fields with natural 8 bytes alignment (pointers).
      
      To fix this, we replace the u64 with an '__u32'.  The __u32 does not
      increase the minimum alignment requirement of the union, and it is
      also large enough to store the 16-bit pkey we have today on x86.
       Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
       Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
       Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dave Hansen <dave@sr71.net>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-next@vger.kernel.org
      Fixes: cd0ea35f ("signals, pkeys: Notify userspace about protection key faults")
       Link: http://lkml.kernel.org/r/20160301125451.02C7426D@viggo.jf.intel.com
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      49cd53bf
    • MIPS: traps: Fix SIGFPE information leak from `do_ov' and `do_trap_or_bp' · e723e3f7
      Authored by Maciej W. Rozycki
      Avoid sending a partially initialised `siginfo_t' structure along SIGFPE
      signals issued from `do_ov' and `do_trap_or_bp', leading to information
      leaking from the kernel stack.
       Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
       Cc: stable@vger.kernel.org
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      e723e3f7
  13. 02 Mar, 2016 2 commits
    • mips/kvm: fix ioctl error handling · 0178fd7d
      Authored by Michael S. Tsirkin
      Returning directly whatever copy_to_user(...) or copy_from_user(...)
      returns may not do the right thing if there's a pagefault:
      copy_to_user/copy_from_user return the number of bytes not copied in
      this case, but ioctls need to return -EFAULT instead.
      
       Fix up kvm on mips to do
       	return copy_to_user(...) ? -EFAULT : 0;
       and
       	return copy_from_user(...) ? -EFAULT : 0;
      
      everywhere.
      
      Cc: stable@vger.kernel.org
       Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0178fd7d
    • arch/hotplug: Call into idle with a proper state · fc6d73d6
      Authored by Thomas Gleixner
       Let the non-boot CPUs call into idle with the corresponding hotplug state, so
      the hotplug core can handle the further bringup. That's a first step to
      convert the boot side of the hotplugged cpus to do all the synchronization
      with the other side through the state machine. For now it'll only start the
      hotplug thread and kick the full bringup of the cpu.
       Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
       Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
       Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      fc6d73d6
  14. 29 Feb, 2016 4 commits
  15. 28 Feb, 2016 1 commit
    • mm: ASLR: use get_random_long() · 5ef11c35
      Authored by Daniel Cashman
       Replace calls to get_random_int() followed by a cast to (unsigned long)
       with calls to get_random_long().  Also address a shifting bug which, in
       the case of x86, removed the entropy mask for mmap_rnd_bits values > 31
       bits.
       Signed-off-by: Daniel Cashman <dcashman@android.com>
       Acked-by: Kees Cook <keescook@chromium.org>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Nick Kralevich <nnk@google.com>
      Cc: Jeff Vander Stoep <jeffv@google.com>
      Cc: Mark Salyzyn <salyzyn@android.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ef11c35