1. 20 May 2014 (20 commits)
  2. 17 May 2014 (1 commit)
    • arm64: fix pud_huge() for 2-level pagetables · 4797ec2d
      Committed by Mark Salter
      The following happens when trying to run a kvm guest on a kernel
      configured for 64k pages. This doesn't happen with 4k pages:
      
        BUG: failure at include/linux/mm.h:297/put_page_testzero()!
        Kernel panic - not syncing: BUG!
        CPU: 2 PID: 4228 Comm: qemu-system-aar Tainted: GF            3.13.0-0.rc7.31.sa2.k32v1.aarch64.debug #1
        Call trace:
        [<fffffe0000096034>] dump_backtrace+0x0/0x16c
        [<fffffe00000961b4>] show_stack+0x14/0x1c
        [<fffffe000066e648>] dump_stack+0x84/0xb0
        [<fffffe0000668678>] panic+0xf4/0x220
        [<fffffe000018ec78>] free_reserved_area+0x0/0x110
        [<fffffe000018edd8>] free_pages+0x50/0x88
        [<fffffe00000a759c>] kvm_free_stage2_pgd+0x30/0x40
        [<fffffe00000a5354>] kvm_arch_destroy_vm+0x18/0x44
        [<fffffe00000a1854>] kvm_put_kvm+0xf0/0x184
        [<fffffe00000a1938>] kvm_vm_release+0x10/0x1c
        [<fffffe00001edc1c>] __fput+0xb0/0x288
        [<fffffe00001ede4c>] ____fput+0xc/0x14
        [<fffffe00000d5a2c>] task_work_run+0xa8/0x11c
        [<fffffe0000095c14>] do_notify_resume+0x54/0x58
      
      In arch/arm/kvm/mmu.c:unmap_range(), we end up doing an extra put_page()
      on the stage2 pgd which leads to the BUG in put_page_testzero(). This
      happens because a pud_huge() test in unmap_range() returns true when it
      should always be false with the 2-level page tables used by 64k pages.
      This patch removes support for huge puds if 2-level pagetables are
      being used.
      Signed-off-by: Mark Salter <msalter@redhat.com>
      [catalin.marinas@arm.com: removed #ifndef around PUD_SIZE check]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: <stable@vger.kernel.org> # v3.11+
      4797ec2d
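      
      A minimal sketch of the guard described in the commit above, assuming the
      arm64 convention that __PAGETABLE_PMD_FOLDED marks a 2-level configuration
      (illustrative, not necessarily the verbatim patch):
      
      int pud_huge(pud_t pud)
      {
      #ifndef __PAGETABLE_PMD_FOLDED
      	/* 3-level tables: a pud without the table bit maps a huge page */
      	return !(pud_val(pud) & PUD_TABLE_BIT);
      #else
      	/* 2-level tables: the pud is folded into the pgd, never huge */
      	return 0;
      #endif
      }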
  3. 16 May 2014 (2 commits)
    • parisc: Improve LWS-CAS performance · c776cd89
      Committed by John David Anglin
      The attached change significantly improves the performance of the LWS-CAS code
      in syscall.S.
      This allows a number of packages to build (e.g., zeromq3, gtest and libxs)
      that previously failed because of slow LWS-CAS performance under contention.
      In particular, interrupts taken while the lock was held degraded performance
      significantly.
      
      The change does the following:
      
      1) Disables interrupts around the CAS operation, and
      2) Changes the loads and stores to use the ordered completer, "o", on
      PA 2.0. "o" and "ma" with a zero offset are equivalent. The latter is
      accepted on both PA 1.X and 2.0.
      
      The use of ordered loads and stores probably makes no difference on any
      existing hardware, but it seemed pedantically correct. In particular, the CAS
      operation must complete before the LDCW lock is released. As written before, a
      processor could reorder the operations.
      
      I don't believe the period during which interrupts are disabled is long
      enough to significantly increase interrupt latency. For example, the TLB
      insert code is longer. The worst case is a memory fault in the CAS operation.
      Signed-off-by: John David Anglin <dave.anglin@bell.net>
      Cc: stable@vger.kernel.org # 3.13+
      Signed-off-by: Helge Deller <deller@gmx.de>
      c776cd89
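      
      A userspace C sketch of the ordering requirement described above, not the
      PA-RISC syscall.S code itself (the lock word and function names here are
      illustrative): the CAS store must become visible before the lock word that
      guards it is released.
      
      #include <stdatomic.h>
      #include <stdbool.h>
      
      static atomic_int lws_lock = 1;        /* 1 = free, 0 = held (LDCW-style) */
      
      bool lws_cas(atomic_int *addr, int old, int new)
      {
      	/* take the lock, like spinning on LDCW until it returns non-zero */
      	while (atomic_exchange_explicit(&lws_lock, 0, memory_order_acquire) == 0)
      		;
      	bool ok = false;
      	if (atomic_load_explicit(addr, memory_order_relaxed) == old) {
      		atomic_store_explicit(addr, new, memory_order_relaxed);
      		ok = true;
      	}
      	/* release: the CAS store may not be reordered past this unlock */
      	atomic_store_explicit(&lws_lock, 1, memory_order_release);
      	return ok;
      }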
    • parisc: ratelimit userspace segfault printing · fef47e2a
      Committed by Helge Deller
      Ratelimit the printing of userspace segfaults and make it runtime
      configurable via the /proc/sys/debug/exception-trace variable. This
      keeps syslog from growing far too fast and thus prevents possible
      attacks that flood the system log.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: stable@vger.kernel.org # 3.13+
      fef47e2a
  4. 15 May 2014 (10 commits)
    • drm/i915: Increase WM memory latency values on SNB · e95a2f75
      Committed by Ville Syrjälä
      On SNB the BIOS provided WM memory latency values seem insufficient to
      handle high resolution displays.
      
      In this particular case the display mode was a 2560x1440@60Hz, which
      makes the pixel clock 241.5 MHz. It was empirically found that a memory
      latency value of 1.2 usec is enough to avoid underruns, whereas the BIOS
      provided value of 0.7 usec was clearly too low. Incidentally 1.2 usec
      is what the typical BIOS provided values are on IVB systems.
      
      Increase the WM memory latency values to at least 1.2 usec on SNB.
      Hopefully this won't have a significant effect on power consumption.
      
      v2: Increase the latency values regardless of the pixel clock
      
      Cc: Robert N <crshman@gmail.com>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=70254
      Tested-by: Robert Navarro <crshman@gmail.com>
      Tested-by: Vitaly Minko <vitaly.minko@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      e95a2f75
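      
      Back-of-the-envelope arithmetic behind the numbers above (the 32bpp scanout
      format is an assumption, not from the commit): how much display data must be
      buffered while a memory request is outstanding.
      
      #include <stdio.h>
      
      int main(void)
      {
      	double pixel_clock_hz = 241.5e6;     /* 2560x1440@60 from the report */
      	double latency_us[] = { 0.7, 1.2 };  /* BIOS value vs. patched minimum */
      	int bytes_per_pixel = 4;             /* assumption: 32bpp scanout */
      
      	for (int i = 0; i < 2; i++) {
      		double pixels = pixel_clock_hz * latency_us[i] * 1e-6;
      		printf("%.1f us -> %.0f pixels (~%.0f bytes) to cover\n",
      		       latency_us[i], pixels, pixels * bytes_per_pixel);
      	}
      	return 0;
      }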
    • drm/i915: restore backlight precision when converting from ACPI · 721e82c0
      Committed by Aaron Lu
      When we set backlight on behalf of ACPI opregion, we will convert the
      backlight value in the 0-255 range defined in opregion to the actual
      hardware level. Commit 22505b82 (drm/i915: avoid brightness overflow
      when doing scale) is meant to fix the overflow problem when doing the
      conversion, but it also introduced a problem: the converted hardware
      level doesn't quite represent the intended value. Say the user wants the
      maximum backlight level (255 in the opregion's range); we then calculate
      the actual hardware level as: level = freq / max * level, where freq is
      the hardware's max backlight level (937 on one user's box) and max and
      level are both 255. The converted value should be 937, but the above
      calculation yields 765.
      
      To fix this issue, just use 64 bits to do the calculation to keep the
      precision and avoid overflow at the same time.
      
      Buglink: https://bugzilla.kernel.org/show_bug.cgi?id=72491
      Reported-by: Nico Schottelius <nico-bugzilla.kernel.org@schottelius.org>
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: stable@vger.kernel.org
      Signed-off-by: Aaron Lu <aaron.lu@intel.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      721e82c0
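      
      A standalone illustration of the truncation described above, using the
      reported values (this is not the driver function itself):
      
      #include <stdint.h>
      #include <stdio.h>
      
      int main(void)
      {
      	uint32_t freq = 937, max = 255, level = 255;  /* values from the report */
      
      	uint32_t divide_first = freq / max * level;   /* 937/255 = 3; 3*255 = 765 */
      	uint32_t mul_first_64 = (uint32_t)((uint64_t)freq * level / max); /* 937 */
      
      	printf("divide-first: %u, 64-bit multiply-first: %u\n",
      	       divide_first, mul_first_64);
      	return 0;
      }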
    • drm/i915: Use the first mode if there is no preferred mode in the EDID · afba0b5a
      Committed by Chris Wilson
      This matches the algorithm used by earlier kernels when selecting the
      mode for the fbcon. Only if there are no modes at all do we fall back
      to using the BIOS configuration. Seamless transition is still
      preserved (from the BIOS configuration to ours) as long as the BIOS has
      also chosen what we hope is the native configuration.
      Reported-by: Knut Petersen <Knut_Petersen@t-online.de>
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78655
      Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
      Tested-by: Knut Petersen <Knut_Petersen@t-online.de>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      [Jani: applied Chris' "Please imagine that I wrote this correctly."]
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      afba0b5a
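      
      An illustrative sketch of the selection order described above (the type and
      function names are assumptions, not the i915 fbdev code): preferred EDID
      mode first, then the first probed mode, then the BIOS configuration.
      
      struct mode { int hdisplay, vdisplay, preferred; };
      
      static const struct mode *pick_fbcon_mode(const struct mode *modes, int count,
      					  const struct mode *bios_config)
      {
      	for (int i = 0; i < count; i++)
      		if (modes[i].preferred)
      			return &modes[i];   /* EDID marks a preferred mode */
      	if (count > 0)
      		return &modes[0];           /* no preferred mode: use the first */
      	return bios_config;                 /* no modes at all: keep BIOS setup */
      }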
    • drm/i915/dp: force eDP lane count to max available lanes on BDW · f4cdbc21
      Committed by Jani Nikula
      There are certain BDW high res eDP machines that regressed due to
      
      commit 38aecea0
      Author: Daniel Vetter <daniel.vetter@ffwll.ch>
      Date:   Mon Mar 3 11:18:10 2014 +0100
      
          drm/i915: reverse dp link param selection, prefer fast over wide again
      
      The commit led to 2 lanes at 5.4 Gbps being used instead of 4 lanes at
      2.7 Gbps on the affected machines. Link training succeeded for both, but
      the screen remained blank with the former config. Further investigation
      showed that 4 lanes at 5.4 Gbps worked also.
      
      The root cause for the blank screen using 2 lanes remains unknown, but
      apparently the driver for a certain other operating system by default
      uses the max available lanes. Follow suit on Broadwell eDP, for at least
      until we figure out what is going on.
      
      Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76711
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Reviewed-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
      Tested-by: Rodrigo Vivi <rodrigo.vivi@gmail.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      f4cdbc21
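      
      For context, both link configurations carry the same payload bandwidth
      (DisplayPort uses 8b/10b encoding, so roughly 80% of the raw rate), which is
      what made the blank screen surprising; a quick check with the values from
      the commit:
      
      #include <stdio.h>
      
      static double dp_payload_gbps(double rate_gbps, int lanes)
      {
      	return rate_gbps * lanes * 0.8;     /* 8b/10b symbol encoding */
      }
      
      int main(void)
      {
      	printf("2 lanes @ 5.4 Gbps -> %.2f Gbps payload\n", dp_payload_gbps(5.4, 2));
      	printf("4 lanes @ 2.7 Gbps -> %.2f Gbps payload\n", dp_payload_gbps(2.7, 4));
      	return 0;
      }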
    • x86-64, modify_ldt: Make support for 16-bit segments a runtime option · fa81511b
      Committed by Linus Torvalds
      Checkin:
      
      b3b42ac2 x86-64, modify_ldt: Ban 16-bit segments on 64-bit kernels
      
      disabled 16-bit segments on 64-bit kernels due to an information
      leak.  However, it does seem that people are genuinely using Wine to
      run old 16-bit Windows programs on Linux.
      
      A proper fix for this ("espfix64") is coming in the upcoming merge
      window, but as a temporary fix, create a sysctl to allow the
      administrator to re-enable support for 16-bit segments.
      
      It adds a "/proc/sys/abi/ldt16" sysctl that defaults to zero (off). If
      you hit this issue and care about your old Windows program more than
      you care about a kernel stack address information leak, you can do
      
         echo 1 > /proc/sys/abi/ldt16
      
      as root (add it to your startup scripts), and you should be ok.
      
      The sysctl table is only added if you have COMPAT support enabled on
      x86-64, but I assume anybody who runs old windows binaries very much
      does that ;)
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/r/CA%2B55aFw9BPoD10U1LfHbOMpHWZkvJTkMcfCs9s3urPr1YyWBxw@mail.gmail.com
      Cc: <stable@vger.kernel.org>
      fa81511b
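      
      A hedged sketch of what such a sysctl table entry looks like (the table and
      variable names here are illustrative; the actual definitions live in the x86
      LDT code and may differ):
      
      static int sysctl_ldt16;	/* 0 by default: 16-bit segments stay disabled */
      
      static struct ctl_table abi_table[] = {
      	{
      		.procname	= "ldt16",
      		.data		= &sysctl_ldt16,
      		.maxlen		= sizeof(int),
      		.mode		= 0644,
      		.proc_handler	= proc_dointvec,
      	},
      	{ }
      };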
    • asm-generic: remove _STK_LIM_MAX · ffe6902b
      Committed by James Hogan
      _STK_LIM_MAX could be used to override the RLIMIT_STACK hard limit from
      an arch's include/uapi/asm-generic/resource.h file, but is no longer
      used since both parisc and metag removed the override. Therefore remove
      it entirely, setting the hard RLIMIT_STACK limit to RLIM_INFINITY
      directly in include/asm-generic/resource.h.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Helge Deller <deller@gmx.de>
      Cc: John David Anglin <dave.anglin@bell.net>
      ffe6902b
    • metag: Remove _STK_LIM_MAX override · c70458f5
      Committed by James Hogan
      Meta overrode _STK_LIM_MAX (the default RLIMIT_STACK hard limit) to
      256MB, apparently in an attempt to prevent setup_arg_pages's
      STACK_GROWSUP code from choosing the maximum stack size of 1GB, which is
      far too large for Meta's limited virtual address space and hits a BUG_ON
      (stack_top is usually 0x3ffff000).
      
      However the commit "metag: Reduce maximum stack size to 256MB" reduces
      the absolute stack size limit to a safe value for metag. This allows the
      default _STK_LIM_MAX override to be removed, bringing the default
      behaviour in line with all other architectures. Parisc in particular
      recently removed its override of _STK_LIM_MAX in commit e0d8898d
      (parisc: remove _STK_LIM_MAX override) since it subtly affects stack
      allocation semantics in userland. Meta's uapi/asm/resource.h can now be
      removed, switching to generic-y instead.
      Suggested-by: Helge Deller <deller@gmx.de>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-metag@vger.kernel.org
      Cc: John David Anglin <dave.anglin@bell.net>
      c70458f5
    • parisc,metag: Do not hardcode maximum userspace stack size · 042d27ac
      Committed by Helge Deller
      This patch affects only architectures where the stack grows upwards
      (currently parisc and metag only). On those, do not hardcode the maximum
      initial stack size to 1GB for 32-bit processes, but make it configurable
      via a config option.
      
      The main problem with the hardcoded stack size is that we have two
      memory regions which grow upwards: stack and heap. To keep most of the
      memory available for the heap in a flexmap memory layout, it makes no
      sense to hard-allocate up to 1GB of memory for the stack, which then
      cannot be used as heap.
      
      This patch makes the stack size for 32-bit processes configurable and
      uses 80MB as the default value, which has been in use on parisc during
      the last few years and hasn't shown any problems so far.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: linux-parisc@vger.kernel.org
      Cc: linux-metag@vger.kernel.org
      Cc: John David Anglin <dave.anglin@bell.net>
      042d27ac
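      
      A sketch of the arrangement described above (the identifiers mirror the
      commit text and may not match the tree exactly): the arch header derives
      its stack cap from the new config option instead of a hardcoded 1GB.
      
      /* e.g. in the arch's asm/processor.h; CONFIG_MAX_STACK_SIZE_MB defaults
       * to 80 for 32-bit processes per the description above */
      #define STACK_SIZE_MAX	(CONFIG_MAX_STACK_SIZE_MB * 1024 * 1024)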
    • metag: Reduce maximum stack size to 256MB · d71f290b
      Committed by James Hogan
      Specify the maximum stack size for arches where the stack grows upward
      (parisc and metag) in asm/processor.h rather than hardcoding it in
      fs/exec.c, so that metag can specify a smaller value of 256MB rather than
      1GB.
      
      This fixes a BUG on metag if the RLIMIT_STACK hard limit is increased
      beyond a safe value by root. E.g. when starting a process after running
      "ulimit -H -s unlimited" it will then attempt to use a stack size of the
      maximum 1GB which is far too big for metag's limited user virtual
      address space (stack_top is usually 0x3ffff000):
      
      BUG: failure at fs/exec.c:589/shift_arg_pages()!
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: linux-parisc@vger.kernel.org
      Cc: linux-metag@vger.kernel.org
      Cc: John David Anglin <dave.anglin@bell.net>
      Cc: stable@vger.kernel.org # only needed for >= v3.9 (arch/metag)
      d71f290b
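      
      A sketch of the clamp described above as it would appear in the
      STACK_GROWSUP path of setup_arg_pages() (a fragment; identifiers follow the
      description and may differ from the tree):
      
      	/* Limit the stack to the arch-provided maximum rather than a
      	 * hardcoded 1GB; metag defines STACK_SIZE_MAX as 256MB. */
      	stack_base = rlimit_max(RLIMIT_STACK);
      	if (stack_base > STACK_SIZE_MAX)
      		stack_base = STACK_SIZE_MAX;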
    • metag: fix memory barriers · 2425ce84
      Committed by Mikulas Patocka
      A volatile access doesn't really imply a compiler barrier. Volatile accesses
      are only ordered with respect to other volatile accesses; they aren't ordered
      with respect to general memory accesses. Gcc may reorder memory accesses
      around a volatile access, as we can see in this simple example (if we
      compile it with optimization, both increments of *b will be collapsed to
      just one):
      
      void fn(volatile int *a, long *b)
      {
      	(*b)++;
      	*a = 10;
      	(*b)++;
      }
      
      Consequently, we need the compiler barrier after a write to the volatile
      variable, to make sure that the compiler doesn't reorder the volatile
      write with something else.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      2425ce84
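      
      A sketch of the remedy (illustrative; the actual change is in metag's fence
      macros): follow the volatile store with an explicit compiler barrier so gcc
      can no longer merge or move the surrounding accesses.
      
      #define barrier()	__asm__ __volatile__("" : : : "memory")
      
      void fn(volatile int *a, long *b)
      {
      	(*b)++;
      	*a = 10;
      	barrier();	/* the two increments of *b can no longer be collapsed */
      	(*b)++;
      }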
  5. 14 May 2014 (2 commits)
  6. 13 May 2014 (5 commits)