1. 19 Aug 2013, 1 commit
  2. 23 Jul 2013, 3 commits
  3. 31 May 2013, 2 commits
  4. 03 Oct 2012, 1 commit
  5. 20 Jul 2012, 1 commit
  6. 21 Apr 2012, 1 commit
    • VM: add "vm_mmap()" helper function · 6be5ceb0
      Linus Torvalds authored
      This continues the theme started with vm_brk() and vm_munmap():
      vm_mmap() does the same thing as do_mmap(), but additionally does the
      required VM locking.
      
      This uninlines do_mmap() (and rewrites it to be clearer), which sadly
      means duplicating it in mm/mmap.c and mm/nommu.c.  But that way we don't
      have to export our internal do_mmap_pgoff() function.
      
      Some day we hopefully won't have to export do_mmap() either, if all
      modular users can switch to the simpler vm_mmap() instead.  We're
      actually very close to that already, with the notable exception of the
      (broken) use in i810, and a couple of stragglers in binfmt_elf.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
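
      As a rough illustration of the pattern described above, here is a minimal
      sketch of a wrapper that takes the VM lock around the unlocked mapping
      primitive.  It assumes the mmap_sem-based locking of kernels from that
      era and elides the checks the real vm_mmap() performs; the function name
      is illustrative only.

        #include <linux/mm.h>
        #include <linux/sched.h>

        /* Sketch only: perform the mapping under the VM lock, in the spirit
         * of vm_mmap(), so callers no longer take mmap_sem themselves. */
        static unsigned long locked_mmap(struct file *file, unsigned long addr,
                                         unsigned long len, unsigned long prot,
                                         unsigned long flag, unsigned long offset)
        {
                struct mm_struct *mm = current->mm;
                unsigned long ret;

                down_write(&mm->mmap_sem);      /* the "required VM locking" */
                ret = do_mmap(file, addr, len, prot, flag, offset);
                up_write(&mm->mmap_sem);

                return ret;
        }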
  7. 01 Nov 2011, 1 commit
  8. 14 Jun 2011, 2 commits
    • drm: Compare only lower 32 bits of framebuffer map offsets · 66aa6962
      Tormod Volden authored
      Drivers using multiple framebuffers were broken by commit
      41c2e75e, which ignored the framebuffer
      (or register) map offset when looking for existing maps.  The rationale
      was that the kernel-userspace ABI is fixed at a 32-bit offset, so the
      real offsets could not always be handed over for comparison.
      
      Instead of ignoring the offset we will compare the lower 32 bits.  Drivers
      using multiple framebuffers should just make sure that the lower 32 bits
      are different.  The existing drivers in question are practically limited
      to 32-bit systems, so that should be fine for them.
      
      It is assumed that current drivers always specify a correct framebuffer
      map offset, even if this offset was ignored since the above commit.  So
      this patch should not change anything for drivers using only one
      framebuffer.
      
      Drivers needing multiple framebuffers with 64-bit map offsets will need
      to cook up something, for instance keeping an ID in the lower bits which
      is aligned away when it comes to actually using the offset.
      
      All of the above applies to _DRM_REGISTERS as well.
      Signed-off-by: Tormod Volden <debian.tormod@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
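
      As a hedged sketch of the comparison described above (the helpers, the
      FB_MAP_ID_MASK value and their names are illustrative, not the actual
      DRM map-lookup code):

        #include <linux/types.h>

        /* Sketch: match maps on the low 32 bits of the offset only, since
         * the 32-bit userspace ABI cannot carry a full resource_size_t. */
        static bool map_offsets_match(resource_size_t a, resource_size_t b)
        {
                return (u32)a == (u32)b;
        }

        /* Drivers needing several 64-bit framebuffer maps could keep a small
         * ID in low bits that get aligned away before the offset is used. */
        #define FB_MAP_ID_MASK 0xfULL

        static resource_size_t fb_map_real_offset(resource_size_t token)
        {
                return token & ~(resource_size_t)FB_MAP_ID_MASK;
        }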
    • alpha/drm: Cleanup Alpha support in DRM generic code · 82ba3fef
      Jay Estabrook authored
      Remove an obsolete Alpha adjustment, and modify another,
      to go with the current Alpha architecture support.
      Signed-off-by: Jay Estabrook <jay.estabrook@gmail.com>
      Tested-by: Matt Turner <mattst88@gmail.com>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
  9. 12 Aug 2010, 1 commit
  10. 01 Jun 2010, 2 commits
  11. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  I.e. if only gfp is used,
        include gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It is put in the include block that contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed a manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of the
      specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
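
      For illustration, the per-file edit this sweep boils down to is simply
      making the slab dependency explicit; the file below is hypothetical:

        #include <linux/module.h>
        #include <linux/slab.h>         /* added by the sweep: kmalloc()/kfree()
                                         * are no longer visible via percpu.h */

        /* Before the sweep this compiled only because module.h -> percpu.h ->
         * slab.h made the slab API implicitly available. */
        static void *example_alloc(size_t n)
        {
                return kmalloc(n, GFP_KERNEL);
        }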
  12. 16 Mar 2010, 1 commit
  13. 07 Jan 2010, 1 commit
  14. 18 Sep 2009, 1 commit
  15. 19 Jun 2009, 1 commit
  16. 11 Jun 2009, 1 commit
  17. 06 Jun 2009, 1 commit
  18. 04 Jun 2009, 1 commit
  19. 20 May 2009, 1 commit
    • drm: Round size of SHM maps to PAGE_SIZE · b6741377
      Benjamin Herrenschmidt authored
      Currently, userspace can fail to obtain the SAREA mapping (among other
      reasons) if it passes SAREA_MAX to drmAddMap without aligning it to the
      page size.  This breaks, for example, on PowerPC with 64K pages and
      radeon, despite the kernel radeon driver actually doing the right
      rounding in the first place.
      
      The way SAREA_MAX is defined with a bunch of ifdefs and duplicated
      between libdrm and the X server is gross; ultimately it should be
      retrieved by userspace from the kernel, but in the meantime we have
      plenty of existing userspace built with bad values that needs to work.
      
      This patch works around broken userspace by rounding the requested size
      of any SHM map to the page size in drm_addmap_core().  Since the backing
      memory for SHM maps is also allocated within addmap_core, there is no
      danger of adjacent memory being exposed due to the increased map size.
      The only side effect is that drivers that previously tried to create or
      access SHM maps using a size < PAGE_SIZE and failed (getting -EINVAL)
      will now succeed, at the cost of a little more memory used if that
      happens to be when the map is created.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
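
      A minimal sketch of that rounding, using the kernel's standard
      PAGE_ALIGN() helper (the surrounding function is illustrative, not the
      actual drm_addmap_core()):

        #include <linux/mm.h>   /* PAGE_ALIGN(), PAGE_SIZE */

        /* Illustrative: round an SHM map's requested size up to a whole
         * number of pages so unaligned userspace requests still match. */
        static unsigned long shm_map_size(unsigned long requested)
        {
                return PAGE_ALIGN(requested);   /* e.g. 0x23ff -> 0x3000 with 4K pages */
        }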
  20. 13 Mar 2009, 4 commits
    • drm: Preserve SHMLBA bits in hash key for _DRM_SHM mappings. · f1a2a9b6
      David Miller authored
      Platforms such as sparc64 have D-cache aliasing issues.  We
      cannot allow virtual mappings in different contexts to be such
      that two cache lines can be loaded for the same backing data.
      Updates to one cache line won't be seen by accesses to the other
      cache line.
      
      Code in sparc64 and other architectures solves this problem by
      making sure that all userland mappings of MAP_SHARED objects have
      the same virtual address base.  These architectures implement this by
      keying off of the page offset, and using that to choose a suitably
      consistent virtual address for mmap() requests.
      
      Making things even worse, getting this wrong on sparc64 can result
      in hangs during DRM lock acquisition.  This is because, at least on
      UltraSPARC-III, normal loads consult the D-cache but atomics such
      as 'cas' (which is what cmpxchg() is implemented with) only consult
      the L2 cache.  So if a D-cache alias is inserted, the load can
      see different data than the atomic, and we'll loop forever because
      the atomic compare-and-exchange will never complete successfully.
      
      So to make this all work properly, we need to make sure that the
      hash address computed by drm_map_handle() preserves the SHMLBA
      relevant bits, and that's what this patch does for _DRM_SHM mappings.
      
      As a historical note, many years ago this bug didn't exist because we
      used to just use the low 32-bits of the address as the hash and just
      hope for the best.  This preserved the SHMLBA bits properly.  But when
      the hashtab code was added to DRM, this was no longer the case.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
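
      A hedged sketch of the idea of keeping the SHMLBA-relevant bits visible
      in the hash key (SHMLBA comes from asm/shmparam.h; the helper below is
      illustrative, not the actual drm_map_handle()):

        #include <asm/shmparam.h>       /* SHMLBA */

        /* Illustrative: combine a per-map base token with the low,
         * SHMLBA-relevant bits of the kernel address, so a userspace mmap()
         * keyed on this value gets the same D-cache colouring as the kernel
         * mapping of the object. */
        static unsigned long shm_hash_key(unsigned long base_token, void *kaddr)
        {
                unsigned long colour = (unsigned long)kaddr & (SHMLBA - 1);

                return (base_token & ~(unsigned long)(SHMLBA - 1)) | colour;
        }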
    • drm: Make drm_local_map use a resource_size_t offset · 41c2e75e
      Benjamin Herrenschmidt authored
      This changes drm_local_map to use a resource_size_t for its "offset"
      member instead of an unsigned long, thus allowing 32-bit machines
      with a >32-bit physical address space to store their register or
      framebuffer addresses there when those are above 4G, such as when
      using a PCI video card on a recent AMCC 440 SoC.
      
      This patch isn't as "trivial" as it sounds: A few functions needed
      to have some unsigned long/int changed to resource_size_t and a few
      printk's had to be adjusted.
      
      But also, because userspace isn't capable of passing such offsets,
      I had to modify drm_find_matching_map() to ignore the offset passed
      in for maps of type _DRM_FRAMEBUFFER or _DRM_REGISTERS.
      
      If we ever support multiple _DRM_FRAMEBUFFER or _DRM_REGISTERS maps
      for a given device, we might have to change that trick, but I don't
      think that happens on any current driver.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Dave Airlie <airlied@linux.ie>
    • drm: Split drm_map and drm_local_map · f77d390c
      Benjamin Herrenschmidt authored
      Once upon a time, the DRM made the distinction between the drm_map
      data structure exchanged with user space and the drm_local_map used
      in the kernel.
      
      For some reason, while the BSD port still has that "feature", the
      Linux part abused drm_map for kernel-internal usage, as the local
      map only existed as a typedef of the struct drm_map.
      
      This patch fixes it by declaring struct drm_local_map separately
      (though its content is currently identical to the userspace variant),
      and changing the kernel code to only use that, except when it's a
      user<->kernel interface (i.e. ioctls).
      
      This allows subsequent changes to the in-kernel format.
      
      I've also replaced the use of drm_local_map_t with struct drm_local_map
      in a couple of places.  Mostly by accident, but they are the same (the
      former is a typedef of the latter), and I have some remote plans and a
      half-finished patch to completely kill the drm_local_map_t typedef,
      so I left those bits in.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Eric Anholt <eric@anholt.net>
      Signed-off-by: Dave Airlie <airlied@linux.ie>
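
      A minimal sketch of the split described above (the field list follows
      the classic drm_map layout but is an assumption here; the real
      definitions live in the DRM headers):

        #include <linux/types.h>

        /* Userspace-facing structure, exchanged through ioctls: its layout
         * is ABI and cannot change. */
        struct drm_map {
                unsigned long offset;   /* physical address of the region */
                unsigned long size;     /* length of the region */
                int type;               /* _DRM_FRAME_BUFFER, _DRM_REGISTERS, ... */
                int flags;
                void *handle;           /* user-space token for the mapping */
                int mtrr;               /* MTRR slot used, or -1 */
        };

        /* Kernel-internal twin: starts out identical, but is now free to
         * diverge, e.g. to grow a resource_size_t offset later on. */
        struct drm_local_map {
                resource_size_t offset;
                unsigned long size;
                int type;
                int flags;
                void *handle;
                int mtrr;
        };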
    • drm: Use resource_size_t for drm_get_resource_{start, len} · d883f7f1
      Benjamin Herrenschmidt authored
      The DRM uses its own wrappers to obtain resources from PCI devices,
      which currently convert the resource_size_t into an unsigned long.
      
      This is broken on 32-bit platforms with >32-bit physical address
      space.
      
      This fixes them, along with a few occurrences of unsigned long used
      to store such a resource in drivers.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Dave Airlie <airlied@linux.ie>
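
      A hedged sketch of the truncation being fixed (pci_resource_start() and
      resource_size_t are standard kernel APIs; the wrapper names are
      illustrative):

        #include <linux/pci.h>

        /* Broken on 32-bit platforms with >32-bit physical addresses: the
         * resource_size_t value is silently truncated to 32 bits. */
        static unsigned long resource_start_truncated(struct pci_dev *pdev, int bar)
        {
                return (unsigned long)pci_resource_start(pdev, bar);
        }

        /* Fixed: carry the full-width type all the way to the caller. */
        static resource_size_t resource_start_full(struct pci_dev *pdev, int bar)
        {
                return pci_resource_start(pdev, bar);
        }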
  21. 03 Mar 2009, 1 commit
  22. 29 Dec 2008, 3 commits
  23. 14 Jul 2008, 1 commit
    • drm: reorganise drm tree to be more future proof. · c0e09200
      Dave Airlie authored
      With the coming of kernel-based modesetting and the memory manager work,
      the everything-in-one-directory approach was getting very ugly and
      starting to be unmanageable.
      
      This restructures the drm along the lines of other kernel components.
      
      It creates a drivers/gpu/drm directory and moves the hw drivers into
      subdirectories.  It moves the includes into include/drm, and
      sets up the unifdef for the userspace headers we should be exporting.
      Signed-off-by: Dave Airlie <airlied@redhat.com>
  24. 07 Feb 2008, 3 commits
  25. 20 Oct 2007, 1 commit
  26. 15 Oct 2007, 2 commits
  27. 25 Aug 2007, 1 commit