1. 07 Jan 2013, 3 commits
  2. 02 Jan 2013, 2 commits
  3. 20 Dec 2012, 1 commit
  4. 12 Dec 2012, 1 commit
  5. 29 Nov 2012, 1 commit
  6. 26 Nov 2012, 1 commit
  7. 22 Nov 2012, 1 commit
  8. 21 Nov 2012, 1 commit
  9. 13 Nov 2012, 1 commit
    • ARM: 7573/1: idmap: use flush_cache_louis() and flush TLBs only when necessary · e4067855
      Nicolas Pitre committed
      Flushing the cache is needed for the hardware to see the idmap table,
      and it can therefore be done at init time.  On ARMv7 it is not
      necessary to flush L2, so flush_cache_louis() is used here instead.
      
      There is no point flushing the cache in setup_mm_for_reboot() as the
      caller should, and already is, taking care of this.  If switching the
      memory map requires a cache flush, then cpu_switch_mm() already includes
      that operation.
      
      What is not done by cpu_switch_mm() on ASID capable CPUs is TLB flushing
      as the whole point of the ASID is to tag the TLBs and avoid flushing them
      on a context switch.  Since we don't have a clean ASID for the identity
      mapping, we need to flush the TLB explicitly in that case.  Otherwise
      this is already performed by cpu_switch_mm().
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
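
      A minimal sketch in C of the reboot path described above (illustrative,
      not the verbatim patch; idmap_pgd and init_mm as in arch/arm/mm):

      	void setup_mm_for_reboot(void)
      	{
      		/* Switching the memory map: cpu_switch_mm() already does
      		 * any cache maintenance it requires, so no flush here. */
      		cpu_switch_mm(idmap_pgd, &init_mm);

      		/* On ASID-capable CPUs a context switch deliberately skips
      		 * TLB flushing, and the idmap has no clean ASID of its
      		 * own, so flush explicitly here. */
      		local_flush_tlb_all();
      	}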
  10. 12 Nov 2012, 1 commit
  11. 09 Nov 2012, 4 commits
    • ARM: mm: introduce present, faulting entries for PAGE_NONE · 26ffd0d4
      Will Deacon committed
      PROT_NONE mappings apply the page protection attributes defined by
      _P000, which translate to PAGE_NONE for ARM.  These attributes specify
      an XN,
      RDONLY pte that is inaccessible to userspace. However, on kernels
      configured without support for domains, such a pte *is* accessible to
      the kernel and can be read via get_user, allowing tasks to read
      PROT_NONE pages via syscalls such as read/write over a pipe.
      
      This patch introduces a new software pte flag, L_PTE_NONE, that is set
      to identify faulting, present entries.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
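
      A sketch of the mechanism, for illustration only: the software
      L_PTE_NONE bit marks the entry, and the hardware valid bit is dropped
      when the pte is written, so every access faults while Linux still
      sees the page as present.

      	/* Illustrative helper, not the exact kernel code. */
      	static pteval_t harden_prot_none(pteval_t pteval)
      	{
      		if (pteval & L_PTE_NONE)	/* software: PROT_NONE	*/
      			pteval &= ~L_PTE_VALID;	/* hw: force a fault	*/
      		return pteval;			/* still present to Linux */
      	}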
    • ARM: mm: introduce L_PTE_VALID for page table entries · dbf62d50
      Will Deacon committed
      For long-descriptor translation table formats, the ARMv7 architecture
      defines the last two bits of the second- and third-level descriptors to
      be:
      
      	x0b	- Invalid
      	01b	- Block (second-level), Reserved (third-level)
      	11b	- Table (second-level), Page (third-level)
      
      This allows us to define L_PTE_PRESENT as (3 << 0) and use this value
      to create ptes directly.  However, when determining whether a given
      pte value is present in the low-level page table accessors, we only
      need to check the least significant bit of the descriptor, allowing
      us to write the faulting, present entries that PROT_NONE mappings
      require.
      
      This patch introduces L_PTE_VALID, which can be used to test whether a
      pte should fault, and updates the low-level page table accessors
      accordingly.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
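
      As a sketch, the encodings in the table above reduce to two masks,
      and the hardware-level presence check becomes a single-bit test
      (pte_will_fault is a hypothetical name):

      	#define L_PTE_VALID	(_AT(pteval_t, 1) << 0)	/* x1b		*/
      	#define L_PTE_PRESENT	(_AT(pteval_t, 3) << 0)	/* 11b		*/

      	/* Only bit 0 decides whether an access faults. */
      	static inline bool pte_will_fault(pte_t pte)
      	{
      		return !(pte_val(pte) & L_PTE_VALID);
      	}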
    • ARM: mm: don't use the access flag permissions mechanism for classic MMU · 0cbbbad6
      Will Deacon committed
      The simplified access permissions model is not used for the classic MMU
      translation regime, so ensure that it is turned off in the sctlr prior
      to turning on address translation for ARMv7.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
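
      For illustration only (the real change lives in the ARMv7 setup
      assembly): SCTLR.AFE is bit 29, and clearing it selects the classic
      AP[2:0] permissions model.

      	#define SCTLR_AFE	(1U << 29)

      	static inline void sctlr_clear_afe(void)	/* hypothetical */
      	{
      		unsigned int sctlr;

      		asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
      		sctlr &= ~SCTLR_AFE;
      		asm volatile("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
      	}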
    • ARM: mm: use pteval_t to represent page protection values · 864aa04c
      Will Deacon committed
      When updating the page protection map after calculating the user_pgprot
      value, the base protection map is temporarily stored in an unsigned long
      type, causing truncation of the protection bits when LPAE is enabled.
      This effectively means that calls to mprotect() will corrupt the upper
      page attributes, clearing the XN bit unconditionally.
      
      This patch uses pteval_t to store the intermediate protection values,
      preserving the upper bits for 64-bit descriptors.
      
      Cc: stable@vger.kernel.org
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
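
      A sketch of the bug class, assuming the LPAE placement of L_PTE_XN at
      bit 54: storing a 64-bit pteval_t in a 32-bit unsigned long drops the
      upper attribute bits.

      	static void demo_truncation(void)	/* illustrative only */
      	{
      		pteval_t prot = L_PTE_PRESENT | L_PTE_XN; /* XN: bit 54 */
      		unsigned long bad  = prot;	/* XN silently dropped	*/
      		pteval_t      good = prot;	/* upper bits preserved	*/
      	}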
  12. 07 Nov 2012, 1 commit
  13. 06 Nov 2012, 4 commits
    • ARM: implement debug_ll_io_init() · e5c5f2ad
      Rob Herring committed
      When using DEBUG_LL, the UART's (or other HW's) registers are mapped
      into early page tables based on the results of assembly macro addruart.
      Later, when the page tables are replaced, the same virtual address must
      remain valid. Historically, this has been ensured by using defines from
      <mach/iomap.h> in both the implementation of addruart, and the machine's
      .map_io() function. However, with the move to single zImage, we wish to
      remove <mach/iomap.h>. To enable this, the macro addruart may be used
      when constructing the late page tables too; addruart is exposed as a
      C function debug_ll_addr(), and used to set up the required mapping in
      debug_ll_io_init(), which may be called on an opt-in basis from a machine's
      .map_io() function.
      Signed-off-by: Rob Herring <rob.herring@calxeda.com>
      [swarren: Mask map.virtual with PAGE_MASK. Checked for NULL results from
       debug_ll_addr (e.g. when selected UART isn't valid). Fixed compile when
       either !CONFIG_DEBUG_LL or CONFIG_DEBUG_SEMIHOSTING.]
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
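
      For example, a machine might opt in from its .map_io hook (the
      machine and io-table names here are hypothetical):

      	static void __init mymach_map_io(void)
      	{
      		/* Re-create the DEBUG_LL UART mapping so it survives
      		 * the switch to the real page tables. */
      		debug_ll_io_init();
      		iotable_init(mymach_io_desc, ARRAY_SIZE(mymach_io_desc));
      	}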
    • ARM: mm: use bitmap operations when allocating new ASIDs · bf51bb82
      Will Deacon committed
      When allocating a new ASID, we must take care not to re-assign a
      reserved ASID-value to a new mm. This requires us to check each
      candidate ASID against those currently reserved by other cores before
      assigning a new ASID to the current mm.
      
      This patch improves the ASID allocation algorithm by using a
      bitmap-based approach. Rather than iterating over the reserved ASID
      array for each candidate ASID, we simply find the first zero bit,
      ensuring that those indices corresponding to reserved ASIDs are set
      when flushing during a rollover event.
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
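
      A sketch of the allocation step, assuming an asid_map bitmap whose
      reserved bits were set during the rollover flush (names illustrative):

      	static unsigned int new_asid(void)
      	{
      		/* ASID 0 is kept out of use, so search from 1. */
      		unsigned int asid = find_next_zero_bit(asid_map, NUM_ASIDS, 1);

      		if (asid == NUM_ASIDS)
      			asid = rollover();	/* hypothetical: bump the
      						 * generation, reset map */
      		__set_bit(asid, asid_map);
      		return asid;
      	}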
    • ARM: mm: avoid taking ASID spinlock on fastpath · 4b883160
      Will Deacon committed
      When scheduling a new mm, we take a spinlock so that we can:
      
        1. Safely allocate a new ASID, if required
        2. Update our active_asids field without worrying about parallel
           updates to reserved_asids
        3. Ensure that we flush our local TLB, if required
      
      However, this has the nasty effect of serialising context-switch
      across all CPUs in the system.  The usual (fast) case is where the
      next mm has a valid ASID for the current generation.  In that
      scenario, we can avoid taking the lock and instead use atomic64_xchg
      to update the active_asids variable for the current CPU.  If a
      rollover occurs on another CPU (which would take the lock), each
      active_asids entry is replaced with 0 via another atomic64_xchg as it
      is copied into reserved_asids.  The fast path can then detect this
      case and fall back to spinning on the lock.
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
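
      A sketch of the fast path, with names loosely following the patch
      (generation(), asid_generation and active_asids are assumptions here):

      	static bool try_fastpath(struct mm_struct *mm, unsigned int cpu)
      	{
      		u64 asid = atomic64_read(&mm->context.id);

      		/* Same generation and our slot not zeroed by a rollover:
      		 * publish the ASID with xchg and skip the spinlock. */
      		if (generation(asid) == atomic64_read(&asid_generation) &&
      		    atomic64_xchg(&per_cpu(active_asids, cpu), asid))
      			return true;

      		return false;	/* slow path: spin on the lock */
      	}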
    • ARM: mm: remove IPI broadcasting on ASID rollover · b5466f87
      Will Deacon committed
      ASIDs are allocated to MMU contexts based on a rolling counter. This
      means that after 255 allocations we must invalidate all existing ASIDs
      via an expensive IPI mechanism to synchronise all of the online CPUs and
      ensure that all tasks execute with an ASID from the new generation.
      
      This patch changes the rollover behaviour so that we rely instead on the
      hardware broadcasting of the TLB invalidation to avoid the IPI calls.
      This works by keeping track of the active ASID on each core, which is
      then reserved in the case of a rollover so that currently scheduled
      tasks can continue to run. For cores without hardware TLB broadcasting,
      we keep track of pending flushes in a cpumask, so cores can flush their
      local TLB before scheduling a new mm.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
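
      A sketch of the pending-flush bookkeeping for cores without hardware
      TLB broadcast (the cpumask name is assumed):

      	static cpumask_t tlb_flush_pending;

      	/* Rollover path: every core owes itself a local TLB flush. */
      	cpumask_setall(&tlb_flush_pending);

      	/* Per-core context-switch path, before running the new mm: */
      	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
      		local_flush_tlb_all();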
  14. 04 Nov 2012, 1 commit
  15. 25 Oct 2012, 1 commit
  16. 24 Oct 2012, 2 commits
  17. 18 Oct 2012, 2 commits
  18. 14 Oct 2012, 1 commit
    • ARM: config: sort select statements alphanumerically · b1b3f49c
      Russell King committed
      As suggested by Andrew Morton:
      
        This is a pet peeve of mine.  Any time there's a long list of items
        (header file inclusions, kconfig entries, array initialisers, etc) and
        someone wants to add a new item, they *always* go and stick it at the
        end of the list.
      
        Guys, don't do this.  Either put the new item into a randomly-chosen
        position or, probably better, alphanumerically sort the list.
      
      Let's sort all our select statements alphanumerically.  This commit
      was created by the following perl:
      
      while (<>) {
      	# Join continuation lines that end with a backslash.
      	while (/\\\s*$/) {
      		$_ .= <>;
      	}
      	# A new "config" entry starts a fresh set of selects.
      	undef %selects if /^\s*config\s+/;
      	if (/^\s+select\s+(\w+).*/) {
      		if (defined($selects{$1})) {
      			if ($selects{$1} eq $_) {
      				print STDERR "Warning: removing duplicated $1 entry\n";
      			} else {
      				print STDERR "Error: $1 differently selected\n".
      					"\tOld: $selects{$1}\n".
      					"\tNew: $_\n";
      				exit 1;
      			}
      		}
      		$selects{$1} = $_;	# remember; printed later, sorted
      		next;
      	}
      	# At the end of an entry, emit the collected selects in sorted order.
      	if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or
      			  /^endif/ or /^endchoice/)) {
      		foreach $k (sort (keys %selects)) {
      			print "$selects{$k}";
      		}
      		undef %selects;
      	}
      	print;
      }
      # Flush any selects still pending at end of input.
      if (%selects) {
      	foreach $k (sort (keys %selects)) {
      		print "$selects{$k}";
      	}
      }
      
      It found two duplicates:
      
      Warning: removing duplicated S5P_SETUP_MIPIPHY entry
      Warning: removing duplicated HARDIRQS_SW_RESEND entry
      
      and they are identical duplicates, hence the shrinkage in the diffstat
      of two lines.
      
      We have four testers reporting success of this change (Tony, Stephen,
      Linus and Sekhar).
      Acked-by: Jason Cooper <jason@lakedaemon.net>
      Acked-by: Tony Lindgren <tony@atomide.com>
      Acked-by: Stephen Warren <swarren@nvidia.com>
      Acked-by: Linus Walleij <linus.walleij@linaro.org>
      Acked-by: Sekhar Nori <nsekhar@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  19. 10 Oct 2012, 1 commit
    • ARM: Fix another build warning in arch/arm/mm/alignment.c · 31d2a638
      Arnd Bergmann committed
      One such warning was recently fixed in a761cebf ("ARM: Fix build
      warning in arch/arm/mm/alignment.c"), but only for the thumb2 case;
      this fixes the other half.
      
      arch/arm/mm/alignment.c: In function 'do_alignment':
      arch/arm/mm/alignment.c:327:15: error: 'offset.un' may be used uninitialized in this function
      arch/arm/mm/alignment.c:748:21: note: 'offset.un' was declared here
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
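
      The usual shape of a fix for this class of warning is to give the
      union a definite initial value on every path, e.g. (a sketch, not
      necessarily the exact patch):

      	/* Initialize at declaration so no path reads it undefined. */
      	union offset_union offset = { .un = 0 };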
  20. 09 Oct 2012, 2 commits
    • readahead: fault retry breaks mmap file read random detection · 45cac65b
      Shaohua Li committed
      .fault can now retry.  The retry can break the .fault state machine.
      In filemap_fault, if the page misses, ra->mmap_miss is increased.  On
      the second try, since the page is now in the page cache,
      ra->mmap_miss is decreased.  Both happen within one fault, so we
      can't detect random mmap file access.

      Add a new flag to indicate that .fault has been tried once.  On the
      second try, skip the ra->mmap_miss decrease.  The filemap_fault state
      machine is fine with this.

      I only tested x86, not other archs, but the change for other archs
      looks obvious, but who knows :)
      Signed-off-by: Shaohua Li <shaohua.li@fusionio.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
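
      A sketch of the check in the fault path, using the FAULT_FLAG_TRIED
      flag this patch introduces (surrounding logic simplified):

      	if (!page_found) {
      		ra->mmap_miss++;		/* first-try miss	*/
      	} else if (!(vmf->flags & FAULT_FLAG_TRIED)) {
      		if (ra->mmap_miss > 0)
      			ra->mmap_miss--;	/* genuine cache hit	*/
      	}
      	/* On a retry the hit is an artifact of the first attempt,
      	 * so mmap_miss is left alone. */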
    • mm: replace vma prio_tree with an interval tree · 6b2dbba8
      Michel Lespinasse committed
      Implement an interval tree as a replacement for the VMA prio_tree.  The
      algorithms are similar to lib/interval_tree.c; however that code can't be
      directly reused as the interval endpoints are not explicitly stored in the
      VMA.  So instead, the common algorithm is moved into a template and the
      details (node type, how to get interval endpoints from the node, etc) are
      filled in using the C preprocessor.
      
      Once the interval tree functions are available, using them as a
      replacement to the VMA prio tree is a relatively simple, mechanical job.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
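
      A sketch of the template instantiation, using the INTERVAL_TREE_DEFINE
      macro from include/linux/interval_tree_generic.h with a hypothetical
      node type:

      	struct it_node {
      		struct rb_node rb;
      		unsigned long start, last;	/* interval endpoints	*/
      		unsigned long subtree_last;	/* kept by the template	*/
      	};

      	static unsigned long it_start(struct it_node *n) { return n->start; }
      	static unsigned long it_last(struct it_node *n)  { return n->last; }

      	INTERVAL_TREE_DEFINE(struct it_node, rb, unsigned long, subtree_last,
      			     it_start, it_last, static, my_it)
      	/* expands to my_it_insert(), my_it_remove(), my_it_iter_first(), ... */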
  21. 02 Oct 2012, 6 commits
  22. 29 Sep 2012, 2 commits