1. 22 Jul 2007, 1 commit
  2. 22 May 2007, 1 commit
    • Detach sched.h from mm.h · e8edc6e0
      Authored by Alexey Dobriyan
      The first thing mm.h does is include sched.h, solely for the can_do_mlock()
      inline function, which dereferences "current" internally. By dealing with
      can_do_mlock(), mm.h can be detached from sched.h, which is good. See below
      for why.
      
      This patch
      a) removes the unconditional inclusion of sched.h from mm.h
      b) makes can_do_mlock() a normal function in mm/mlock.c
      c) exports can_do_mlock() so compilation doesn't break
      d) adds sched.h inclusions back to files that were getting it indirectly
      e) adds less bloated headers (asm/signal.h, jiffies.h) to some files that
         were getting them indirectly
      
      Net result:
      a) mm.h users get less code to open, read, preprocess, parse, ... if
         they don't need sched.h
      b) sched.h stops being a dependency for a significant number of files:
         on x86_64 allmodconfig, touching sched.h results in a recompile of
         4083 files; after the patch it's only 3744 (-8.3%).
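      The header-decoupling pattern in steps a)-c) can be sketched as a single
      runnable file. The before/after split is shown in comments; the body of
      can_do_mlock() here is a simplified, hypothetical stand-in for the real
      capability/rlimit check, not the kernel's actual code.

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* Before the patch (sketched): mm.h carried a static inline that
       * dereferenced `current`, forcing every user of mm.h to pull in sched.h:
       *
       *   // include/linux/mm.h (before)
       *   #include <linux/sched.h>          // needed only for can_do_mlock()
       *   static inline int can_do_mlock(void) { ... current-> ... }
       *
       * After the patch the header keeps only a declaration, so the sched.h
       * include can be dropped. */

      /* -- mm.h (after): declaration only, no sched.h required -- */
      int can_do_mlock(void);

      /* -- mm/mlock.c (after): out-of-line definition. Fields are made up. -- */
      struct task { unsigned long memlock_limit; int has_cap_ipc_lock; };
      static struct task current_task = { .memlock_limit = 65536, .has_cap_ipc_lock = 0 };

      int can_do_mlock(void)
      {
          if (current_task.has_cap_ipc_lock)   /* stand-in for capable(CAP_IPC_LOCK) */
              return 1;
          if (current_task.memlock_limit != 0) /* stand-in for the RLIMIT_MEMLOCK check */
              return 1;
          return 0;
      }
      /* EXPORT_SYMBOL(can_do_mlock);  -- step (c): keep modules linking */

      int main(void)
      {
          assert(can_do_mlock() == 1);         /* nonzero rlimit: allowed */
          current_task.memlock_limit = 0;
          current_task.has_cap_ipc_lock = 0;
          assert(can_do_mlock() == 0);         /* no capability, zero rlimit */
          puts("ok");
          return 0;
      }
      ```

      The point of step (c) is that once the function is out of line, modules
      that previously inlined it now need the symbol at link time.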
      
      Cross-compile tested on
      
      	all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
      	alpha alpha-up
      	arm
      	i386 i386-up i386-defconfig i386-allnoconfig
      	ia64 ia64-up
      	m68k
      	mips
      	parisc parisc-up
      	powerpc powerpc-up
      	s390 s390-up
      	sparc sparc-up
      	sparc64 sparc64-up
      	um-x86_64
      	x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
      
      as well as my two usual configs.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 03 May 2007, 1 commit
    • [PATCH] x86-64: Remove duplicated code for reading control registers · fbc16f2c
      Authored by Glauber de Oliveira Costa
      On Tue, Mar 13, 2007 at 05:33:09AM -0700, Randy.Dunlap wrote:
      > On Tue, 13 Mar 2007, Glauber de Oliveira Costa wrote:
      >
      > > Tiny cleanup:
      > >
      > > In x86_64, the same functions for reading cr3 and writing cr{3,4} are
      > > defined in tlbflush.h and system.h, with just a name change.
      > > The only difference is the clobbering of memory, which seems a safe, and
      > > even needed, change for write_cr4. This patch removes the duplicate.
      > > write_cr3() is moved to system.h for consistency.
      >
      > missing patch.....
      >
      Thanks. Attached now.
      
      --
      Glauber de Oliveira Costa
      Red Hat Inc.
      "Free as in Freedom"
      Signed-off-by: Andi Kleen <ak@suse.de>
  4. 26 Sep 2006, 1 commit
    • [PATCH] Clean up and minor fixes to TLB flush · b1c78c0f
      Authored by Andi Kleen
      - Convert CR* accesses to dedicated inline functions and rewrite
        the rest as C inlines
      - Don't do a double flush for global flushes (pointed out by Zach Amsden).
        This was a bug workaround for old CPUs that don't do 64-bit and is obsolete.
      - Add a proper memory clobber to invlpg
      - Remove an unused extern
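      The "memory clobber" in the second-to-last item is a GCC extended-asm
      constraint: it tells the compiler the instruction may read or write
      memory, so cached values and memory accesses can't be moved across it.
      The real invlpg is privileged, so this runnable sketch (an assumption,
      not the kernel's code) uses an empty asm with the same clobber, which
      acts as a pure compiler barrier on any GCC/Clang target:

      ```c
      #include <assert.h>
      #include <stdio.h>

      /* In the kernel the pattern (sketched) is roughly:
       *   asm volatile("invlpg (%0)" :: "r" (addr) : "memory");
       * Without the "memory" clobber, the compiler is free to keep page-table
       * bytes cached in registers across the flush, or to reorder stores past
       * it -- exactly the bug class this commit's cleanup guards against. */

      static void compiler_barrier(void)
      {
          asm volatile("" ::: "memory");   /* same clobber, no instruction */
      }

      static int pte = 0;

      int main(void)
      {
          pte = 1;
          compiler_barrier();              /* compiler must commit the store above */
          int seen = pte;                  /* and must reload, not reuse a register */
          compiler_barrier();
          assert(seen == 1);
          puts("barrier ok");
          return 0;
      }
      ```

      The barrier only constrains the compiler; the CPU-level ordering for
      invlpg comes from the instruction itself.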
      Signed-off-by: Andi Kleen <ak@suse.de>
  5. 26 Apr 2006, 1 commit
  6. 13 Sep 2005, 1 commit
    • [PATCH] x86-64: Increase TLB flush array size · 2b4a0815
      Authored by Andi Kleen
      The generic TLB flush functions kept up to 506 pages per
      CPU to avoid overly frequent IPIs.
      
      This value was chosen for the L1 cache of older x86 CPUs,
      but with modern CPUs it does not make much sense anymore.
      TLB flushing is slow enough that using the L2 cache is fine.
      
      This patch increases the flush array on x86-64 to cache
      5350 pages. That is roughly 20MB with 4K pages. It speeds
      up large munmaps in multithreaded processes on SMP considerably.
      
      The cost is roughly 42k of memory per CPU, which is reasonable.
      
      I only increased it on x86-64 for now, but it would probably
      make sense to increase it everywhere. Embedded architectures
      with SMP may keep it smaller to save some memory per CPU.
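      The idea being tuned here is a batch-and-flush pattern: freed pages are
      parked in a per-CPU array and the expensive flush (with its IPIs) runs
      once per full array rather than once per page. This runnable sketch uses
      made-up names (`flush_batch`, `batch_free_page`); only the 5350-entry
      size comes from the commit message. Note 5350 pointers is about 42 KB,
      matching the stated per-CPU cost.

      ```c
      #include <assert.h>
      #include <stdio.h>

      #define FLUSH_ARRAY_SIZE 5350            /* pages batched per CPU after the patch */

      struct flush_batch {
          void *pages[FLUSH_ARRAY_SIZE];       /* ~42 KB of pointers on 64-bit */
          unsigned int nr;                     /* pages currently batched */
          unsigned int flushes;                /* expensive flush+IPI rounds run */
      };

      static void tlb_flush(struct flush_batch *b)
      {
          /* In the kernel this is where the TLB flush IPI goes out and the
           * batched pages are actually freed. Here we just count rounds. */
          b->flushes++;
          b->nr = 0;
      }

      static void batch_free_page(struct flush_batch *b, void *page)
      {
          b->pages[b->nr++] = page;
          if (b->nr == FLUSH_ARRAY_SIZE)       /* array full: one flush covers the batch */
              tlb_flush(b);
      }

      int main(void)
      {
          static struct flush_batch b;         /* static: too big for the stack */
          /* Unmapping 20000 pages costs ceil(20000/5350) = 4 flush rounds,
           * not 20000 individual flushes. */
          for (int i = 0; i < 20000; i++)
              batch_free_page(&b, (void *)0);
          if (b.nr)
              tlb_flush(&b);                   /* drain the partial final batch */
          assert(b.flushes == 4);
          printf("%u flush rounds\n", b.flushes);
          return 0;
      }
      ```

      A larger array trades per-CPU memory for fewer IPI rounds, which is why
      the commit leaves room for embedded SMP configs to keep it small.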
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  7. 29 Jul 2005, 1 commit
  8. 28 Jun 2005, 1 commit
    • [PATCH] seccomp: tsc disable · ffaa8bd6
      Authored by Andrea Arcangeli
      I believe that at least for seccomp it's worth turning off the TSC, not
      just for HT but for the L2 cache too.  So it's up to you: either you turn
      it off completely (which isn't very nice IMHO), or I recommend applying
      the patch below.
      
      This has been tested successfully on x86-64 against the current cogito
      repository (i686 compiles, so I didn't bother testing ;).  People selling
      CPU time through cpushare may appreciate this bit for peace of mind.
      
      There's no way to get any timing info anymore with this applied
      (gettimeofday is forbidden, of course).  The seccomp environment is
      completely deterministic, so it can't be allowed to get timing info.  It
      has to be deterministic so that in the future I can enable a computing
      mode that runs each task in parallel, with server-side transparent
      checkpointing and verification that the output is the same from all the
      2/3 seller computers for each task, without the buyer even noticing.
      (For now the verification is left to the buyer's client side and there's
      no checkpointing, since that would require more kernel changes to track
      the dirty bits, but it will be easy to extend once the basic mode is
      finished.)
      
      Eliminating a cold-cache read of the cr4 global variable will save one
      cacheline during the tlb flush while making the code per-cpu-safe at the
      same time.  Thanks to Mikael Pettersson for noticing the tlb flush wasn't
      per-cpu-safe.
      
      The global tlb flush can run from irq (IPI calling do_flush_tlb_all) but
      it'll be transparent to the switch_to code since the IPI won't make any
      change to the cr4 contents from the point of view of the interrupted code
      and since it's now all per-cpu stuff, it will not race.  So no need to
      disable irqs in switch_to slow path.
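      The per-CPU cr4 handling described above boils down to a shadow-copy
      pattern: every write to the real register also updates a cached per-CPU
      copy, so readers (like the TLB flush path) never touch a cold global or
      issue a slow register read.  This portable sketch simulates the hardware
      register with a variable; the function names and the simulated register
      are assumptions, but X86_CR4_TSD (bit 2, the Time Stamp Disable bit that
      makes rdtsc fault in user mode) matches the real layout.

      ```c
      #include <assert.h>
      #include <stdio.h>

      #define X86_CR4_TSD (1UL << 2)           /* Time Stamp Disable bit in CR4 */

      static unsigned long hw_cr4;             /* stands in for the real control register */
      static unsigned long cpu_cr4_shadow;     /* per-CPU cached copy kept by the kernel */

      static void write_cr4(unsigned long v)
      {
          hw_cr4 = v;                          /* kernel: asm volatile("mov %0,%%cr4" ...) */
          cpu_cr4_shadow = v;                  /* keep the cache coherent with hardware */
      }

      static unsigned long read_cr4_cached(void)
      {
          return cpu_cr4_shadow;               /* no cold global read, no cr4 access */
      }

      /* What the seccomp path does when switching in an untrusted task. */
      static void disable_tsc(void)
      {
          write_cr4(read_cr4_cached() | X86_CR4_TSD);
      }

      int main(void)
      {
          write_cr4(0x6f0);                    /* arbitrary illustrative starting value */
          disable_tsc();
          assert(hw_cr4 & X86_CR4_TSD);        /* rdtsc now faults for user mode */
          assert(read_cr4_cached() == hw_cr4); /* cache never drifts from hardware */
          puts("TSD set");
          return 0;
      }
      ```

      Because the IPI'd flush path only reads cr4 and writes back the identical
      value, the interrupted switch_to code sees no change through this shadow,
      which is why no irq disabling is needed.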
      Signed-off-by: Andrea Arcangeli <andrea@cpushare.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  9. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!