1. 04 Feb, 2015 (1 commit)
  2. 12 Sep, 2014 (1 commit)
  3. 25 Jan, 2014 (1 commit)
  4. 12 Sep, 2013 (1 commit)
    • mm: vmstats: track TLB flush stats on UP too · 6df46865
      Committed by Dave Hansen
      The previous patch doing vmstats for TLB flushes ("mm: vmstats: tlb flush
      counters") effectively missed UP since arch/x86/mm/tlb.c is only compiled
      for SMP.
      
      UP systems do not do remote TLB flushes, so compile those counters out on
      UP.
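      A hedged sketch of how such a guard might look in
      include/linux/vm_event_item.h (the counter names come from the TLB
      flush counter work; exact placement is assumed):

          enum vm_event_item {
                  /* ... other counters ... */
          #ifdef CONFIG_SMP
                  NR_TLB_REMOTE_FLUSH,            /* cpu tried to flush others' tlbs */
                  NR_TLB_REMOTE_FLUSH_RECEIVED,   /* cpu received ipi for flush */
          #endif
                  NR_TLB_LOCAL_FLUSH_ALL,
                  NR_TLB_LOCAL_FLUSH_ONE,
                  NR_VM_EVENT_ITEMS
          };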
      
      arch/x86/kernel/cpu/mtrr/generic.c calls __flush_tlb() directly.  This is
      probably an optimization since both the mtrr code and __flush_tlb() write
      cr4.  It would probably be safe to make that a flush_tlb_all() (and then
      get these statistics), but the mtrr code is ancient and I'm hesitant to
      touch it other than to just stick in the counters.
      
      [akpm@linux-foundation.org: tweak comments]
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6df46865
  5. 26 Jun, 2013 (2 commits)
    • x86, asm, cleanup: Replace open-coded control register values with symbolic · a3d7b7dd
      Committed by H. Peter Anvin
      Clean up unnecessary open-coded control register values.
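      A hedged illustration of the pattern (the numeric value is the
      canonical CR0 boot setting; the symbolic names come from
      <asm/processor-flags.h>):

          #include <asm/processor-flags.h>

          /* before: magic number */
          write_cr0(0x80050033);

          /* after: the same bits, spelled out */
          write_cr0(X86_CR0_PG | X86_CR0_AM | X86_CR0_WP |
                    X86_CR0_NE | X86_CR0_ET | X86_CR0_MP | X86_CR0_PE);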
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Link: http://lkml.kernel.org/n/tip-um7za1nzf6brb17o0h4om6e3@git.kernel.org
      a3d7b7dd
    • x86: Fix /proc/mtrr with base/size more than 44bits · d5c78673
      Committed by Yinghai Lu
      On one system whose MTRR ranges need more than 44 bits, dmesg shows:
      [    0.000000] MTRR default type: write-back
      [    0.000000] MTRR fixed ranges enabled:
      [    0.000000]   00000-9FFFF write-back
      [    0.000000]   A0000-BFFFF uncachable
      [    0.000000]   C0000-DFFFF write-through
      [    0.000000]   E0000-FFFFF write-protect
      [    0.000000] MTRR variable ranges enabled:
      [    0.000000]   0 [000080000000-0000FFFFFFFF] mask 3FFF80000000 uncachable
      [    0.000000]   1 [380000000000-38FFFFFFFFFF] mask 3F0000000000 uncachable
      [    0.000000]   2 [000099000000-000099FFFFFF] mask 3FFFFF000000 write-through
      [    0.000000]   3 [00009A000000-00009AFFFFFF] mask 3FFFFF000000 write-through
      [    0.000000]   4 [381FFA000000-381FFBFFFFFF] mask 3FFFFE000000 write-through
      [    0.000000]   5 [381FFC000000-381FFC0FFFFF] mask 3FFFFFF00000 write-through
      [    0.000000]   6 [0000AD000000-0000ADFFFFFF] mask 3FFFFF000000 write-through
      [    0.000000]   7 [0000BD000000-0000BDFFFFFF] mask 3FFFFF000000 write-through
      [    0.000000]   8 disabled
      [    0.000000]   9 disabled
      
      but /proc/mtrr reports the wrong values:
      reg00: base=0x080000000 ( 2048MB), size= 2048MB, count=1: uncachable
      reg01: base=0x80000000000 (8388608MB), size=1048576MB, count=1: uncachable
      reg02: base=0x099000000 ( 2448MB), size=   16MB, count=1: write-through
      reg03: base=0x09a000000 ( 2464MB), size=   16MB, count=1: write-through
      reg04: base=0x81ffa000000 (8519584MB), size=   32MB, count=1: write-through
      reg05: base=0x81ffc000000 (8519616MB), size=    1MB, count=1: write-through
      reg06: base=0x0ad000000 ( 2768MB), size=   16MB, count=1: write-through
      reg07: base=0x0bd000000 ( 3024MB), size=   16MB, count=1: write-through
      reg08: base=0x09b000000 ( 2480MB), size=   16MB, count=1: write-combining
      
      so bits 44 and 45 get cut off.
      
      We have problems in arch/x86/kernel/cpu/mtrr/generic.c::generic_get_mtrr():
      1. For base, we fail to cast base_lo to 64 bits before shifting.
      Fix that by adding a u64 cast.

      2. For size, the code can only handle 44 bits (32 bits + PAGE_SHIFT).
      Fix that by using a 64-bit mask instead of the 32-bit mask_lo, so the
      range can exceed 44 bits. At the same time, we need to update
      size_or_mask for old cpus that do support cpuid 0x80000008 to get
      phys_addr: set the high 32 bits to all 1s, otherwise we will not get
      the correct size for them. A sketch of both fixes follows.
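      A hedged illustration of the truncation and the widening casts
      (base_hi/base_lo and mask_hi/mask_lo are the MSR halves read in
      generic_get_mtrr(); the exact diff is assumed):

          /* before: 32-bit arithmetic silently drops bits 44 and 45 */
          *base = base_hi << (32 - PAGE_SHIFT) | base_lo >> PAGE_SHIFT;

          /* after: combine in 64 bits before shifting */
          *base = ((u64)base_hi << 32 | base_lo) >> PAGE_SHIFT;

          /* size: build the mask as a full 64-bit value instead of
           * working only on the 32-bit mask_lo: */
          u64 mask = (u64)mask_hi << 32 | mask_lo;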
      
      Also fix mtrr_add_page(): it should check base and (base + size - 1)
      instead of base and size, since base and size can each be small while
      base + size is big enough to be out of bounds. We can use
      boot_cpu_data.x86_phys_bits directly and avoid size_or_mask, as in
      the sketch below.
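      A hedged sketch of that boundary check in mtrr_add_page() (base and
      size are in pages; the warning text and error path are assumed from
      context, not copied from the diff):

          if ((base | (base + size - 1)) >>
              (boot_cpu_data.x86_phys_bits - PAGE_SHIFT)) {
                  pr_warning("mtrr: base or size exceeds the MTRR width\n");
                  return -EINVAL;
          }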
      
      So when are we going to have a size of more than 44 bits? That is
      16 TiB.
      
      After the patch we have the right output:
      reg00: base=0x080000000 ( 2048MB), size= 2048MB, count=1: uncachable
      reg01: base=0x380000000000 (58720256MB), size=1048576MB, count=1: uncachable
      reg02: base=0x099000000 ( 2448MB), size=   16MB, count=1: write-through
      reg03: base=0x09a000000 ( 2464MB), size=   16MB, count=1: write-through
      reg04: base=0x381ffa000000 (58851232MB), size=   32MB, count=1: write-through
      reg05: base=0x381ffc000000 (58851264MB), size=    1MB, count=1: write-through
      reg06: base=0x0ad000000 ( 2768MB), size=   16MB, count=1: write-through
      reg07: base=0x0bd000000 ( 3024MB), size=   16MB, count=1: write-through
      reg08: base=0x09b000000 ( 2480MB), size=   16MB, count=1: write-combining
      
      -v2: simplify the check in mtrr_add_page according to hpa.
      
      [ hpa: This probably wants to go into -stable only after having sat in
        mainline for a bit.  It is not a regression. ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/1371162815-29931-1-git-send-email-yinghai@kernel.org
      Cc: <stable@vger.kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      d5c78673
  6. 19 Jun, 2013 (1 commit)
  7. 31 May, 2013 (1 commit)
    • Add arch_phys_wc_{add, del} to manipulate WC MTRRs if needed · d0d98eed
      Committed by Andy Lutomirski
      Several drivers currently use mtrr_add through various #ifdef guards
      and/or drm wrappers.  The vast majority of them want to add WC MTRRs
      on x86 systems and don't actually need the MTRR if PAT (i.e.
      ioremap_wc, etc.) is working.
      
      arch_phys_wc_add and arch_phys_wc_del are new functions, available
      on all architectures and configurations, that add WC MTRRs on x86 if
      needed (and handle errors) and do nothing at all otherwise.  They're
      also easier to use than mtrr_add and mtrr_del, so the call sites can
      be simplified.
      
      As an added benefit, this will avoid wasting MTRRs and possibly
      warning pointlessly on PAT-supporting systems.
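      A hedged usage sketch for a hypothetical framebuffer driver
      (fb_base, fb_size, and the surrounding driver code are invented;
      the two calls are the API this patch adds):

          #include <linux/io.h>

          /* On x86 without working PAT this adds a WC MTRR and returns
           * a positive cookie; everywhere else it does nothing and
           * returns 0. Errors are handled inside. */
          int wc_cookie = arch_phys_wc_add(fb_base, fb_size);

          /* ... use the write-combined mapping ... */

          arch_phys_wc_del(wc_cookie);  /* safe for any returned value */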
      Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      d0d98eed
  8. 21 Jan, 2013 (1 commit)
  9. 07 Dec, 2012 (1 commit)
  10. 15 Nov, 2012 (1 commit)
  11. 10 Jul, 2012 (2 commits)
  12. 31 May, 2012 (1 commit)
  13. 29 Mar, 2012 (1 commit)
  14. 02 Mar, 2012 (1 commit)
  15. 05 Dec, 2011 (2 commits)
  16. 26 Aug, 2011 (1 commit)
  17. 02 Jul, 2011 (1 commit)
  18. 28 Jun, 2011 (2 commits)
    • x86, mtrr: use stop_machine APIs for doing MTRR rendezvous · 192d8857
      Committed by Suresh Siddha
      The MTRR rendezvous sequence was not implemented using stop_machine()
      before, as it gets called both from process context as well as from
      the cpu online paths (where the cpu has not yet come online and
      interrupts are disabled, etc.).
      
      Now that we have the new stop_machine_from_inactive_cpu() API, use it
      for the rendezvous during mtrr init of a logical processor that is
      coming online.

      For the rest (runtime MTRR modification, system boot, resume paths),
      use stop_machine() to implement the rendezvous sequence. This
      consolidates and cleans up the code, as in the sketch below.
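      A hedged sketch of the split (mtrr_rendezvous_handler and data are
      taken from context; the exact call sites and the cpu_callout_mask
      argument are assumptions):

          /* Online cpu changing MTRRs (runtime, boot, resume): */
          stop_machine(mtrr_rendezvous_handler, &data, cpu_online_mask);

          /* Logical cpu still coming online, not yet in
           * cpu_online_mask and running with irqs disabled: */
          stop_machine_from_inactive_cpu(mtrr_rendezvous_handler, &data,
                                         cpu_callout_mask);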
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182057.076997177@sbsiddha-MOBL3.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      192d8857
    • x86, mtrr: lock stop machine during MTRR rendezvous sequence · 6d3321e8
      Committed by Suresh Siddha
      The MTRR rendezvous sequence using stop_one_cpu_nowait() can
      potentially run in parallel with another system-wide rendezvous using
      stop_machine(). This can lead to deadlock: the order in which the
      works are queued can differ between cpus, so some cpus will be
      running the first rendezvous handler while others run the second,
      each set waiting for the other to join its system-wide rendezvous.
      
      The MTRR rendezvous sequence is not implemented using stop_machine()
      because it gets called both from process context as well as from the
      cpu online paths (where the cpu has not yet come online and
      interrupts are disabled, etc.), while stop_machine() works only with
      online cpus.
      
      For now, take the stop_machine mutex in the MTRR rendezvous sequence
      that gets called from an online cpu (here we are in process context
      and can sleep while taking the mutex). The MTRR rendezvous that gets
      triggered during cpu online doesn't need to take this stop_machine
      lock, as stop_machine() already ensures that no cpu hotplug is going
      on in parallel by doing get_online_cpus(). A sketch follows.
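      A minimal sketch of that asymmetry, with hypothetical
      lock_stop_machine()/unlock_stop_machine() helpers standing in for
      whatever this patch actually exposes from kernel/stop_machine.c:

          bool online = cpu_online(raw_smp_processor_id());

          if (online)
                  lock_stop_machine();    /* process context; may sleep */

          /* ... queue the MTRR rendezvous work on every cpu ... */

          if (online)
                  unlock_stop_machine();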
      
          TBD: Pursue a cleaner solution of extending the stop_machine()
               infrastructure to handle the case where the calling cpu is
               still not online and use this for MTRR rendezvous sequence.
      
      Fixes: https://bugzilla.novell.com/show_bug.cgi?id=672008
      Reported-by: Vadim Kotelnikov <vadimuzzz@inbox.ru>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182056.807230326@sbsiddha-MOBL3.sc.intel.com
      Cc: stable@kernel.org # 2.6.35+, backport a week or two after this gets more testing in mainline
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      6d3321e8
  19. 30 Mar, 2011 (1 commit)
    • x86, mtrr, pat: Fix one cpu getting out of sync during resume · 84ac7cdb
      Committed by Suresh Siddha
      On laptops with core i5/i7, there were reports that after resume,
      graphics workloads were performing poorly on one specific AP, while
      the other cpus were fine. This was observed specifically on a 32-bit
      kernel.

      Debugging showed that PAT init was not happening on that AP during
      resume, which contributed to the poor workload performance on that
      cpu.
      
      On this system, resume flow looked like this:
      
      1. BP starts the resume sequence and we reinit BP's MTRR's/PAT
         early on using mtrr_bp_restore()
      
      2. Resume sequence brings all AP's online
      
      3. Resume sequence now kicks off the MTRR reinit on all the AP's.
      
      4. For some reason, between points 2 and 3, we moved from the BP
         to one of the APs. My guess is that printk() during the resume
         sequence contributes to this. We don't see similar behavior
         with the 64-bit kernel, but there is no guarantee that at this
         point the remaining resume sequence (after the APs' bringup)
         has to happen on the BP.
      
      5. set_mtrr() assumed that we were still on the BP and skipped the
         MTRR/PAT init on that cpu (because of 1 above).

      6. But we were on an AP, so PAT was never reprogrammed on that
         cpu, leading to bad performance.
      
      Fix this by doing an unconditional mtrr_if->set_all() in set_mtrr()
      during MTRR/PAT init. This might be unnecessary if we are still
      running on the BP, but it does no harm and guarantees that after
      resume all cpus are in sync with respect to the MTRR/PAT registers.
      See the sketch below.
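      A hedged sketch of the change inside the set_mtrr() rendezvous
      handler (the reg == ~0U convention for "full MTRR/PAT init" is
      taken from context; the exact diff is assumed):

          if (reg != ~0U) {
                  /* Runtime update of a single MTRR register. */
                  mtrr_if->set(reg, base, size, type);
          } else {
                  /* Full init: program this cpu unconditionally instead
                   * of assuming we are still on the BP. */
                  mtrr_if->set_all();
          }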
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1301438292-28370-1-git-send-email-eric@anholt.net>
      Signed-off-by: Eric Anholt <eric@anholt.net>
      Tested-by: Keith Packard <keithp@keithp.com>
      Cc: stable@kernel.org	[v2.6.32+]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      84ac7cdb
  20. 24 Mar, 2011 (1 commit)
    • x86: Use syscore_ops instead of sysdev classes and sysdevs · f3c6ea1b
      Committed by Rafael J. Wysocki
      Some subsystems in the x86 tree need to carry out suspend/resume and
      shutdown operations with one CPU on-line and interrupts disabled and
      they define sysdev classes and sysdevs or sysdev drivers for this
      purpose.  This leads to unnecessarily complicated code and excessive
      memory usage, so switch them to using struct syscore_ops objects for
      this purpose instead.
      
      Generally, there are three categories of subsystems that use
      sysdevs for implementing PM operations: (1) subsystems whose
      suspend/resume callbacks ignore their arguments entirely (the
      majority), (2) subsystems whose suspend/resume callbacks use their
      struct sys_device argument, but don't really need to do that,
      because they can be implemented differently in an arguably simpler
      way (io_apic.c), and (3) subsystems whose suspend/resume callbacks
      use their struct sys_device argument, but the value of that argument
      is always the same and could be ignored (microcode_core.c).  In all
      of these cases the subsystems in question may be readily converted to
      using struct syscore_ops objects for power management and shutdown.
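      A hedged sketch of the conversion pattern for a category (1)
      subsystem (mtrr_save/mtrr_restore are assumed names for the
      existing suspend/resume routines):

          #include <linux/syscore_ops.h>

          static struct syscore_ops mtrr_syscore_ops = {
                  .suspend = mtrr_save,     /* int  (*suspend)(void) */
                  .resume  = mtrr_restore,  /* void (*resume)(void)  */
          };

          /* Callbacks run with one CPU online and irqs disabled; no
           * sysdev class or sysdev is needed anymore: */
          register_syscore_ops(&mtrr_syscore_ops);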
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      f3c6ea1b
  21. 18 Mar, 2011 (1 commit)
  22. 03 Feb, 2011 (1 commit)
    • x86, mtrr: Avoid MTRR reprogramming on BP during boot on UP platforms · f7448548
      Committed by Suresh Siddha
      Markus Kohn ran into a hard hang regression on an Acer Aspire 1310
      when acpi is enabled. git bisect identified the following commit as
      the one that introduced the boot regression.
      
      	commit d0af9eed
      	Author: Suresh Siddha <suresh.b.siddha@intel.com>
      	Date:   Wed Aug 19 18:05:36 2009 -0700
      
      	    x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init
      
      Because of the UP configuration of that platform,
      native_smp_prepare_cpus() bailed out (in smp_sanity_check()) before
      calling set_mtrr_aps_delayed_init().
      
      Further down the boot path, native_smp_cpus_done() calls the delayed
      MTRR initialization for the APs (mtrr_aps_init()) with
      mtrr_aps_delayed_init not set. This resulted in the boot processor
      reprogramming its MTRRs to the values seen at the start of the OS
      boot. While not ideal, this shouldn't have caused any side effects:
      the reprogramming of the MTRRs (set_mtrr_state(), called via
      set_mtrr()) checks whether the live register contents differ from
      what is being asked to write, and only does the actual write if they
      differ.
      
      The BP's mtrr state is read at the start of the OS boot, and
      typically nothing has changed by the time we ask to reprogram it on
      the BP again in the above scenario on a UP platform. So on a normal
      UP platform no reprogramming of the BP's MTRR MSRs happens and all
      is well.
      
      However, on this platform the bios seems to modify the fixed mtrr
      range registers between the start of the OS boot and the point where
      we double-check the live registers before reprogramming the BP's
      MTRR registers. And since the live registers were modified, we end
      up reprogramming the MTRRs to the state seen at the start of the OS
      boot.

      During ACPI initialization, something in the bios (probably an smi
      handler?) doesn't like this and causes a hard lockup.
      
      We didn't see this boot hang on this platform before commit
      d0af9eed, because back then only the APs (if any) would program
      their MTRRs to the values that the BP had at the start of the OS
      boot.
      
      Fix this issue by checking mtrr_aps_delayed_init before continuing
      further in mtrr_aps_init(). Now only the APs (if any) program their
      MTRRs to the BP values during boot, as in the sketch below.
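      A hedged sketch of the fix (mtrr_aps_init() and the set_mtrr(~0U,
      ...) "program everything" call are taken from context; the exact
      diff is assumed):

          void mtrr_aps_init(void)
          {
                  if (!use_intel())
                          return;

                  /* Bail out if delayed AP init was never requested,
                   * e.g. on UP platforms where smp_sanity_check()
                   * bailed early; this leaves the BP's MTRRs alone: */
                  if (!mtrr_aps_delayed_init)
                          return;

                  set_mtrr(~0U, 0, 0, 0);
                  mtrr_aps_delayed_init = false;
          }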
      
      Addresses https://bugzilla.novell.com/show_bug.cgi?id=623393
      
        [ By the way, this bios behavior of modifying MTRRs after the start
          of the OS boot is not common, and the kernel is not prepared to
          handle it well. Irrespective of this issue, during suspend/resume
          the linux kernel will try to reprogram the BP's MTRR values to the
          values seen at the start of the OS boot. So suspend/resume might
          already be broken on this platform for all linux kernel versions. ]
      Reported-and-bisected-by: Markus Kohn <jabber@gmx.org>
      Tested-by: Markus Kohn <jabber@gmx.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Thomas Renninger <trenn@novell.com>
      Cc: Rafael Wysocki <rjw@novell.com>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: stable@kernel.org # [v2.6.32+]
      LKML-Reference: <1296694975.4418.402.camel@sbsiddha-MOBL3.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f7448548
  23. 02 Oct, 2010 (1 commit)
    • x86, mtrr: Assume SYS_CFG[Tom2ForceMemTypeWB] exists on all future AMD CPUs · 3fdbf004
      Committed by Andreas Herrmann
      Instead of adapting the CPU family check in amd_special_default_mtrr()
      for each new CPU family, assume that all new AMD CPUs support the
      necessary bits in the SYS_CFG MSR.

      Tom2Enabled is architectural (defined in APM Vol. 2).
      Tom2ForceMemTypeWB is defined in all BKDGs starting with K8 NPT.
      In the pre-K8-NPT BKDG this bit is reserved (read as zero).
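      A hedged sketch of the check this relies on (MSR_K8_SYSCFG and the
      bit positions follow the BKDG; the exact amd_special_default_mtrr()
      diff is assumed):

          rdmsr(MSR_K8_SYSCFG, low, high);

          /* Tom2Enabled (bit 21) and Tom2ForceMemTypeWB (bit 22):
           * memory between 4GB and TOM2 is write-back regardless of
           * the MTRR settings. */
          if ((low & (1 << 21)) && (low & (1 << 22)))
                  return 1;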
      
      Without this adaptation, Linux would unnecessarily complain about bad
      MTRR settings on every new AMD CPU family, e.g.
      
      [    0.000000] WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing 4863MB of RAM.
      
      Cc: stable@kernel.org # .32.x, .35.x
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <20100930123235.GB20545@loge.amd.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      3fdbf004
  24. 11 Sep, 2010 (2 commits)
  25. 31 Jul, 2010 (1 commit)
    • x86, mtrr: Use stop machine context to rendezvous all the cpu's · 68f202e4
      Committed by Suresh Siddha
      Use the stop machine context rather than IPIs to rendezvous all the
      cpus for MTRR initialization that happens during cpu bringup or for
      MTRR modifications during runtime.
      
      This avoids deadlock scenarios (reported by Prarit) like:

      cpu A holds a read_lock (tasklist_lock, for example) with irqs
      enabled; cpu B waits for the same lock with irqs disabled using
      write_lock_irq; cpu C does set_mtrr() (during AP bringup, for
      example), which tries to rendezvous all the cpus using IPIs.

      C and A reach the rendezvous point and wait for B, while B is stuck
      forever waiting for the lock and thus never reaches the rendezvous
      point.
      
      Using stop cpu (run in the process context of per-cpu based keventd)
      to do this rendezvous avoids this deadlock scenario.

      Also make sure all the cpus are in the rendezvous handler before we
      proceed with the local_irq_save() on each cpu. Disabling irqs on all
      the cpus in lock step avoids other deadlock scenarios (for example,
      ones involving blocking smp_call_function's, etc.), as in the sketch
      below.
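      A hedged sketch of that lock-step barrier inside the rendezvous
      handler (the counter name is invented; only the shape follows the
      description above):

          /* Announce our arrival and spin until every cpu has too: */
          atomic_dec(&data->num_starting);
          while (atomic_read(&data->num_starting))
                  cpu_relax();

          /* Only now disable irqs, simultaneously on all cpus: */
          local_irq_save(flags);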
      
         [ This problem is very old. Marking -stable only for 2.6.35 as the
           stop_one_cpu_nowait() API is present only in 2.6.35. Any older
           kernel interested in this fix need to do some more work in backporting
           this patch. ]
      Reported-by: Prarit Bhargava <prarit@redhat.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1280515602.2682.10.camel@sbsiddha-MOBL3.sc.intel.com>
      Acked-by: Prarit Bhargava <prarit@redhat.com>
      Cc: stable@kernel.org	[2.6.35]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      68f202e4
  26. 21 Jul, 2010 (1 commit)
  27. 16 Jul, 2010 (1 commit)
  28. 30 Mar, 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare
      for this change by updating users of gfp and slab facilities to
      include those headers directly instead of assuming availability.
      As this conversion needs to touch a large number of source files,
      the following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.

      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order
        conforms to its surroundings.  It's put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered: alphabetical, Christmas tree, rev-Xmas-tree, or at the
        end if there doesn't seem to be any matching order.  (See the
        example after this list.)
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        an error message indicating which .h file needs to be added to
        the file.
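      A hedged example of the kind of edit the script makes (the file and
      its neighboring includes are invented for illustration):

          /* drivers/foo/bar.c calls kmalloc()/kfree() but previously got
           * slab.h only implicitly via percpu.h; add the direct include
           * in the core-kernel include block, in the existing order: */
           #include <linux/kernel.h>
           #include <linux/module.h>
          +#include <linux/slab.h>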
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some files didn't need the
         inclusion, some needed manual addition, and for others adding it
         to an implementation .h or embedding .c file was more
         appropriate.  This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was
         usually widely available and often used in preprocessor macros.
         Each slab.h inclusion directive was examined and added manually
         as necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and
         failures were fixed.  CONFIG_GCOV_KERNEL was turned off for all
         tests (as my distributed build env didn't work with gcov
         compiles) and a few more options had to be turned off depending
         on the arch to make things build (like ipr on powerpc/64, which
         failed due to a missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests
      in step 6, I'm fairly confident about the coverage of this
      conversion patch.  If there is a breakage, it's likely to be
      something in one of the arch headers, which should be easily
      discoverable on most builds of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  29. 06 Mar, 2010 (1 commit)
  30. 17 Feb, 2010 (1 commit)
  31. 16 Feb, 2010 (1 commit)
  32. 11 Feb, 2010 (2 commits)
  33. 04 Feb, 2010 (1 commit)
  34. 02 Feb, 2010 (1 commit)
    • x86, mtrr: Constify struct mtrr_ops · 3b9cfc0a
      Committed by Emese Revfy
      This is part of the ops structure constification
      effort started by Arjan van de Ven et al.
      
      Benefits of this constification:
      
       * prevents modification of data that is shared
         (referenced) by many other structure instances
         at runtime
      
       * detects/prevents accidental (but not intentional)
         modification attempts on archs that enforce
         read-only kernel data at runtime
      
       * potentially better optimized code, as the compiler
         can assume that the const data cannot be changed

       * the compiler/linker moves const data into .rodata
         and therefore excludes it from false sharing
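      A hedged sketch of the resulting pattern (the field names resemble
      the mtrr ops tables, but the exact members are assumptions):

          static const struct mtrr_ops generic_mtrr_ops = {
                  .vendor = X86_VENDOR_UNKNOWN,
                  .set    = generic_set_mtrr,
                  .get    = generic_get_mtrr,
          };

          /* Users now hold const pointers, so a stray write through
           * them fails to compile (and .rodata traps it at runtime): */
          static const struct mtrr_ops *mtrr_if;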
      Signed-off-by: Emese Revfy <re.emese@gmail.com>
      LKML-Reference: <4B65D712.3080804@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      3b9cfc0a