1. 10 Jul, 2012 (2 commits)
  2. 31 May, 2012 (1 commit)
  3. 29 Mar, 2012 (1 commit)
  4. 02 Mar, 2012 (1 commit)
  5. 05 Dec, 2011 (2 commits)
  6. 26 Aug, 2011 (1 commit)
  7. 02 Jul, 2011 (1 commit)
  8. 28 Jun, 2011 (2 commits)
    • x86, mtrr: use stop_machine APIs for doing MTRR rendezvous · 192d8857
      Committed by Suresh Siddha
      The MTRR rendezvous sequence was not implemented using stop_machine() before, as it
      gets called both from process context as well as from the cpu online path
      (where the cpu has not yet come online and interrupts are disabled, etc).
      
      Now that we have a new stop_machine_from_inactive_cpu() API, use it for
      rendezvous during mtrr init of a logical processor that is coming online.
      
      For the rest (runtime MTRR modification, system boot, resume paths), use
      stop_machine() to implement the rendezvous sequence. This consolidates and
      cleans up the code.
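      A minimal sketch of how the two paths might dispatch between the two APIs
      (the set_mtrr_data struct and the handler below are simplified stand-ins,
      not the exact kernel code):
      
        #include <linux/stop_machine.h>
        
        /* Simplified stand-in for the per-rendezvous payload. */
        struct set_mtrr_data {
                unsigned int  reg;
                unsigned long base, size;
        };
        
        static int mtrr_rendezvous_handler(void *info)
        {
                struct set_mtrr_data *data = info;
        
                /* program the MTRR register described by *data on this cpu */
                return 0;
        }
        
        /* runtime modification, boot and resume: every participant is online */
        static void set_mtrr(struct set_mtrr_data *data)
        {
                stop_machine(mtrr_rendezvous_handler, data, cpu_online_mask);
        }
        
        /* called on a logical cpu that is coming online and is not yet active */
        static void set_mtrr_from_inactive_cpu(struct set_mtrr_data *data)
        {
                stop_machine_from_inactive_cpu(mtrr_rendezvous_handler, data,
                                               cpu_callout_mask);
        }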
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182057.076997177@sbsiddha-MOBL3.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, mtrr: lock stop machine during MTRR rendezvous sequence · 6d3321e8
      Committed by Suresh Siddha
      The MTRR rendezvous sequence using stop_one_cpu_nowait() can potentially
      happen in parallel with another system-wide rendezvous using
      stop_machine(). This can lead to a deadlock: the order in which the
      works are queued can differ between cpus, so some cpus end up running
      the first rendezvous handler while others run the second, with each set
      waiting for the other to join its system-wide rendezvous.
      
      The MTRR rendezvous sequence is not implemented using stop_machine(), as it
      gets called both from process context as well as from the cpu online path
      (where the cpu has not yet come online and interrupts are disabled, etc),
      while stop_machine() works only with online cpus.
      
      For now, take the stop_machine mutex in the MTRR rendezvous sequence that
      gets called from an online cpu (here we are in process context
      and can safely sleep while taking the mutex). The MTRR rendezvous
      that gets triggered during cpu online doesn't need to take this stop_machine
      lock, as stop_machine() already ensures that no cpu hotplug is going on
      in parallel, by doing get_online_cpus().
      
          TBD: Pursue a cleaner solution of extending the stop_machine()
               infrastructure to handle the case where the calling cpu is
               still not online and use this for MTRR rendezvous sequence.
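      
      A rough sketch of the locking split (the mutex and the
      __set_mtrr_on_all_cpus() helper below are illustrative names, not the
      actual kernel symbols):
      
        #include <linux/mutex.h>
        
        /* Hypothetical helper: queues the rendezvous works on all cpus. */
        static void __set_mtrr_on_all_cpus(void *data);
        
        /* Illustrative stand-in for the lock that stop_machine() itself takes. */
        static DEFINE_MUTEX(mtrr_rendezvous_mutex);
        
        /* Runtime MTRR modification: process context on an online cpu. */
        static void set_mtrr(void *data)
        {
                mutex_lock(&mtrr_rendezvous_mutex);   /* may sleep; fine here */
                __set_mtrr_on_all_cpus(data);
                mutex_unlock(&mtrr_rendezvous_mutex);
        }
        
        /* MTRR init on a cpu that is coming online. */
        static void set_mtrr_from_cpu_online(void *data)
        {
                /*
                 * No lock needed: stop_machine() callers hold get_online_cpus(),
                 * so no system-wide rendezvous can run while a cpu is being
                 * brought online.
                 */
                __set_mtrr_on_all_cpus(data);
        }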
      
      fixes: https://bugzilla.novell.com/show_bug.cgi?id=672008
      Reported-by: Vadim Kotelnikov <vadimuzzz@inbox.ru>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182056.807230326@sbsiddha-MOBL3.sc.intel.com
      Cc: stable@kernel.org # 2.6.35+, backport a week or two after this gets more testing in mainline
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  9. 30 Mar, 2011 (1 commit)
    • x86, mtrr, pat: Fix one cpu getting out of sync during resume · 84ac7cdb
      Committed by Suresh Siddha
      On laptops with core i5/i7, there were reports that after resume
      graphics workloads were performing poorly on a specific AP, while
      the other cpus were ok. This was observed specifically on a 32-bit
      kernel.
      
      Debugging showed that the PAT init was not happening on that AP
      during resume, which contributed to the poor workload performance
      on that cpu.
      
      On this system, resume flow looked like this:
      
      1. BP starts the resume sequence and we reinit BP's MTRR's/PAT
         early on using mtrr_bp_restore()
      
      2. Resume sequence brings all AP's online
      
      3. Resume sequence now kicks off the MTRR reinit on all the AP's.
      
      4. For some reason, between points 2 and 3, execution moved from the BP
         to one of the AP's. My guess is that printk() during the resume
         sequence contributes to this. We don't see similar behavior with
         the 64-bit kernel, but there is no guarantee that at this point the
         remaining resume sequence (after the AP bringup) has to happen on
         the BP.
      
      5. set_mtrr() assumed that we were still on the BP and skipped the
         MTRR/PAT init on that cpu (because of step 1 above).
      
      6. But we were on an AP, so PAT was never reprogrammed on that cpu,
         which led to the bad performance.
      
      Fix this by doing an unconditional mtrr_if->set_all() in set_mtrr()
      during MTRR/PAT init. This might be unnecessary if we are still
      running on the BP, but it does no harm and guarantees that after
      resume all the cpus will be in sync with respect to the
      MTRR/PAT registers.
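      A sketch of the resulting rendezvous-handler logic (fragment only; field
      names such as smp_reg are assumed from context, not quoted from the patch):
      
        /* Runs on every cpu during the set_mtrr() rendezvous. */
        if (data->smp_reg != ~0U) {
                /* runtime update of a single MTRR register */
                mtrr_if->set(data->smp_reg, data->smp_base,
                             data->smp_size, data->smp_type);
        } else {
                /*
                 * MTRR/PAT init (smp_reg == ~0U): reload the complete
                 * MTRR/PAT state unconditionally instead of assuming the
                 * caller is the BP that was already initialized earlier.
                 */
                mtrr_if->set_all();
        }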
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1301438292-28370-1-git-send-email-eric@anholt.net>
      Signed-off-by: Eric Anholt <eric@anholt.net>
      Tested-by: Keith Packard <keithp@keithp.com>
      Cc: stable@kernel.org [v2.6.32+]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  10. 24 Mar, 2011 (1 commit)
    • x86: Use syscore_ops instead of sysdev classes and sysdevs · f3c6ea1b
      Committed by Rafael J. Wysocki
      Some subsystems in the x86 tree need to carry out suspend/resume and
      shutdown operations with one CPU on-line and interrupts disabled and
      they define sysdev classes and sysdevs or sysdev drivers for this
      purpose.  This leads to unnecessarily complicated code and excessive
      memory usage, so switch them to using struct syscore_ops objects for
      this purpose instead.
      
      Generally, there are three categories of subsystems that use
      sysdevs for implementing PM operations: (1) subsystems whose
      suspend/resume callbacks ignore their arguments entirely (the
      majority), (2) subsystems whose suspend/resume callbacks use their
      struct sys_device argument, but don't really need to do that,
      because they can be implemented differently in an arguably simpler
      way (io_apic.c), and (3) subsystems whose suspend/resume callbacks
      use their struct sys_device argument, but the value of that argument
      is always the same and could be ignored (microcode_core.c).  In all
      of these cases the subsystems in question may be readily converted to
      using struct syscore_ops objects for power management and shutdown.
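      
      An illustrative shape of the conversion (the mtrr_syscore_* names below
      are made up for the example; syscore callbacks take no arguments and run
      with one CPU online and interrupts disabled):
      
        #include <linux/init.h>
        #include <linux/syscore_ops.h>
        
        static int mtrr_syscore_suspend(void)
        {
                /* save the state the old sysdev ->suspend() callback saved */
                return 0;
        }
        
        static void mtrr_syscore_resume(void)
        {
                /* restore the state the old sysdev ->resume() callback restored */
        }
        
        static struct syscore_ops mtrr_syscore_ops = {
                .suspend = mtrr_syscore_suspend,
                .resume  = mtrr_syscore_resume,
        };
        
        static int __init mtrr_register_syscore(void)
        {
                /* replaces the sysdev_class_register()/sysdev_register() boilerplate */
                register_syscore_ops(&mtrr_syscore_ops);
                return 0;
        }
        device_initcall(mtrr_register_syscore);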
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
  11. 18 Mar, 2011 (1 commit)
  12. 03 Feb, 2011 (1 commit)
    • x86, mtrr: Avoid MTRR reprogramming on BP during boot on UP platforms · f7448548
      Committed by Suresh Siddha
      Markus Kohn ran into a hard hang regression on an Acer Aspire
      1310 when ACPI is enabled. Git bisect identified the following
      commit as the bad one that introduced the boot regression.
      
      	commit d0af9eed
      	Author: Suresh Siddha <suresh.b.siddha@intel.com>
      	Date:   Wed Aug 19 18:05:36 2009 -0700
      
      	    x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init
      
      Because of the UP configuration of that platform,
      native_smp_prepare_cpus() bailed out (in smp_sanity_check())
      before calling set_mtrr_aps_delayed_init().
      
      Further down the boot path, native_smp_cpus_done() will call the
      delayed MTRR initialization for the AP's (mtrr_aps_init()) with
      mtrr_aps_delayed_init not set. This resulted in the boot
      processor reprogramming its MTRR's to the values seen during the
      start of the OS boot. While this is not needed ideally, it
      shouldn't have caused any side effects, because the
      reprogramming of the MTRR's (set_mtrr_state(), which gets called via
      set_mtrr()) checks whether the live register contents are
      different from what is being asked to write, and does the actual
      write only if they are different.
      
      BP's mtrr state is read during the start of the OS boot and
      typically nothing would have changed when we ask to reprogram it
      on BP again because of the above scenario on an UP platform. So
      on a normal UP platform no reprogramming of BP MTRR MSR's
      happens and all is well.
      
      However, on this platform, bios seems to be modifying the fixed
      mtrr range registers between the start of OS boot and when we
      double check the live registers for reprogramming BP MTRR
      registers. And as the live registers are modified, we end up
      reprogramming the MTRR's to the state seen during the start of
      the OS boot.
      
      During ACPI initialization, something in the bios (probably an smi
      handler?) doesn't like this fact, and it results in a hard lockup.
      
      We didn't see this boot hang issue on this platform before the
      commit d0af9eed, because only
      the AP's (if any) would program their MTRR's to the value that the BP
      had at the start of the OS boot.
      
      Fix this issue by checking mtrr_aps_delayed_init before
      continuing further in mtrr_aps_init(). Now, only the AP's (if
      any) program their MTRR's to the BP values during boot.
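      
      The fix boils down to an early return in mtrr_aps_init(); a sketch of the
      idea (surrounding code condensed):
      
        void mtrr_aps_init(void)
        {
                if (!use_intel())
                        return;
        
                /*
                 * If the delayed AP init was never armed (e.g. the UP boot
                 * path skipped set_mtrr_aps_delayed_init()), do nothing, so
                 * the BP's MTRRs are not rewritten to the boot-time snapshot.
                 */
                if (!mtrr_aps_delayed_init)
                        return;
        
                set_mtrr(~0U, 0, 0, 0);
                mtrr_aps_delayed_init = false;
        }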
      
      Addresses https://bugzilla.novell.com/show_bug.cgi?id=623393
      
        [ By the way, this behavior of the bios modifying MTRR's after the start
          of the OS boot is not common, and the kernel is not prepared to
          handle this situation well. Irrespective of this issue, during
          suspend/resume the Linux kernel will try to reprogram the BP's MTRR values
          to the values seen during the start of the OS boot. So suspend/resume might
          already be broken on this platform for all Linux kernel versions. ]
      Reported-and-bisected-by: Markus Kohn <jabber@gmx.org>
      Tested-by: Markus Kohn <jabber@gmx.org>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Thomas Renninger <trenn@novell.com>
      Cc: Rafael Wysocki <rjw@novell.com>
      Cc: Venkatesh Pallipadi <venki@google.com>
      Cc: stable@kernel.org # [v2.6.32+]
      LKML-Reference: <1296694975.4418.402.camel@sbsiddha-MOBL3.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  13. 02 Oct, 2010 (1 commit)
    • x86, mtrr: Assume SYS_CFG[Tom2ForceMemTypeWB] exists on all future AMD CPUs · 3fdbf004
      Committed by Andreas Herrmann
      Instead of adapting the CPU family check in amd_special_default_mtrr()
      for each new CPU family, assume that all new AMD CPUs support the
      necessary bits in the SYS_CFG MSR.
      
      Tom2Enabled is architectural (defined in APM Vol. 2).
      Tom2ForceMemTypeWB is defined in all BKDGs starting with K8 NPT.
      In the pre-K8-NPT BKDG this bit is reserved (reads as zero).
      
      Without this adaptation, Linux would unnecessarily complain about bad MTRR
      settings on every new AMD CPU family, e.g.:
      
      [    0.000000] WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing 4863MB of RAM.
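      
      A sketch of the relaxed check (close to, but not quoted from, the helper
      in mtrr/cleanup.c; the bit masks Tom2Enabled/Tom2ForceMemTypeWB are
      defined there):
      
        static int amd_special_default_mtrr(void)
        {
                u32 l, h;
        
                if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
                        return 0;
                /* open-ended: assume every family >= K8 (0xf) has the SYS_CFG bits */
                if (boot_cpu_data.x86 < 0xf)
                        return 0;
                /* in case a hypervisor does not pass SYS_CFG through */
                if (rdmsr_safe(MSR_K8_SYSCFG, &l, &h) < 0)
                        return 0;
                /* memory between 4GB and top-of-memory-2 is forced WB */
                if ((l & (Tom2Enabled | Tom2ForceMemTypeWB)) ==
                    (Tom2Enabled | Tom2ForceMemTypeWB))
                        return 1;
        
                return 0;
        }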
      
      Cc: stable@kernel.org # .32.x, .35.x
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <20100930123235.GB20545@loge.amd.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  14. 11 Sep, 2010 (2 commits)
  15. 31 Jul, 2010 (1 commit)
    • x86, mtrr: Use stop machine context to rendezvous all the cpu's · 68f202e4
      Committed by Suresh Siddha
      Use the stop-machine context rather than IPIs to rendezvous all the cpus for
      the MTRR initialization that happens during cpu bringup and for MTRR
      modifications during runtime.
      
      This avoids a deadlock scenario (reported by Prarit) like the following:
      
      cpu A holds a read_lock (tasklist_lock for example) with irqs enabled
      cpu B waits for the same lock with irqs disabled using write_lock_irq
      cpu C doing set_mtrr() (during AP bringup for example), which will try to
      rendezvous all the cpus using IPI's
      
      This results in C and A reaching the rendezvous point and waiting
      for B. B is stuck forever waiting for the lock and thus never
      reaches the rendezvous point.
      
      Using stop-cpu (run in the process context of the per-cpu keventd) to do
      this rendezvous avoids this deadlock scenario.
      
      Also make sure all the cpus are in the rendezvous handler before we proceed
      with the local_irq_save() on each cpu. This lock-step disabling of irqs on all
      the cpus avoids other deadlock scenarios (for example ones involving
      blocking smp_call_function() calls, etc).
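      
      A compressed sketch of the lock-step rendezvous (struct fields and the
      handler below are simplified; in the real code the handler is queued on
      each other online cpu via stop_one_cpu_nowait()):
      
        #include <linux/atomic.h>
        #include <linux/irqflags.h>
        #include <asm/processor.h>
        
        /* Simplified rendezvous bookkeeping. */
        struct set_mtrr_data {
                atomic_t      count_start;   /* cpus not yet at the rendezvous */
                atomic_t      count_done;    /* cpus not yet finished          */
                unsigned int  smp_reg;
                unsigned long smp_base, smp_size;
        };
        
        static int mtrr_work_handler(void *info)
        {
                struct set_mtrr_data *data = info;
                unsigned long flags;
        
                /* wait until every cpu has entered the handler ... */
                atomic_dec(&data->count_start);
                while (atomic_read(&data->count_start))
                        cpu_relax();
        
                /* ... only then disable irqs in lock step and program the MTRRs */
                local_irq_save(flags);
                /* write the MTRR register(s) described by *data here */
                local_irq_restore(flags);
        
                atomic_dec(&data->count_done);
                return 0;
        }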
      
         [ This problem is very old. Marking -stable only for 2.6.35 as the
           stop_one_cpu_nowait() API is present only in 2.6.35. Any older
           kernel interested in this fix needs to do some more work to backport
           this patch. ]
      Reported-by: Prarit Bhargava <prarit@redhat.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      LKML-Reference: <1280515602.2682.10.camel@sbsiddha-MOBL3.sc.intel.com>
      Acked-by: Prarit Bhargava <prarit@redhat.com>
      Cc: stable@kernel.org [2.6.35]
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  16. 21 Jul, 2010 (1 commit)
  17. 16 Jul, 2010 (1 commit)
  18. 30 Mar, 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Committed by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
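      
      As an illustration (the file contents and function below are invented for
      the example), a converted .c file ends up with an explicit include
      instead of relying on the header leaking in through percpu.h:
      
        /* A file that only uses gfp flags now includes gfp.h itself: */
        #include <linux/gfp.h>
        
        /* A file that calls into the slab allocator now includes slab.h itself: */
        #include <linux/slab.h>
        
        static void *grab_buffer(size_t len)
        {
                /* kmalloc() and GFP_KERNEL are no longer picked up implicitly */
                return kmalloc(len, GFP_KERNEL);
        }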
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or the embedding .c file was more appropriate.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the tests in step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  19. 06 Mar, 2010 (1 commit)
  20. 17 Feb, 2010 (1 commit)
  21. 16 Feb, 2010 (1 commit)
  22. 11 Feb, 2010 (2 commits)
  23. 04 Feb, 2010 (1 commit)
  24. 02 Feb, 2010 (1 commit)
    • x86, mtrr: Constify struct mtrr_ops · 3b9cfc0a
      Committed by Emese Revfy
      This is part of the ops structure constification
      effort started by Arjan van de Ven et al.
      
      Benefits of this constification:
      
       * prevents modification of data that is shared
         (referenced) by many other structure instances
         at runtime
      
       * detects/prevents accidental (but not intentional)
         modification attempts on archs that enforce
         read-only kernel data at runtime
      
       * potentially better optimized code as the compiler
         can assume that the const data cannot be changed
      
       * the compiler/linker moves const data into .rodata
         and therefore excludes it from false sharing
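      
      A minimal sketch of the shape of the change (struct trimmed to two fields
      for the example; the real struct mtrr_ops carries the full set of vendor
      callbacks):
      
        /* Trimmed-down version of the ops table for illustration only. */
        struct mtrr_ops {
                u32  vendor;
                void (*set_all)(void);
        };
        
        static void generic_set_all(void)
        {
                /* reload the complete MTRR/PAT state on this cpu */
        }
        
        /* const: the instance lands in .rodata and cannot be modified at runtime */
        static const struct mtrr_ops generic_mtrr_ops = {
                .vendor  = X86_VENDOR_UNKNOWN,
                .set_all = generic_set_all,
        };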
      Signed-off-by: Emese Revfy <re.emese@gmail.com>
      LKML-Reference: <4B65D712.3080804@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  25. 16 Dec, 2009 (1 commit)
    • tree-wide: convert open calls to remove spaces to skip_spaces() lib function · e7d2860b
      Committed by André Goddard Rosa
      Makes use of skip_spaces() defined in lib/string.c for removing leading
      spaces from strings all over the tree.
      
      It decreases lib.a code size by 47 bytes and reuses the function tree-wide:
         text    data     bss     dec     hex filename
        64688     584     592   65864   10148 (TOTALS-BEFORE)
        64641     584     592   65817   10119 (TOTALS-AFTER)
      
      Also, while at it, if we see (*str && isspace(*str)), we can safely
      remove the first condition (*str), as the second one (isspace(*str))
      also evaluates to 0 whenever *str == 0, making it redundant. In other
      words, "a char equal to zero is never a space".
      
      Julia Lawall tried the semantic patch (http://coccinelle.lip6.fr) below
      and found occurrences of this pattern in 3 more files:
          drivers/leds/led-class.c
          drivers/leds/ledtrig-timer.c
          drivers/video/output.c
      
      @@
      expression str;
      @@
      
      ( // ignore skip_spaces cases
      while (*str &&  isspace(*str)) { \(str++;\|++str;\) }
      |
      - *str &&
      isspace(*str)
      )
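      
      As an illustration of the conversion (the helper names below are made up;
      skip_spaces() itself is declared in linux/string.h):
      
        #include <linux/ctype.h>    /* isspace()     */
        #include <linux/string.h>   /* skip_spaces() */
        
        /* Open-coded form this patch replaces; the "*str &&" test is redundant. */
        static char *trim_leading_old(char *str)
        {
                while (*str && isspace(*str))
                        str++;
                return str;
        }
        
        /* Converted form. */
        static char *trim_leading_new(char *str)
        {
                return skip_spaces(str);
        }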
      Signed-off-by: André Goddard Rosa <andre.goddard@gmail.com>
      Cc: Julia Lawall <julia@diku.dk>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Richard Purdie <rpurdie@rpsys.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Cc: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
      Cc: David Howells <dhowells@redhat.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: Samuel Ortiz <samuel@sortiz.org>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Takashi Iwai <tiwai@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  26. 25 Nov, 2009 (1 commit)
  27. 18 Nov, 2009 (1 commit)
  28. 02 Nov, 2009 (1 commit)
  29. 03 Oct, 2009 (1 commit)
    • x86: Simplify bound checks in the MTRR code · 11879ba5
      Committed by Arjan van de Ven
      The current bound checks for copy_from_user in the MTRR driver are
      not as obvious as they could be, and gcc agrees with that.
      
      This patch simplifies the boundary checks to the point that gcc can
      now prove to itself that the copy_from_user() is never going past
      its bounds.
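      An illustrative before/after of such a bound check (fragment only;
      LINE_SIZE stands for the size of the destination buffer, and the variable
      names are assumptions):
      
        /* before: gcc cannot easily prove that len - 1 stays within line[] */
        if (!len)
                return -EINVAL;
        if (len > LINE_SIZE)
                len = LINE_SIZE;
        if (copy_from_user(line, buf, len - 1))
                return -EFAULT;
        
        /* after: the clamp makes the upper bound explicit to the compiler */
        len = min_t(size_t, len, LINE_SIZE - 1);
        if (copy_from_user(line, buf, len))
                return -EFAULT;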
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <20090926205150.30797709@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  30. 21 Sep, 2009 (1 commit)
  31. 22 Aug, 2009 (2 commits)
    • x86, mtrr: make mtrr_aps_delayed_init static bool · 5400743d
      Committed by H. Peter Anvin
      mtrr_aps_delayed_init was declared u32 and made global, but it only
      ever takes boolean values and is only ever used in
      arch/x86/kernel/cpu/mtrr/main.c.  Declare it "static bool" and remove
      external references.
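      The end result looks roughly like this (sketch, not the verbatim diff):
      
        /* previously: u32, declared extern in mtrr.h */
        static bool mtrr_aps_delayed_init;
        
        void set_mtrr_aps_delayed_init(void)
        {
                mtrr_aps_delayed_init = true;
        }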
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
    • x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init · d0af9eed
      Committed by Suresh Siddha
      The SDM Vol 3a section titled "MTRR considerations in MP systems" specifies
      the need for synchronizing the logical cpus while initializing/updating
      MTRRs.
      
      Currently the Linux kernel does the synchronization of all cpus only when
      a single MTRR register is programmed/updated. During an AP online
      (during boot/cpu-online/resume), where we initialize all the MTRR/PAT registers,
      we don't follow this synchronization algorithm.
      
      This can lead to scenarios where, during a dynamic cpu online, that logical cpu
      is initializing MTRR/PAT with the cache disabled (cr0.cd=1) etc, while its
      logical HT sibling continues to run (also with the cache disabled because of
      cr0.cd=1 on its sibling).
      
      Starting from Westmere, VMX transitions with cr0.cd=1 don't work properly
      (because of some VMX performance optimizations), and the above scenario
      (with one logical cpu doing VMX activity and another logical cpu coming online)
      can result in a system crash.
      
      Fix the MTRR initialization by doing a rendezvous of all the cpus. During
      boot and resume, we delay the MTRR/PAT init for the APs till all the
      logical cpus have come online, and the rendezvous process at the end of AP
      bringup initializes the MTRR/PAT for all APs.
      
      For a dynamic single cpu online, we synchronize all the logical cpus and
      do the MTRR/PAT init on the AP that is coming online.
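      
      An illustrative flow of the delayed init (the hook names smp_prepare,
      smp_done and cpu_online_hook below are placeholders for the real smpboot
      callbacks; set_mtrr_aps_delayed_init(), mtrr_aps_init() and mtrr_ap_init()
      are the MTRR-side entry points):
      
        /* Boot / resume: defer per-AP init, then do one rendezvous at the end. */
        void smp_prepare(void)
        {
                set_mtrr_aps_delayed_init();    /* APs skip their MTRR/PAT init for now */
        }
        
        void smp_done(void)
        {
                mtrr_aps_init();                /* rendezvous: all cpus reload MTRR/PAT */
        }
        
        /* Dynamic single-cpu online: synchronize right away. */
        void cpu_online_hook(void)
        {
                mtrr_ap_init();                 /* rendezvous with the already-online cpus */
        }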
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  32. 05 Jul, 2009 (1 commit)
    • x86: Further clean up of mtrr/generic.c · e3d0e692
      Committed by Ingo Molnar
      Yinghai noticed that I defined BIOS_BUG_MSG but added no
      usage for it. The usage is to clean up this turd in generic.c:
      
      			printk(KERN_WARNING "WARNING: BIOS bug: VAR MTRR %d "
      				"contains strange UC entry under 1M, check "
      				"with your system vendor!\n", i);
      
      Breaking printk lines in the middle looks ugly, is hard to read,
      and breaks 'git grep'. Use BIOS_BUG_MSG instead.
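      
      The cleaned-up form looks roughly like this (the macro text is
      approximated from the warning above):
      
        /* near the top of generic.c, next to the other definitions */
        #define BIOS_BUG_MSG KERN_WARNING \
                "WARNING: BIOS bug: VAR MTRR %d contains strange UC entry under 1M, check with your system vendor!\n"
        
        /* and at the call site: */
                printk(BIOS_BUG_MSG, i);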
      
      Also complete the moving of structure definitions and variables
      to the top of the file.
      Reported-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <20090703164225.GA21447@elte.hu>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  33. 04 Jul, 2009 (2 commits)
    • x86: Clean up mtrr/main.c · dbd51be0
      Committed by Jaswinder Singh Rajput
      Fix the following trivial style problems:
      
        ERROR: trailing whitespace X 25
        WARNING: Use #include <linux/uaccess.h> instead of <asm/uaccess.h>
        WARNING: Use #include <linux/kvm_para.h> instead of <asm/kvm_para.h>
        ERROR: do not initialise externals to 0 or NULL X 2
        ERROR: "foo * bar" should be "foo *bar" X 5
        ERROR: do not use assignment in if condition X 2
        WARNING: line over 80 characters X 8
        ERROR: return is not a function, parentheses are not required
        WARNING: braces {} are not necessary for any arm of this statement
        ERROR: space required before the open parenthesis '(' X 2
        ERROR: open brace '{' following function declarations go on the next line
        ERROR: space required after that ',' (ctx:VxV) X 8
        ERROR: space required before the open parenthesis '(' X 3
        ERROR: else should follow close brace '}'
        WARNING: space prohibited between function name and open parenthesis '('
        WARNING: EXPORT_SYMBOL(foo); should immediately follow its function/variable X 2
      
      Also use pr_debug and pr_warning where possible.
      
      total: 50 errors, 14 warnings
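      
      Typical shape of these conversions (the include swap and the message
      strings below are illustrative, not quoted from the patch):
      
        #include <linux/uaccess.h>      /* instead of <asm/uaccess.h>  */
        #include <linux/kvm_para.h>     /* instead of <asm/kvm_para.h> */
        
        /* printk(KERN_DEBUG ...) / printk(KERN_WARNING ...) become: */
        pr_debug("mtrr: no MTRR for %lx000,%lx000 found\n", base, size);
        pr_warning("mtrr: type mismatch for %lx000,%lx000\n", base, size);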
      
      arch/x86/kernel/cpu/mtrr/main.o:
      
         text	   data	    bss	    dec	    hex	filename
         3668	    116	   4156	   7940	   1f04	main.o.before
         3668	    116	   4156	   7940	   1f04	main.o.after
      
      md5:
         e01af2fd28deef77c8d01e71acfbd365  main.o.before.asm
         e01af2fd28deef77c8d01e71acfbd365  main.o.after.asm
      Suggested-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <20090703164225.GA21447@elte.hu>
      Cc: Avi Kivity <avi@redhat.com> # Avi, please have a look at the kvm_para.h bit
      [ More cleanups ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Clean up mtrr/state.c · 09b22c85
      Committed by Jaswinder Singh Rajput
      Fix:
      
        WARNING: Use #include <linux/io.h> instead of <asm/io.h>
        WARNING: line over 80 characters X 4
      
      arch/x86/kernel/cpu/mtrr/state.o:
      
         text	   data	    bss	    dec	    hex	filename
          864	      0	      0	    864	    360	state.o.before
          864	      0	      0	    864	    360	state.o.after
      
      md5:
         c5c4364b9aeac74d70111e1e49667a2c  state.o.before.asm
         c5c4364b9aeac74d70111e1e49667a2c  state.o.after.asm
      Suggested-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <20090703164225.GA21447@elte.hu>
      [ More cleanups ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>