1. 12 Nov 2010, 2 commits
  2. 09 Nov 2010, 1 commit
    • x86/mrst: Add SFI platform device parsing code · 1da4b1c6
      Feng Tang committed
      SFI provides a series of tables. These describe the platform devices present,
      including SPI and I²C devices, various sensors, keypads and other glue, as
      well as interfaces provided via the SCU IPC mechanism (intel_scu_ipc.c).
      
      This patch is a merge of the core elements and relevant fixes from the
      Intel development code by Feng, Alek and myself into a single coherent
      patch for upstream submission.
      
      It provides the needed infrastructure to register I2C, SPI and platform
      devices described by the tables, as well as handlers for some of the
      hardware already supported in the kernel. The 0.8 firmware also provides
      GPIO tables.
      
      Devices are created at boot time or, if they are SCU dependent, at the
      point an SCU is discovered. The existing Linux device mechanisms then
      handle the device binding. At an abstract level this is an SFI to Linux
      device translator.
      
      Device/platform-specific setup/glue lives in this file. This is done so
      that the drivers for the generic I²C and SPI bus devices remain
      cross-platform, as they should.
      
      (Updated from RFC version to correct the emc1403 name used by the firmware
       and a wrongly used #define)
      Signed-off-by: Alek Du <alek.du@linux.intel.com>
      LKML-Reference: <20101109112158.20013.6158.stgit@localhost.localdomain>
      [Clean ups, removal of 0.7 support]
      Signed-off-by: Feng Tang <feng.tang@linux.intel.com>
      [Clean ups]
      Signed-off-by: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1da4b1c6
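      To illustrate the registration path this commit describes, here is a
      minimal, hedged sketch of handing an I2C device parsed from an SFI table
      to the driver core. The function and parameter names are illustrative
      assumptions, not the identifiers used by the actual mrst code:

          #include <linux/i2c.h>
          #include <linux/string.h>

          /* Sketch only: register one I2C device discovered in an SFI table so
           * the normal Linux driver model can bind a driver to it later. */
          static void __init sfi_sketch_add_i2c_dev(int bus, const char *name,
                                                    unsigned short addr,
                                                    void *pdata)
          {
                  struct i2c_board_info info = { };

                  strlcpy(info.type, name, sizeof(info.type));
                  info.addr = addr;
                  info.platform_data = pdata;

                  /* Queue the device; the I2C core instantiates it when the
                   * matching adapter (bus number) is registered. */
                  i2c_register_board_info(bus, &info, 1);
          }

      SPI devices follow the same pattern with spi_register_board_info(), and
      plain platform devices with platform_device_register().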
  3. 21 Oct 2010, 1 commit
  4. 19 Oct 2010, 2 commits
  5. 18 Oct 2010, 1 commit
  6. 16 Oct 2010, 1 commit
  7. 15 Oct 2010, 3 commits
    • x86, olpc: XO-1 uses/depends on PCI · 9e9006e9
      Randy Dunlap committed
      olpc-xo1 uses pci_*() interfaces, so it should depend on PCI.
      
      Otherwise we get build failures like:
      
       arch/x86/kernel/olpc-xo1.c:65: error: implicit declaration of function 'pci_enable_device_io'
       arch/x86/kernel/olpc-xo1.c:71: error: implicit declaration of function 'pci_request_region'
       arch/x86/kernel/olpc-xo1.c:80: error: implicit declaration of function 'pci_release_region'
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Acked-by: Daniel Drake <dsd@laptop.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      LKML-Reference: <20101014101313.adf7eb2a.randy.dunlap@oracle.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9e9006e9
    • ftrace: Rename config option HAVE_C_MCOUNT_RECORD to HAVE_C_RECORDMCOUNT · cf4db259
      Steven Rostedt committed
      The config option used by archs to let the build system know that the C
      version of recordmcount works for said arch is currently called
      HAVE_C_MCOUNT_RECORD, which enables BUILD_C_RECORDMCOUNT. To be more
      consistent with the name that all archs may use, it has been renamed to
      HAVE_C_RECORDMCOUNT. This will be less confusing since we are building a
      C recordmcount and not an mcount_record.
      Suggested-by: Ingo Molnar <mingo@elte.hu>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: linux-kbuild@vger.kernel.org
      Cc: John Reiser <jreiser@bitwagon.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      cf4db259
    • ftrace/x86: Add support for C version of recordmcount · 72441cb1
      Steven Rostedt committed
      This patch adds support for the C version of recordmcount; compile times
      show a ~12% improvement.
      
      After verifying this works, other archs can add:
      
       HAVE_C_MCOUNT_RECORD
      
      to their Kconfig and they will use the C version of recordmcount
      instead of the perl version.
      
      Cc: <linux-arch@vger.kernel.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: linux-kbuild@vger.kernel.org
      Cc: John Reiser <jreiser@bitwagon.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      72441cb1
  8. 14 Oct 2010, 2 commits
  9. 13 Oct 2010, 1 commit
    • x86, olpc: Add XO-1 poweroff support · bf1ebf00
      Daniel Drake committed
      Add a pm_power_off handler for the OLPC XO-1 laptop.
      
      The driver can be built modular and follows the behaviour of the
      APM driver, setting pm_power_off to NULL on unload. However, the
      ability to unload the module will probably be removed (with a simple
      __module_get(THIS_MODULE)) if/when XO-1 suspend/resume support is
      added to this file at a later date.
      Signed-off-by: Daniel Drake <dsd@laptop.org>
      LKML-Reference: <20101010094032.9AE669D401B@zog.reactivated.net>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      bf1ebf00
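      A hedged sketch of the pm_power_off pattern this commit describes; the
      function names are illustrative and the real driver's hardware pokes are
      omitted:

          #include <linux/module.h>
          #include <linux/pm.h>

          /* Sketch only: the platform-specific power-off logic would go here. */
          static void xo1_sketch_power_off(void)
          {
                  /* poke the XO-1's power-management hardware */
          }

          static int __init xo1_sketch_init(void)
          {
                  pm_power_off = xo1_sketch_power_off;
                  return 0;
          }

          static void __exit xo1_sketch_exit(void)
          {
                  /* mirror the APM driver's behaviour on unload */
                  pm_power_off = NULL;
          }

          module_init(xo1_sketch_init);
          module_exit(xo1_sketch_exit);
          MODULE_LICENSE("GPL");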
  10. 12 Oct 2010, 1 commit
  11. 04 Oct 2010, 1 commit
  12. 24 Sep 2010, 2 commits
  13. 23 Sep 2010, 2 commits
  14. 21 Sep 2010, 1 commit
  15. 20 Sep 2010, 1 commit
  16. 28 Aug 2010, 2 commits
    • x86: Remove old bootmem code · 774ea0bc
      Yinghai Lu committed
      Requested by Ingo, Thomas and HPA.
      
      The old bootmem code is no longer necessary, and the transition is
      complete.  Remove it.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      774ea0bc
    • x86: Use memblock to replace early_res · 72d7c3b3
      Yinghai Lu committed
      1. Replace find_e820_area with memblock_find_in_range.
      2. Replace reserve_early with memblock_x86_reserve_range.
      3. Replace free_early with memblock_x86_free_range.
      4. NO_BOOTMEM will switch to using memblock too.
      5. Keep the _e820/_early wrappers in this patch; a following patch will
         replace them all.
      6. Because memblock_x86_free_range supports partial free, we can remove
         some special-case handling.
      7. Make sure that memblock_find_in_range() is called after
         memblock_x86_fill(), so adjust some calls to happen later in
         setup.c::setup_arch() -- corruption_check and mptable_update.
      
      -v2: Move reserve_brk() earlier, before fill_memblock_area(), to avoid an
          overlap between brk and memblock_find_in_range(). That could happen
          when there are more than 128 RAM entries in the E820 table and
          memblock_x86_fill() uses memblock_find_in_range() to find a new
          place for the memblock.memory.region array. We also no longer need
          extend_brk() after fill_memblock_area().
      -v3: Move find_smp_config earlier, to make sure memblock_find_in_range()
          does not pick the wrong place if the BIOS does not put the mptable
          in the right place.
      -v4: Treat RESERVED_KERN as RAM in memblock.memory; those ranges are
          already in memblock.reserved. Use __NOT_KEEP_MEMBLOCK to make sure
          memblock-related code can be freed later.
      -v5: The generic __memblock_find_in_range() goes from high to low, and
          on 32-bit the active_region does include high pages, so the limit
          needs to be replaced with memblock.default_alloc_limit, aka
          get_max_mapped().
      -v6: Use current_limit instead.
      -v7: Check against MEMBLOCK_ERROR instead of -1ULL or -1L.
      -v8: Set memblock_can_resize early to handle EFI with more RAM entries.
      -v9: Update after the kmemleak changes in mainline.
      Suggested-by: David S. Miller <davem@davemloft.net>
      Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      72d7c3b3
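      A hedged before/after sketch of the conversion pattern points 1-3 of this
      commit describe, using the function names listed in the commit message
      (the wrapper functions and "EARLY BUF" label are illustrative):

          #include <linux/memblock.h>     /* headers of that era assumed */
          #include <asm/memblock.h>
          #include <asm/e820.h>

          /* Before: early_res-based reservation of an early buffer. */
          static u64 __init early_buf_old(u64 start, u64 end, u64 size, u64 align)
          {
                  u64 addr = find_e820_area(start, end, size, align);

                  reserve_early(addr, addr + size, "EARLY BUF");
                  /* ... use the buffer, then: */
                  free_early(addr, addr + size);
                  return addr;
          }

          /* After: the same flow on top of memblock. */
          static u64 __init early_buf_new(u64 start, u64 end, u64 size, u64 align)
          {
                  u64 addr = memblock_find_in_range(start, end, size, align);

                  memblock_x86_reserve_range(addr, addr + size, "EARLY BUF");
                  /* ... use the buffer; partial frees are now allowed, then: */
                  memblock_x86_free_range(addr, addr + size);
                  return addr;
          }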
  17. 26 Aug 2010, 1 commit
  18. 25 Aug 2010, 1 commit
  19. 24 Aug 2010, 1 commit
    • x86, vmware: Remove deprecated VMI kernel support · 9863c90f
      Alok Kataria committed
      With the recent innovations in CPU hardware acceleration technologies
      from Intel and AMD, VMware ran a few experiments to compare these
      techniques to the guest paravirtualization technique on VMware's
      platform. These hardware-assisted virtualization techniques have
      outperformed the benefits provided by VMI in most of the workloads.
      VMware expects that these hardware features will be ubiquitous in a
      couple of years; as a result, VMware has started a phased retirement of
      this feature from the hypervisor.
      
      Please note that VMI has always been an optimization and non-VMI kernels
      still work fine on VMware's platform. The latest versions of VMware's
      products which support VMI are Workstation 7.0 and vSphere 4.0 on the
      ESX side; future maintenance releases for these products will continue
      supporting VMI.
      
      For more details about the VMI retirement, take a look at:
      http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html
      
      This feature removal was scheduled for 2.6.37 back in September 2009.
      Signed-off-by: Alok N Kataria <akataria@vmware.com>
      LKML-Reference: <1282600151.19396.22.camel@ank32.eng.vmware.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      9863c90f
  20. 22 Aug 2010, 1 commit
  21. 20 Aug 2010, 1 commit
    • x86, hotplug: Serialize CPU hotplug to avoid bringup concurrency issues · d7c53c9e
      Borislav Petkov committed
      When testing cpu hotplug code on 32-bit we kept hitting the "CPU%d:
      Stuck ??" message due to multiple cores concurrently accessing the
      cpu_callin_mask, among others.
      
      These codepaths are not protected from concurrent access: there is no
      sane reason to make an already complex piece of code unnecessarily more
      complex, and we hit the issue only when insanely switching cores off-
      and online. So serialize hotplugging cores at the sysfs level and be
      done with it.
      
      [ v2.1: fix !HOTPLUG_CPU build ]
      
      Cc: <stable@kernel.org>
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <20100819181029.GC17171@aftab>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      d7c53c9e
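      A schematic sketch of serializing bringup at the sysfs level. This is an
      illustration of the idea only, not the actual patch, and the store
      handler name is hypothetical:

          #include <linux/cpu.h>
          #include <linux/mutex.h>

          /* Sketch only: one mutex so two cores are never brought up at the
           * same time from the sysfs "online" attribute. */
          static DEFINE_MUTEX(bringup_sketch_lock);

          static int sketch_online_store(unsigned int cpu)
          {
                  int ret;

                  mutex_lock(&bringup_sketch_lock);
                  ret = cpu_up(cpu);      /* existing bringup entry point */
                  mutex_unlock(&bringup_sketch_lock);

                  return ret;
          }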
  22. 27 Jul 2010, 1 commit
  23. 19 Jun 2010, 1 commit
    • x86, olpc: Add support for calling into OpenFirmware · fd699c76
      Andres Salomon committed
      Add support for saving OFW's cif, and later calling into it to run OFW
      commands.  OFW remains resident in memory, living within virtual range
      0xff800000 - 0xffc00000.  A single page directory entry points to the
      pgdir that OFW actually uses, so rather than saving the entire page
      table, we grab and install that one entry permanently in the kernel's
      page table.
      
      This is currently only used by the OLPC XO.  Note that this particular
      calling convention breaks PAE and PAT, and so cannot be used on newer
      x86 hardware.
      Signed-off-by: Andres Salomon <dilinger@queued.net>
      LKML-Reference: <20100618174653.7755a39a@dev.queued.net>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      fd699c76
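      A hedged sketch of the "grab one page directory entry" idea described
      above; the variable and function names are illustrative assumptions, not
      the identifiers used by the actual olpc_ofw.c code:

          #include <linux/init.h>
          #include <asm/io.h>
          #include <asm/pgtable.h>

          #define OFW_WINDOW_START 0xff800000UL

          static pgd_t saved_ofw_pde;     /* the single entry we keep */

          /* Sketch only: map OFW's page directory briefly, copy the one entry
           * covering 0xff800000 - 0xffc00000, and save it so it can later be
           * installed in the kernel's pgd before calling into the saved CIF. */
          static void __init sketch_grab_ofw_pde(unsigned long ofw_pgd_phys)
          {
                  pgd_t *ofw_pgd = (pgd_t *)early_ioremap(ofw_pgd_phys,
                                          PTRS_PER_PGD * sizeof(pgd_t));

                  saved_ofw_pde = ofw_pgd[pgd_index(OFW_WINDOW_START)];
                  early_iounmap(ofw_pgd, PTRS_PER_PGD * sizeof(pgd_t));
          }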
  24. 28 May 2010, 3 commits
  25. 22 May 2010, 1 commit
  26. 16 May 2010, 1 commit
    • lockup_detector: Adapt CONFIG_PERF_EVENT_NMI to other archs · c01d4323
      Frederic Weisbecker committed
      CONFIG_PERF_EVENT_NMI is something that needs to be enabled from the
      arch. This is fine on x86, as PERF_EVENTS is built in, but if other
      archs select it, they will need to handle the PERF_EVENTS dependency.
      
      Instead, handle the dependency in the generic layer:
      
      - archs need to tell what they support through HAVE_PERF_EVENTS_NMI
      - PERF_EVENTS_NMI is enabled automatically if we have PERF_EVENTS and
        HAVE_PERF_EVENTS_NMI.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      c01d4323
  27. 01 May 2010, 1 commit
    • hw-breakpoints: Separate constraint space for data and instruction breakpoints · 0102752e
      Frederic Weisbecker committed
      There are two prevailing approaches for archs to implement hardware
      breakpoints.
      
      The first is to separate the breakpoint address pattern definition space
      between data and instruction breakpoints. We then typically have
      distinct instruction address breakpoint registers and data address
      breakpoint registers, along with separate control registers for data
      and instruction breakpoints. This is the case for PowerPC and ARM, for
      example.
      
      The second consists in having a merged breakpoint address space
      definition for data and instruction breakpoints. Address registers can
      host either an instruction or a data address, and the access mode for
      the breakpoint is defined in a control register. This is the case for
      x86 and SuperH.
      
      This patch adds a new CONFIG_HAVE_MIXED_BREAKPOINTS_REGS config that
      archs can select if they belong to the second case. Those will have
      their slot allocation merged for instruction and data breakpoints.
      
      The others will keep separate slot tracking for data and instruction
      breakpoints.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: K. Prasad <prasad@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      0102752e
  28. 29 Apr 2010, 1 commit
  29. 07 Apr 2010, 1 commit
    • x86: Add optimized popcnt variants · d61931d8
      Borislav Petkov committed
      Add support for the hardware version of the Hamming weight function,
      popcnt, present in CPUs which advertise it under CPUID, Function
      0x0000_0001_ECX[23]. On CPUs which don't support it, we fall back to the
      default lib/hweight.c software versions.
      
      A synthetic benchmark comparing popcnt with __sw_hweight64 showed almost
      a 3x speedup on an F10h machine.
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <20100318112015.GC11152@aftab>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      d61931d8
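      A hedged sketch of the feature-check-plus-fallback idea (64-bit shown;
      the real patch wires the fast path in via the alternatives mechanism
      rather than a runtime branch, and the helper name here is illustrative):

          #include <linux/types.h>
          #include <asm/cpufeature.h>

          extern unsigned long __sw_hweight64(__u64 w);   /* lib/hweight.c */

          /* Sketch only: use the popcnt instruction when CPUID advertises it
           * (X86_FEATURE_POPCNT), otherwise fall back to the software count. */
          static inline unsigned long hweight64_sketch(__u64 w)
          {
                  unsigned long res;

                  if (!boot_cpu_has(X86_FEATURE_POPCNT))
                          return __sw_hweight64(w);

                  asm("popcnt %1, %0" : "=r" (res) : "rm" (w));
                  return res;
          }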
  30. 03 Apr 2010, 1 commit
    • x86: Increase CONFIG_NODES_SHIFT max to 10 · 51591e31
      David Rientjes committed
      Some larger systems require more than 512 nodes, so increase the
      maximum CONFIG_NODES_SHIFT to 10 for a new max of 1024 nodes.
      
      This was tested with numa=fake=64M on systems with more than
      64GB of RAM. A total of 1022 nodes were initialized.
      
      Successfully builds with no additional warnings on x86_64
      allyesconfig.
      
      ( No effect on any existing config. Newly enabled CONFIG_MAXSMP=y
        will see the new default. )
      Signed-off-by: David Rientjes <rientjes@google.com>
      LKML-Reference: <alpine.DEB.2.00.1003251538060.8589@chino.kir.corp.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      51591e31