1. 21 Oct 2010: 4 commits
    • security: remove unused parameter from security_task_setscheduler() · b0ae1981
      Committed by KOSAKI Motohiro
      No security module should change the sched_param parameter of
      security_task_setscheduler().  Doing so is not only meaningless, but
      also produces a harmful result if the caller passes a static variable.
      
      This patch removes the policy and sched_param parameters from
      security_task_setscheduler() because no security module uses them.
      
      Cc: James Morris <jmorris@namei.org>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: James Morris <jmorris@namei.org>
    • x86, mm: Enable ARCH_DMA_ADDR_T_64BIT with X86_64 || HIGHMEM64G · 66f2b061
      Committed by FUJITA Tomonori
      Set CONFIG_ARCH_DMA_ADDR_T_64BIT when we set dma_addr_t to 64 bits in
      <asm/types.h>; this allows Kconfig decisions based on this property.
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      LKML-Reference: <201010202255.o9KMtZXu009370@imap1.linux-foundation.org>
      Acked-by: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
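Per the subject line, the Kconfig side of this change presumably amounts to a fragment along these lines (a sketch using the symbol names given above, not a verbatim quote of the patch):

```kconfig
config ARCH_DMA_ADDR_T_64BIT
	def_bool X86_64 || HIGHMEM64G
```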
    • x86: Spread tlb flush vector between nodes · 93296720
      Committed by Shaohua Li
      Currently, flush TLB vector allocation is based on the equation below:
      	sender = smp_processor_id() % 8
      This isn't optimal: CPUs from different nodes can get the same vector,
      which causes a lot of lock contention. Instead, we can assign the same
      vectors to CPUs from the same node, while different nodes get different
      vectors. This has the advantages below:
      a. If there is lock contention, it is between CPUs from one node. This
      should be much cheaper than contention between nodes.
      b. Lock contention between nodes is avoided completely. This especially
      benefits kswapd, the biggest user of TLB flush, since kswapd sets its
      affinity to a specific node.
      
      In my test, this reduced CPU overhead by more than 20% in the extreme
      case. The test machine has 4 nodes and each node has 16 CPUs. I bound
      each node's kswapd to the first CPU of the node, then ran a workload
      with 4 threads doing sequential mmap file reads. The files are empty
      sparse files. This workload triggers a lot of page reclaim and TLB
      flushing. Binding kswapd makes it easy to trigger the extreme TLB flush
      lock contention; otherwise kswapd keeps migrating between the CPUs of a
      node and I can't get stable results. Of course, real workloads won't
      always show such heavy TLB flush lock contention, but it is possible.
      
      [ hpa: folded in fix from Eric Dumazet to use this_cpu_read() ]
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      LKML-Reference: <1287544023.4571.8.camel@sli10-conroe.sh.intel.com>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • x86, mm: Fix incorrect data type in vmalloc_sync_all() · f01f7c56
      Committed by Borislav Petkov
      arch/x86/mm/fault.c: In function 'vmalloc_sync_all':
      arch/x86/mm/fault.c:238: warning: assignment makes integer from pointer without a cast
      
      The warning was introduced by commit 617d34d9.
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
      LKML-Reference: <20101020103642.GA3135@kryptos.osrc.amd.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  2. 20 Oct 2010: 32 commits
  3. 19 Oct 2010: 4 commits
    • [S390] hardirq: remove pointless header file includes · 3f7edb16
      Committed by Heiko Carstens
      Remove a couple of pointless header file includes.
      Fixes a compile bug caused by header file include dependencies with
      "irq: Add tracepoint to softirq_raise" within linux-next.
      Reported-by: Sachin Sant <sachinp@in.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      [ cherry-picked from the s390 tree to fix "2bf2160d: irq: Add tracepoint to softirq_raise" ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • [IA64] Move local_softirq_pending() definition · 3c4ea5b4
      Committed by Tony Luck
      Ugly #include dependencies. We need to have local_softirq_pending()
      defined before it gets used in <linux/interrupt.h>. But <asm/hardirq.h>
      provides the definition *after* this #include chain:
        <linux/irq.h>
          <asm/irq.h>
            <asm/hw_irq.h>
              <linux/interrupt.h>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      [ cherry-picked from the ia64 tree to fix "2bf2160d: irq: Add tracepoint to softirq_raise" ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: ioapic: Call free_irte only if interrupt remapping enabled · 9717967c
      Committed by Yinghai Lu
      On a system that supports interrupt remapping, booting with "intremap=off" produces:
      
      [  177.895501] BUG: unable to handle kernel NULL pointer dereference at 00000000000000f8
      [  177.913316] IP: [<ffffffff8145fc18>] free_irte+0x47/0xc0
      ...
      [  178.173326] Call Trace:
      [  178.173574]  [<ffffffff810515b4>] destroy_irq+0x3a/0x75
      [  178.192934]  [<ffffffff81051834>] arch_teardown_msi_irq+0xe/0x10
      [  178.193418]  [<ffffffff81458dc3>] arch_teardown_msi_irqs+0x56/0x7f
      [  178.213021]  [<ffffffff81458e79>] free_msi_irqs+0x8d/0xeb
      
      Call free_irte only when interrupt remapping is enabled.
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <4CBCB274.7010108@kernel.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • perf, powerpc: Fix power_pmu_event_init to not use event->ctx · 57fa7214
      Committed by Paul Mackerras
      Commit c3f00c70 ("perf: Separate find_get_context() from event
      initialization") changed the generic perf_event code to call
      perf_event_alloc, which calls the arch-specific event_init code,
      before looking up the context for the new event.  Unfortunately,
      power_pmu_event_init uses event->ctx->task to see whether the
      new event is a per-task event or a system-wide event, and thus
      crashes since event->ctx is NULL at the point where
      power_pmu_event_init gets called.
      
      (The reason it needs to know whether it is a per-task event is
      because there are some hardware events on Power systems which
      only count when the processor is not idle, and there are some
      fixed-function counters which count such events.  For example,
      the "run cycles" event counts cycles when the processor is not
      idle.  If the user asks to count cycles, we can use "run cycles"
      if this is a per-task event, since the processor is running when
      the task is running, by definition.  We can't use "run cycles"
      if the user asks for "cycles" on a system-wide counter.)
      
      Fortunately the information we need is in the
      event->attach_state field, so we just use that instead.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20101019055535.GA10398@drongo>
      Reported-by: Alexey Kardashevskiy <aik@au1.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>