1. 01 May 2009: 10 commits
  2. 30 Apr 2009: 18 commits
  3. 29 Apr 2009: 12 commits
    •
      Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6 · 3dacbdad
      Linus Torvalds committed
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (24 commits)
        e100: do not go D3 in shutdown unless system is powering off
        netfilter: revised locking for x_tables
        Bluetooth: Fix connection establishment with low security requirement
        Bluetooth: Add different pairing timeout for Legacy Pairing
        Bluetooth: Ensure that HCI sysfs add/del is preempt safe
        net: Avoid extra wakeups of threads blocked in wait_for_packet()
        net: Fix typo in net_device_ops description.
        ipv4: Limit size of route cache hash table
        Add reference to CAPI 2.0 standard
        Documentation/isdn/INTERFACE.CAPI
        update Documentation/isdn/00-INDEX
        ixgbe: Fix WoL functionality for 82599 KX4 devices
        veth: prevent oops caused by netdev destructor
        xfrm: wrong hash value for temporary SA
        forcedeth: tx timeout fix
        net: Fix LL_MAX_HEADER for CONFIG_TR_MODULE
        mlx4_en: Handle page allocation failure during receive
        mlx4_en: Fix cleanup flow on cq activation
        vlan: update vlan carrier state for admin up/down
        netfilter: xt_recent: fix stack overread in compat code
        ...
      3dacbdad
    •
      perf_counter: powerpc: allow use of limited-function counters · ab7ef2e5
      Paul Mackerras committed
      POWER5+ and POWER6 have two hardware counters with limited functionality:
      PMC5 counts instructions completed in run state and PMC6 counts cycles
      in run state.  (Run state is the state when a hardware RUN bit is 1;
      the idle task clears RUN while waiting for work to do and sets it when
      there is work to do.)
      
      These counters can't be written to by the kernel, can't generate
      interrupts, and don't obey the freeze conditions.  That means we can
      only use them for per-task counters (where we know we'll always be in
      run state; we can't put a per-task counter on an idle task), and only
      if we don't want interrupts and we do want to count in all processor
      modes.
      
      Obviously some counters can't go on a limited hardware counter, but there
      are also situations where a counter can only go on a limited hardware
      counter: if the counters already scheduled exclude some processor modes,
      and we want to add a per-task cycle or instruction counter that doesn't
      exclude any processor mode, it can only go on if it can use a limited
      hardware counter.
      
      To keep track of these constraints, this adds a flags argument to the
      processor-specific get_alternatives() functions, with three bits defined:
      one to say that we can accept alternative event codes that go on limited
      counters, one to say we only want alternatives on limited counters, and
      one to say that this is a per-task counter and therefore events that are
      gated by run state are equivalent to those that aren't (e.g. a "cycles"
      event is equivalent to a "cycles in run state" event).  These flags
      are computed for each counter and stored in the counter->hw.counter_base
      field (slightly wonky name for what it does, but it was an existing
      unused field).
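
The three flag bits can be sketched as plain bit definitions. The PPMU_* names below follow the powerpc perf_counter code, but the compute_flags() helper and its exact logic are illustrative assumptions:

```c
/* Flag bits passed to the processor-specific get_alternatives(). */
#define PPMU_LIMITED_PMC_OK    1  /* alternatives on limited counters acceptable */
#define PPMU_LIMITED_PMC_REQD  2  /* only want alternatives on limited counters */
#define PPMU_ONLY_COUNT_RUN    4  /* per-task: run-state-gated events equivalent */

/* Illustrative sketch of computing the flags for one counter.
 * Limited counters can't generate interrupts and count in all
 * processor modes, so they only suit per-task counters that want
 * neither interrupts nor mode exclusion. */
static unsigned int compute_flags(int per_task, int excludes_any_mode,
                                  int wants_interrupts)
{
    unsigned int flags = 0;

    if (per_task)
        flags |= PPMU_ONLY_COUNT_RUN;
    if (per_task && !excludes_any_mode && !wants_interrupts)
        flags |= PPMU_LIMITED_PMC_OK;
    return flags;
}
```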
      
      Since the limited counters don't freeze when we freeze the other counters,
      we need some special handling to avoid getting skew between things counted
      on the limited counters and those counted on normal counters.  To minimize
      this skew, if we are using any limited counters, we read PMC5 and PMC6
      immediately after setting and clearing the freeze bit.  This is done in
      a single asm in the new write_mmcr0() function.
      
      The code here is specific to PMC5 and PMC6 being the limited hardware
      counters.  Being more general (e.g. having a bitmap of limited hardware
      counter numbers) would have meant more complex code to read the limited
      counters when freezing and unfreezing the normal counters, with
      conditional branches, which would have increased the skew.  Since it
      isn't necessary for the code to be more general at this stage, it isn't.
      
      This also extends the back-ends for POWER5+ and POWER6 to be able to
      handle up to 6 counters rather than the 4 they previously handled.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <18936.19035.163066.892208@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ab7ef2e5
    •
      perf_counter: add/update copyrights · 98144511
      Ingo Molnar committed
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      98144511
    •
      perf_counter: update 'perf top' documentation · 38105f02
      Robert Richter committed
      The documentation about the perf-top build was outdated after perfstat
      was implemented. This updates it.
      
      [ Impact: update documentation ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-30-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      38105f02
    •
      perf_counter, x86: remove unused function argument in intel_pmu_get_status() · 19d84dab
      Robert Richter committed
      The mask argument is unused and thus can be removed.
      
      [ Impact: cleanup ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-29-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      19d84dab
    •
      perf_counter, x86: remove vendor check in fixed_mode_idx() · ef7b3e09
      Robert Richter committed
      The function fixed_mode_idx() is used generically. It now checks the
      num_counters_fixed value instead of the vendor to decide whether fixed
      counters are present.
      
      [ Impact: generalize code ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-28-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ef7b3e09
    •
      perf_counter, x86: introduce max_period variable · c619b8ff
      Robert Richter committed
      The counter period that can be programmed differs between x86 PMU
      models. This introduces a max_period value so that the generic
      implementation can check the maximum period for all models.
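
A minimal sketch of the idea, assuming an Intel-style 31-bit maximum period; the helper name is illustrative, not the kernel's exact one:

```c
#include <stdint.h>

/* Each PMU model advertises the largest period it can program;
 * generic code clamps the requested period against that limit. */
static uint64_t clamp_period(uint64_t max_period, uint64_t period)
{
    return period > max_period ? max_period : period;
}
```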
      
      [ Impact: generalize code ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-27-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c619b8ff
    •
      perf_counter, x86: return raw count with x86_perf_counter_update() · 4b7bfd0d
      Robert Richter committed
      To check whether a counter overflows on AMD CPUs, the upper bit of the
      raw counter value must be checked. This value is already available
      internally in x86_perf_counter_update(). The value is now returned so
      that it can be used directly to check for overflows.
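
Assuming 48-bit AMD counters, the overflow test on the raw value can be sketched as follows; the helper name and width macro are assumptions for illustration:

```c
#include <stdint.h>

#define COUNTER_BITS 48  /* assumed AMD counter width */

/* The counter is programmed to a large value and counts upward; while
 * the top implemented bit is still set, no overflow has occurred yet.
 * Once the counter wraps past zero, the top bit reads as clear. */
static int counter_overflowed(uint64_t raw_count)
{
    return !(raw_count & (1ULL << (COUNTER_BITS - 1)));
}
```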
      
      [ Impact: micro-optimization ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-26-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4b7bfd0d
    •
      perf_counter, x86: implement the interrupt handler for AMD cpus · a29aa8a7
      Robert Richter committed
      This patch implements the interrupt handler for AMD performance
      counters. Unlike the Intel PMU, there is no single status register
      and there are no fixed counters, which makes the handler very
      different, so it is useful to make the handler vendor specific. To
      check whether a counter has overflowed, the upper bit of the counter
      is checked. Only counters whose active bit is set are checked.
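
A sketch of the per-counter scan the message describes: with no global status register, each counter whose active bit is set is inspected individually. The names, counter count, and 48-bit width are illustrative assumptions:

```c
#include <stdint.h>

/* Returns a bitmask of active counters whose raw value indicates an
 * overflow (top implemented bit clear, assuming 48-bit counters). */
static uint64_t scan_overflows(uint64_t active_mask,
                               const uint64_t *raw, int num_counters)
{
    uint64_t overflowed = 0;

    for (int i = 0; i < num_counters; i++) {
        if (!(active_mask & (1ULL << i)))
            continue;  /* only counters with the active bit set are checked */
        if (!(raw[i] & (1ULL << 47)))
            overflowed |= 1ULL << i;
    }
    return overflowed;
}
```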
      
      With this patch throttling is enabled for AMD performance counters.
      
      This patch also reenables Linux performance counters on AMD cpus.
      
      [ Impact: re-enable perfcounters on AMD CPUs ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-25-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a29aa8a7
    •
      perf_counter, x86: change and remove pmu initialization checks · 85cf9dba
      Robert Richter committed
      Some functions are only called if the PMU was properly initialized,
      so those initialization checks can be removed. The way initialization
      is checked has changed too: now the pointer to the interrupt handler
      is checked; if it exists, the PMU is initialized. This also removes a
      static variable and uses struct x86_pmu as the only data source for
      the check.
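
The check-via-function-pointer pattern can be sketched like this; the struct and field names mimic struct x86_pmu but are illustrative:

```c
#include <stddef.h>

struct pmu_ops {
    /* set during initialization; NULL means the PMU was never set up */
    int (*handle_irq)(void *regs);
};

/* Placeholder handler used only to demonstrate a non-NULL pointer. */
static int dummy_handler(void *regs)
{
    (void)regs;
    return 0;
}

/* Initialization check: no extra static flag, the descriptor itself
 * is the only data source. */
static int pmu_initialized(const struct pmu_ops *ops)
{
    return ops->handle_irq != NULL;
}
```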
      
      [ Impact: simplify code ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-24-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      85cf9dba
    •
      perf_counter, x86: rework counter disable functions · d4369891
      Robert Richter committed
      As with the enable function, this patch reworks the disable functions
      and introduces x86_pmu_disable_counter(). The internal function
      interface in struct x86_pmu changed too.
      
      [ Impact: refactor and generalize code ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-23-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d4369891
    •
      perf_counter, x86: rework counter enable functions · 7c90cc45
      Robert Richter committed
      There is vendor specific code in generic x86 code, and there is vendor
      specific code that could be generic. This patch introduces
      x86_pmu_enable_counter() for generic x86 code. Fixed counter code for
      Intel is moved into Intel-only functions. In the end, checks and calls
      via function pointers are reduced to the necessary minimum. The
      internal function interface changed as well.
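
The generic enable path can be sketched as computing the event-select value with its architectural enable bit set (bit 22 in both the Intel IA32_PERFEVTSEL and AMD PERF_CTL layouts). The helper name is an assumption, and the real function also writes the result to the counter's control MSR:

```c
#include <stdint.h>

/* Bit 22 of the event-select register enables the counter on both
 * Intel and AMD architectural PMUs. */
#define EVNTSEL_ENABLE (1ULL << 22)

/* Compute the event-select value that enables the counter. */
static uint64_t enable_config(uint64_t config)
{
    return config | EVNTSEL_ENABLE;
}
```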
      
      [ Impact: refactor and generalize code ]
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1241002046-8832-22-git-send-email-robert.richter@amd.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7c90cc45