1. 28 Jun 2016, 2 commits
  2. 26 May 2016, 5 commits
  3. 25 May 2016, 1 commit
  4. 21 May 2016, 3 commits
  5. 18 May 2016, 1 commit
  6. 17 May 2016, 3 commits
    • perf core: Separate accounting of contexts and real addresses in a stack trace · c85b0334
      Committed by Arnaldo Carvalho de Melo
      The perf_sample->ip_callchain->nr value includes all the entries in the
      ip_callchain->ip[] array, both real addresses and PERF_CONTEXT_{KERNEL,USER,etc}
      markers, while what the user expects is that the limit in the
      kernel.perf_event_max_stack sysctl, or in the upcoming per-event
      perf_event_attr.sample_max_stack knob, be honoured in terms of IP addresses
      in the stack trace.
      
      So allocate a bunch of extra entries for contexts, and do the accounting
      via perf_callchain_entry_ctx struct members.
      
      A new sysctl, kernel.perf_event_max_contexts_per_stack, is also
      introduced for investigating possible bugs in an arch's callchain
      implementation.
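      
      For illustration only (not part of this patch), a userspace consumer of
      PERF_SAMPLE_CALLCHAIN data can apply the same split: the ip[] array mixes
      real addresses with PERF_CONTEXT_* markers, and only the former should be
      counted against the configured stack depth. A minimal C sketch, using a
      local stand-in struct for the sample layout:
      
         #include <inttypes.h>
         #include <stdio.h>
         #include <linux/perf_event.h>   /* PERF_CONTEXT_* markers */
         
         /* Local stand-in for the callchain part of a PERF_SAMPLE_CALLCHAIN
          * record: a count followed by that many u64 entries. */
         struct sample_callchain {
                 __u64 nr;
                 __u64 ip[];
         };
         
         /* Count real addresses and context markers separately, mirroring
          * the accounting this patch does on the kernel side. */
         static void count_entries(const struct sample_callchain *chain)
         {
                 uint64_t addrs = 0, ctxs = 0;
         
                 for (__u64 i = 0; i < chain->nr; i++) {
                         /* PERF_CONTEXT_KERNEL, _USER, ... are encoded as
                          * values at or above PERF_CONTEXT_MAX. */
                         if (chain->ip[i] >= PERF_CONTEXT_MAX)
                                 ctxs++;
                         else
                                 addrs++;
                 }
                 printf("%" PRIu64 " addresses, %" PRIu64 " context markers\n",
                        addrs, ctxs);
         }
      
      Both limits are readable at runtime under /proc/sys/kernel/ as
      perf_event_max_stack and perf_event_max_contexts_per_stack.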
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/n/tip-3b4wnqk340c4sg4gwkfdi9yk@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • net: cls_u32: Add support for skip-sw flag to tc u32 classifier. · d34e3e18
      Committed by Samudrala, Sridhar
      On devices that support TC U32 offloads, this flag enables a filter to be
      added only to HW. skip-sw and skip-hw are mutually exclusive flags. By
      default, without any flags, the filter is added to both HW and SW, but no
      error check is done if adding it to HW fails. With skip-sw, failure to add
      to HW is treated as an error.
      
      Here is a sample script that adds two filters, one with the skip-sw flag
      and the other with the skip-hw flag.
      
         # add ingress qdisc
         tc qdisc add dev p4p1 ingress
      
         # enable hw tc offload.
         ethtool -K p4p1 hw-tc-offload on
      
         # add u32 filter with skip-sw flag.
         tc filter add dev p4p1 parent ffff: protocol ip prio 99 \
            handle 800:0:1 u32 ht 800: flowid 800:1 \
            skip-sw \
            match ip src 192.168.1.0/24 \
            action drop
      
         # add u32 filter with skip-hw flag.
         tc filter add dev p4p1 parent ffff: protocol ip prio 99 \
            handle 800:0:2 u32 ht 800: flowid 800:2 \
            skip-hw \
            match ip src 192.168.2.0/24 \
            action drop
      Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
      Acked-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 14 May 2016, 1 commit
  8. 13 May 2016, 23 commits
  9. 12 May 2016, 1 commit
    • kvm: introduce KVM_MAX_VCPU_ID · 0b1b1dfd
      Committed by Greg Kurz
      The KVM_MAX_VCPUS define provides the maximum number of vCPUs per guest, and
      also the upper limit for vCPU ids. This is okay for all archs except PowerPC,
      which can have higher ids depending on the cpu/core/thread topology. In the
      worst case (single-threaded guest, host with 8 threads per core), it limits
      the maximum number of vCPUs to KVM_MAX_VCPUS / 8.
      
      This patch separates the vCPU numbering from the total number of vCPUs by
      introducing KVM_MAX_VCPU_ID, defined as the maximal valid vCPU id plus one.
      
      The corresponding KVM_CAP_MAX_VCPU_ID allows userspace to validate vCPU ids
      before passing them to KVM_CREATE_VCPU.
      
      This patch only implements KVM_MAX_VCPU_ID with a specific value for PowerPC.
      Other archs continue to return KVM_MAX_VCPUS instead.
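      
      As a hedged illustration (not part of this patch), a VMM could consume the
      new capability from userspace roughly as follows; error handling is trimmed
      and the vCPU id is an arbitrary example value:
      
         #include <fcntl.h>
         #include <stdio.h>
         #include <sys/ioctl.h>
         #include <unistd.h>
         #include <linux/kvm.h>
         
         int main(void)
         {
                 int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
                 int vm  = ioctl(kvm, KVM_CREATE_VM, 0);
         
                 /* On kernels with this patch the capability returns
                  * KVM_MAX_VCPU_ID; unknown extensions return 0, so fall
                  * back to KVM_CAP_MAX_VCPUS on older kernels. */
                 int max_id = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPU_ID);
                 if (max_id <= 0)
                         max_id = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);
         
                 printf("valid vCPU ids: 0..%d\n", max_id - 1);
         
                 int wanted_id = 8;      /* arbitrary id chosen by the VMM */
                 if (wanted_id < max_id) {
                         int vcpu = ioctl(vm, KVM_CREATE_VCPU, wanted_id);
                         if (vcpu < 0)
                                 perror("KVM_CREATE_VCPU");
                         else
                                 close(vcpu);
                 }
         
                 close(vm);
                 close(kvm);
                 return 0;
         }
      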
      Suggested-by: Radim Krcmar <rkrcmar@redhat.com>
      Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
      Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>