1. 03 Aug 2014, 1 commit
    • net: filter: split 'struct sk_filter' into socket and bpf parts · 7ae457c1
      Alexei Starovoitov authored
      clean up names related to socket filtering and bpf in the following way:
      - everything that deals with sockets keeps 'sk_*' prefix
      - everything that is pure BPF is changed to 'bpf_*' prefix
      
      split 'struct sk_filter' into
      struct sk_filter {
      	atomic_t        refcnt;
      	struct rcu_head rcu;
      	struct bpf_prog *prog;
      };
      and
      struct bpf_prog {
              u32                     jited:1,
                                      len:31;
              struct sock_fprog_kern  *orig_prog;
              unsigned int            (*bpf_func)(const struct sk_buff *skb,
                                                  const struct bpf_insn *filter);
              union {
                      struct sock_filter      insns[0];
                      struct bpf_insn         insnsi[0];
                      struct work_struct      work;
              };
      };
      so that 'struct bpf_prog' can be used independently of sockets and cleans up
      the 'unattached' bpf use cases
      
      split SK_RUN_FILTER macro into:
          SK_RUN_FILTER to be used with 'struct sk_filter *' and
          BPF_PROG_RUN to be used with 'struct bpf_prog *'
      
      __sk_filter_release(struct sk_filter *) gains a
      __bpf_prog_release(struct bpf_prog *) helper function
      
      also perform related renames for the functions that work
      with 'struct bpf_prog *', since they are part of the same cleanup:
      
      sk_filter_size -> bpf_prog_size
      sk_filter_select_runtime -> bpf_prog_select_runtime
      sk_filter_free -> bpf_prog_free
      sk_unattached_filter_create -> bpf_prog_create
      sk_unattached_filter_destroy -> bpf_prog_destroy
      sk_store_orig_filter -> bpf_prog_store_orig_filter
      sk_release_orig_filter -> bpf_release_orig_filter
      __sk_migrate_filter -> bpf_migrate_filter
      __sk_prepare_filter -> bpf_prepare_filter
      
      API for attaching classic BPF to a socket stays the same:
      sk_attach_filter(prog, struct sock *)/sk_detach_filter(struct sock *)
      and SK_RUN_FILTER(struct sk_filter *, ctx) to execute a program
      which is used by sockets, tun, af_packet
      
      API for 'unattached' BPF programs becomes:
      bpf_prog_create(struct bpf_prog **)/bpf_prog_destroy(struct bpf_prog *)
      and BPF_PROG_RUN(struct bpf_prog *, ctx) to execute a program
      which is used by isdn, ppp, team, seccomp, ptp, xt_bpf, cls_bpf, test_bpf
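
      To illustrate, here is a minimal sketch of how a caller of the renamed
      'unattached' API looks after this change. The trivial accept-all program
      and the demo_run_unattached_filter() helper are invented for this sketch;
      the pattern simply mirrors what the test_bpf/cls_bpf-style users listed
      above do with bpf_prog_create()/BPF_PROG_RUN()/bpf_prog_destroy():

      #include <linux/kernel.h>
      #include <linux/errno.h>
      #include <linux/filter.h>
      #include <linux/skbuff.h>

      /* trivial classic BPF program: return ~0, i.e. accept the whole packet */
      static struct sock_filter accept_all[] = {
              BPF_STMT(BPF_RET | BPF_K, 0xffffffff),
      };

      static int demo_run_unattached_filter(struct sk_buff *skb)
      {
              struct sock_fprog_kern fprog = {
                      .len    = ARRAY_SIZE(accept_all),
                      .filter = accept_all,
              };
              struct bpf_prog *prog;
              unsigned int verdict;
              int err;

              err = bpf_prog_create(&prog, &fprog);   /* was sk_unattached_filter_create() */
              if (err)
                      return err;

              verdict = BPF_PROG_RUN(prog, skb);      /* was SK_RUN_FILTER() on a sk_filter */
              bpf_prog_destroy(prog);                 /* was sk_unattached_filter_destroy() */

              /* a non-zero verdict means 'accept up to that many bytes' */
              return verdict ? 0 : -EPERM;
      }
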
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 31 Jul 2014, 1 commit
    • random32: mix in entropy from core to late initcall · 4ada97ab
      Hannes Frederic Sowa authored
      Currently, we have a 3-stage seeding process in prandom():
      
      Phase 1 covers the early initialization of the prandom() subsystem,
      which happens during core_initcall() and most likely lasts until the
      beginning of the late_initcall() phase. Here, the system might not
      have enough entropy available for seeding with strong randomness from
      the random driver. That means we currently have a weak 32-bit LCG
      seeding PRNG status register 1 and mixing that successively into the
      other three registers just to get it up and running.
      
      Phase 2 starts with the late_initcall() phase, i.e. when the random
      driver has initialized its non-blocking pool with enough entropy. At
      that time, we throw away *all* inner state from its 4 registers and do
      a full reseed with strong randomness.
      
      Phase 3 starts right after that and periodically reseeds status
      register 1, with random slack, from a strong random source.
      
      A problem in phase 1 is that data structures can be initialized
      during bootup, e.g. at module load time, and thus access a weakly
      seeded prandom and are never changed for the rest of their lifetime,
      thus carrying along the results of a weak seed. Let's make sure that
      current and also future users access a possibly better early-seeded
      prandom.
      
      This patch therefore improves phase 1 by trying to make it more
      'unpredictable' through mixing in a seed from a hardware source where
      one is available. The mix-in xors the inner state with the outcome of
      one of the two functions arch_get_random_{,seed}_int(), preferably
      arch_get_random_seed_int(), as it likely represents a non-deterministic
      random bit generator in hardware rather than a cryptographically
      secure PRNG in hardware. However, not all systems have the former, so
      we fall back to the hardware PRNG if it is available. As we xor the
      seed into the current state, the worst case would be a hardware source
      that is unverifiably compromised or backdoored. Even then, the result
      would be no worse than our original early seeding function
      prandom_seed_very_weak(), since we mix through xor, which is entropy
      preserving.
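
      A rough sketch of the mix-in idea follows. The helper name
      prandom_arch_mix_in() is hypothetical and the xor targets just one
      Tausworthe register here; the actual patch walks the per-cpu state and
      may differ in detail, but the preference order and the xor are as
      described above:

      #include <linux/random.h>

      static void prandom_arch_mix_in(struct rnd_state *state)
      {
              unsigned int seed;

              /* prefer the non-deterministic HW source, fall back to the HW PRNG */
              if (!arch_get_random_seed_int(&seed) && !arch_get_random_int(&seed))
                      return;

              /* xor is entropy preserving: a bad HW value cannot make things worse */
              state->s1 ^= seed;
      }
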
      
      Joint work with Daniel Borkmann.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 25 Jul 2014, 1 commit
  4. 21 Jul 2014, 1 commit
  5. 04 Jul 2014, 1 commit
  6. 03 Jul 2014, 1 commit
  7. 28 Jun 2014, 2 commits
  8. 26 Jun 2014, 4 commits
  9. 24 Jun 2014, 3 commits
  10. 21 Jun 2014, 1 commit
    • swiotlb: don't assume PA 0 is invalid · 8e0629c1
      Jan Beulich authored
      In 2.6.29 io_tlb_orig_addr[] got converted from storing virtual addresses
      to storing physical ones. While checking virtual addresses against NULL
      is a legitimate thing to catch invalid entries, checking physical ones
      against zero isn't: There's no guarantee that PFN 0 is reserved on a
      particular platform.
      
      Since it is unclear whether the check in swiotlb_tbl_unmap_single() is
      actually needed, retain it but check against a guaranteed invalid physical
      address. This requires setting up the array in a suitable fashion. And
      since the original code failed to invalidate array entries when regions
      get unmapped, this is being fixed at once along with adding a similar
      check to swiotlb_tbl_sync_single().
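
      A minimal sketch of the idea, with a sentinel macro and array names
      invented for this example (an all-ones physical address can never be a
      valid bounce-buffer origin), not necessarily the identifiers used in
      the patch:

      #include <linux/types.h>

      #define DEMO_INVALID_PHYS_ADDR  (~(phys_addr_t)0)

      static phys_addr_t io_tlb_orig_addr_demo[128];

      static void demo_init_slots(void)
      {
              int i;

              /* mark every slot unused with the sentinel, not with 0:
               * PFN 0 may be perfectly valid RAM on some platforms */
              for (i = 0; i < 128; i++)
                      io_tlb_orig_addr_demo[i] = DEMO_INVALID_PHYS_ADDR;
      }

      static bool demo_slot_in_use(int index)
      {
              return io_tlb_orig_addr_demo[index] != DEMO_INVALID_PHYS_ADDR;
      }
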
      
      Obviously the less intrusive change would be to simply drop the check in
      swiotlb_tbl_unmap_single().
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  11. 12 Jun 2014, 1 commit
    • cpumask: Utility function to set n'th cpu - local cpu first · da91309e
      Amir Vadai authored
      This function sets the n'th CPU, local CPUs first.
      For example, a 16-core server on which the even-numbered CPUs are
      local will get the following values:
      cpumask_set_cpu_local_first(0, numa, cpumask) => cpu 0 is set
      cpumask_set_cpu_local_first(1, numa, cpumask) => cpu 2 is set
      ...
      cpumask_set_cpu_local_first(7, numa, cpumask) => cpu 14 is set
      cpumask_set_cpu_local_first(8, numa, cpumask) => cpu 1 is set
      cpumask_set_cpu_local_first(9, numa, cpumask) => cpu 3 is set
      ...
      cpumask_set_cpu_local_first(15, numa, cpumask) => cpu 15 is set
      
      Currently this function will be used by multi-queue networking
      devices to calculate the IRQ affinity mask, such that as many local
      CPUs as possible are utilized to handle the multi-queue device's IRQs.
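
      A hedged usage sketch of how a multi-queue driver could apply this;
      the helper demo_set_queue_affinity(), its error handling, and the
      assumption that cpumask_set_cpu_local_first() returns 0 on success are
      illustrative only:

      #include <linux/cpumask.h>
      #include <linux/interrupt.h>
      #include <linux/gfp.h>

      static void demo_set_queue_affinity(unsigned int irq, int queue, int numa_node)
      {
              cpumask_var_t mask;

              if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
                      return;

              /* queue 0 -> first NUMA-local CPU, queue 1 -> next local CPU, ...;
               * remote CPUs are only used once the local ones are exhausted */
              if (cpumask_set_cpu_local_first(queue, numa_node, mask) == 0)
                      irq_set_affinity_hint(irq, mask);

              free_cpumask_var(mask);
      }
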
      Signed-off-by: Amir Vadai <amirv@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 11 Jun 2014, 1 commit
  13. 07 Jun 2014, 7 commits
  14. 05 Jun 2014, 15 commits