1. 07 Aug, 2014: 30 commits
  2. 03 Aug, 2014: 3 commits
    • lib: Resizable, Scalable, Concurrent Hash Table · 7e1e7763
      Committed by Thomas Graf
      Generic implementation of a resizable, scalable, concurrent hash table
      based on [0]. The implementation supports both fixed-size keys,
      specified via an offset and length, and arbitrary keys via
      user-supplied hash and compare functions.
      
      Lookups are lockless and protected as RCU read-side critical sections.
      Automatic growing and shrinking based on user-configurable watermarks
      is available while still allowing concurrent lookups to take place.
      
      Objects to be hashed must include a struct rhash_head. The existing
      struct hlist_head is not used because, during expansion and shrinking,
      two buckets may point to a single entry, which would lead to obscure
      reverse-chaining behaviour.
      
      Code includes a boot selftest if CONFIG_TEST_RHASHTABLE is defined.
      (A hedged usage sketch follows this entry.)
      
      [0] https://www.usenix.org/legacy/event/atc11/tech/final_files/Triplett.pdf
      
      Signed-off-by: Thomas Graf <tgraf@suug.ch>
      Reviewed-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7e1e7763
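
      Below is a minimal, hedged usage sketch of the hash table this commit
      describes. The rhashtable_init()/rhashtable_lookup() names and the
      rhashtable_params fields (nelem_hint, key_len, key_offset, head_offset)
      are taken from the rhashtable code rather than from this commit
      message, so treat the exact signatures as assumptions. It shows an
      object embedding struct rhash_head, a fixed-size key described by
      offset and length, and a lockless lookup inside an RCU read-side
      critical section.

      /* Illustrative sketch; the example_* names are hypothetical. */
      #include <linux/rhashtable.h>
      #include <linux/rcupdate.h>
      #include <linux/stddef.h>

      struct test_obj {
      	int			value;	/* fixed-size key */
      	struct rhash_head	node;	/* linkage required by rhashtable */
      };

      static struct rhashtable ht;

      static int example_init(void)
      {
      	struct rhashtable_params params = {
      		.nelem_hint	= 8,
      		.key_len	= sizeof(int),
      		.key_offset	= offsetof(struct test_obj, value),
      		.head_offset	= offsetof(struct test_obj, node),
      	};

      	/* The table grows and shrinks automatically based on watermarks. */
      	return rhashtable_init(&ht, &params);
      }

      static struct test_obj *example_lookup(int key)
      {
      	struct test_obj *obj;

      	/* Lockless lookup; in real code the object must be used (or
      	 * reference-counted) before rcu_read_unlock(). */
      	rcu_read_lock();
      	obj = rhashtable_lookup(&ht, &key);
      	rcu_read_unlock();

      	return obj;
      }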
    • iovec: make sure the caller actually wants anything in memcpy_fromiovecend · 06ebb06d
      Committed by Sasha Levin
      Check for the case where the caller requests 0 bytes instead of
      running off and dereferencing potentially invalid iovecs. (A hedged
      sketch of the guard follows this entry.)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      06ebb06d
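
      Below is a hedged sketch of the guard this change describes: bail out
      before the iovec-skipping loop when the caller asked for zero bytes,
      so a potentially invalid iovec is never dereferenced. The body is a
      simplified illustration of memcpy_fromiovecend(), not the exact
      kernel implementation.

      /* Illustrative sketch; the *_sketch name is hypothetical. */
      #include <linux/uio.h>
      #include <linux/uaccess.h>
      #include <linux/kernel.h>
      #include <linux/errno.h>

      static int memcpy_fromiovecend_sketch(unsigned char *kdata,
      				      const struct iovec *iov,
      				      int offset, int len)
      {
      	/* No data? Done! Do not touch iov at all. */
      	if (len == 0)
      		return 0;

      	/* Skip over the iovecs already consumed by 'offset'. */
      	while (offset >= iov->iov_len) {
      		offset -= iov->iov_len;
      		iov++;
      	}

      	/* Copy the remaining bytes, crossing iovec boundaries as needed. */
      	while (len > 0) {
      		u8 __user *base = iov->iov_base + offset;
      		int copy = min_t(unsigned int, len, iov->iov_len - offset);

      		offset = 0;
      		if (copy_from_user(kdata, base, copy))
      			return -EFAULT;
      		len -= copy;
      		kdata += copy;
      		iov++;
      	}

      	return 0;
      }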
    • net: filter: split 'struct sk_filter' into socket and bpf parts · 7ae457c1
      Committed by Alexei Starovoitov
      clean up names related to socket filtering and bpf in the following way:
      - everything that deals with sockets keeps 'sk_*' prefix
      - everything that is pure BPF is changed to 'bpf_*' prefix
      
      split 'struct sk_filter' into
      struct sk_filter {
      	atomic_t        refcnt;
      	struct rcu_head rcu;
      	struct bpf_prog *prog;
      };
      and
      struct bpf_prog {
              u32                     jited:1,
                                      len:31;
              struct sock_fprog_kern  *orig_prog;
              unsigned int            (*bpf_func)(const struct sk_buff *skb,
                                                  const struct bpf_insn *filter);
              union {
                      struct sock_filter      insns[0];
                      struct bpf_insn         insnsi[0];
                      struct work_struct      work;
              };
      };
      so that 'struct bpf_prog' can be used independently of sockets, which
      cleans up the 'unattached' BPF use cases
      
      split SK_RUN_FILTER macro into:
          SK_RUN_FILTER to be used with 'struct sk_filter *' and
          BPF_PROG_RUN to be used with 'struct bpf_prog *'
      
      __sk_filter_release(struct sk_filter *) gains a
      __bpf_prog_release(struct bpf_prog *) helper function
      
      also perform related renames for the functions that work
      with 'struct bpf_prog *', along the same lines:
      
      sk_filter_size -> bpf_prog_size
      sk_filter_select_runtime -> bpf_prog_select_runtime
      sk_filter_free -> bpf_prog_free
      sk_unattached_filter_create -> bpf_prog_create
      sk_unattached_filter_destroy -> bpf_prog_destroy
      sk_store_orig_filter -> bpf_prog_store_orig_filter
      sk_release_orig_filter -> bpf_release_orig_filter
      __sk_migrate_filter -> bpf_migrate_filter
      __sk_prepare_filter -> bpf_prepare_filter
      
      API for attaching classic BPF to a socket stays the same:
      sk_attach_filter(prog, struct sock *)/sk_detach_filter(struct sock *)
      and SK_RUN_FILTER(struct sk_filter *, ctx) to execute a program
      which is used by sockets, tun, af_packet
      
      API for 'unattached' BPF programs becomes:
      bpf_prog_create(struct bpf_prog **)/bpf_prog_destroy(struct bpf_prog *)
      and BPF_PROG_RUN(struct bpf_prog *, ctx) to execute a program,
      which is used by isdn, ppp, team, seccomp, ptp, xt_bpf, cls_bpf and
      test_bpf. (A hedged usage sketch follows this entry.)
      Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7ae457c1
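
      Below is a hedged sketch of the 'unattached' path described above. It
      assumes bpf_prog_create() takes a struct sock_fprog_kern describing a
      classic BPF program, as the predecessor sk_unattached_filter_create()
      did, and that BPF_PROG_RUN() takes the program plus an skb context;
      the single accept-all instruction is purely illustrative.

      /* Illustrative sketch; the example_* names are hypothetical. */
      #include <linux/filter.h>
      #include <linux/skbuff.h>
      #include <linux/kernel.h>

      static struct bpf_prog *example_prog;

      static int example_load(void)
      {
      	/* Classic BPF, one instruction: accept the whole packet. */
      	struct sock_filter insns[] = {
      		BPF_STMT(BPF_RET | BPF_K, 0xffff),
      	};
      	struct sock_fprog_kern fprog = {
      		.len	= ARRAY_SIZE(insns),
      		.filter	= insns,
      	};

      	/* 'Unattached' use case: no socket and no struct sk_filter. */
      	return bpf_prog_create(&example_prog, &fprog);
      }

      static unsigned int example_run(struct sk_buff *skb)
      {
      	/* Pure BPF execution path via the new struct bpf_prog. */
      	return BPF_PROG_RUN(example_prog, skb);
      }

      static void example_unload(void)
      {
      	bpf_prog_destroy(example_prog);
      }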
  3. 31 Jul, 2014: 1 commit
    • random32: mix in entropy from core to late initcall · 4ada97ab
      Committed by Hannes Frederic Sowa
      Currently, we have a 3-stage seeding process in prandom():
      
      Phase 1 covers the early initialization of the prandom() subsystem,
      which happens during core_initcall() and most likely lasts until the
      beginning of the late_initcall() phase. Here, the system might not
      yet have enough entropy available for seeding with strong randomness
      from the random driver. That means we currently have a weak 32-bit
      LCG seeding PRNG status register 1 and mixing that successively into
      the other three registers just to get the PRNG up and running.
      
      Phase 2 starts with the late_initcall() phase, i.e. once the random
      driver has initialized its non-blocking pool with enough entropy. At
      that point, we throw away *all* inner state of the four registers and
      do a full reseed with strong randomness.
      
      Phase 3 starts right after that and periodically reseeds status
      register 1, with random slack, from a strong random source.
      
      A problem in phase 1 is that data structures can be initialized
      during bootup, e.g. at module load time, and thus draw from a weakly
      seeded prandom; they are then never changed for the rest of their
      lifetime and carry along the results of a weak seed. Let's make sure
      that current and future users access a possibly better early-seeded
      prandom.
      
      This patch therefore improves phase 1 by trying to make it more
      'unpredictable' through mixing in a seed from a possible hardware
      source. The mix-in XORs the inner state with the outcome of one of
      the two functions arch_get_random_{,seed}_int(), preferably
      arch_get_random_seed_int(), as it likely represents a
      non-deterministic random bit generator in hardware rather than a
      cryptographically secure PRNG in hardware. However, not all systems
      have the former, so we use the hardware PRNG as a fallback if
      available. As we XOR the seed into the current state, the worst case
      is a hardware source that is unverifiably compromised or backdoored;
      even then the result is no worse than our original early seeding
      function prandom_seed_very_weak(), since we mix through XOR, which is
      entropy-preserving. (A hedged sketch of the mix-in follows this
      entry.)
      
      Joint work with Daniel Borkmann.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4ada97ab
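
      Below is a hedged sketch of the phase-1 mix-in described above: pull
      a 32-bit value from arch_get_random_seed_int() if available, fall
      back to arch_get_random_int(), and XOR it into the LCG-derived early
      state of all four registers. The helper names and the omitted
      per-register minimum clamping make this an illustration, not the
      exact patch.

      /* Illustrative sketch; the *_sketch names are hypothetical. */
      #include <linux/random.h>

      static u32 extract_hwseed_sketch(void)
      {
      	u32 val = 0;

      	/* Prefer the HW non-deterministic source, fall back to the HW
      	 * PRNG; if neither exists, val stays 0 and the XOR below is a
      	 * no-op. */
      	(void)(arch_get_random_seed_int(&val) ||
      	       arch_get_random_int(&val));

      	return val;
      }

      #define LCG(x) ((x) * 69069U)	/* weak LCG step used for early seeding */

      static void prandom_seed_early_sketch(struct rnd_state *state, u32 seed)
      {
      	/* XOR is entropy-preserving: even a compromised HW source cannot
      	 * make this worse than the plain LCG seeding it replaces. */
      	state->s1 = extract_hwseed_sketch() ^ LCG(seed);
      	state->s2 = extract_hwseed_sketch() ^ LCG(state->s1);
      	state->s3 = extract_hwseed_sketch() ^ LCG(state->s2);
      	state->s4 = extract_hwseed_sketch() ^ LCG(state->s3);
      }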
  4. 25 Jul, 2014: 1 commit
  5. 24 Jul, 2014: 1 commit
  6. 23 Jul, 2014: 1 commit
  7. 21 Jul, 2014: 1 commit
  8. 18 Jul, 2014: 2 commits