1. 10 Apr 2016: 4 commits
    • NFC: pn533: Separate physical layer from the core implementation · 9815c7cf
      Michael Thalmeier committed
      The driver now has all the core logic isolated in one file, and all
      the hardware link specifics in another. Writing a pn533 driver
      on top of another hardware link is now just a matter of adding a
      new file with that hardware's specifics.
      
      The first user of this separation will be the i2c based pn532
      driver that reuses pn533 core implementation on top of an i2c
      layer.
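      A minimal sketch of what such a new hardware-link file could look like,
      assuming a phy_ops-style interface built around the send_frame_async
      hook mentioned elsewhere in this series; the struct and function names
      below are illustrative, not the driver's actual API:

          /* Hypothetical hardware link registering with the pn533 core. */
          #include <linux/skbuff.h>

          struct pn533;                       /* opaque core context */

          struct pn533_phy_ops {              /* assumed ops table layout */
              int  (*send_frame_async)(struct pn533 *dev, struct sk_buff *out);
              void (*abort_cmd)(struct pn533 *dev);
          };

          static int my_link_send_frame_async(struct pn533 *dev,
                                              struct sk_buff *out)
          {
              /* push the frame over the new transport (i2c, SPI, UART, ...) */
              return 0;
          }

          static void my_link_abort_cmd(struct pn533 *dev)
          {
              /* cancel any in-flight transfer on the transport */
          }

          static const struct pn533_phy_ops my_link_phy_ops = {
              .send_frame_async = my_link_send_frame_async,
              .abort_cmd        = my_link_abort_cmd,
          };

          /* The transport's probe routine would then hand my_link_phy_ops to
           * the core via a pn533_register_device()-style call (name assumed). */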
      Signed-off-by: Michael Thalmeier <michael.thalmeier@hale.at>
      Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
    • NFC: pn533: Fix socket deadlock · 37f895d7
      Michael Thalmeier committed
      A deadlock can occur when the NFC raw socket is closed while
      the driver is processing a command.
      
      Following is the call graph of the affected situation:
      
      send data via raw_sock:
      -------------
      rawsock_tx_work
        sock_hold => socket refcnt++
        nfc_data_exchange => cb = rawsock_data_exchange_complete
      
          ops->im_transceive = pn533_transceive => arg->cb = cb
                                     = rawsock_data_exchange_complete
      
            pn533_send_data_async => cb = pn533_data_exchange_complete
      
              __pn533_send_async => cmd->complete_cb = cb
                                    = pn533_data_exchange_complete
      
                if_ops->send_frame_async
      
      response:
      --------
      pn533_recv_response
        queue_work(priv->wq, &priv->cmd_complete_work)
      
      pn533_wq_cmd_complete
      
        pn533_send_async_complete
      
          cmd->complete_cb() = pn533_data_exchange_complete()
      
            arg->cb() = rawsock_data_exchange_complete()
      
              sock_put => socket refcnt-- => If the corresponding
                          socket gets closed in the meantime socket
                          will be destructed
      
                sk_free
      
                  __sk_free
      
                    sk->sk_destruct = rawsock_destruct
      
                      nfc_deactivate_target
      
                        ops->deactivate_target = pn533_deactivate_target
      
                          pn533_send_cmd_sync
      
                            pn533_send_cmd_async
      
                              __pn533_send_async
      
                                list_add_tail(&cmd->queue,&dev->cmd_queue)
                                        => add to command list because
                                           a command is currently
                                           processed
      
                              wait_for_completion
                                         => the workqueue thread waits
                                            here because it is the one
                                            processing the commands
                                               => deadlock
      
      To fix the deadlock, pn533_deactivate_target is changed to
      issue the PN533_CMD_IN_RELEASE command in async mode. This
      way nothing blocks, and the release command is executed after
      the currently running command (see the sketch below).
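      A minimal sketch of the async variant described above, reusing names
      from the call graph (pn533_send_cmd_async, PN533_CMD_IN_RELEASE);
      helper names and exact signatures are assumptions, not the literal patch:

          /* Completion callback: just free the IN_RELEASE response. */
          static int pn533_release_complete(struct pn533 *dev, void *arg,
                                            struct sk_buff *resp)
          {
              if (IS_ERR(resp))
                  return PTR_ERR(resp);
              dev_kfree_skb(resp);
              return 0;
          }

          static void pn533_deactivate_target(struct nfc_dev *nfc_dev,
                                              struct nfc_target *target, u8 mode)
          {
              struct pn533 *dev = nfc_get_drvdata(nfc_dev);
              struct sk_buff *skb;

              skb = pn533_alloc_skb(dev, sizeof(u8));   /* helper name assumed */
              if (!skb)
                  return;
              *(u8 *)skb_put(skb, sizeof(u8)) = 1;      /* Tg: logical target */

              /* Queued behind the command currently being processed; the
               * workqueue never waits on itself, so no deadlock. */
              pn533_send_cmd_async(dev, PN533_CMD_IN_RELEASE, skb,
                                   pn533_release_complete, NULL);
          }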
      Signed-off-by: Michael Thalmeier <michael.thalmeier@hale.at>
      Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
    • NFC: pn533: Send ATR_REQ only if NFC_PROTO_NFC_DEP bit is set · e997ebbe
      Michael Thalmeier committed
      Currently it is not possible to poll only for passive targets
      with the pn533 driver. To change this, ATR_REQ is now only sent
      when NFC_PROTO_NFC_DEP is explicitly requested in poll_protocols.
      As most implementations (e.g. neard) poll for all protocols
      that are reported to be supported by the adapter, this should
      not have much of an effect on current implementations.
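      A minimal sketch of the gating idea, using the NFC_PROTO_NFC_DEP_MASK
      bit from the uapi NFC header; the helper name is illustrative:

          #include <linux/nfc.h>        /* NFC_PROTO_NFC_DEP_MASK */
          #include <linux/types.h>

          /* Only start NFC-DEP (i.e. send ATR_REQ) when the caller explicitly
           * asked for it; a purely passive poll skips it. */
          static bool pn533_poll_wants_dep(u32 poll_protocols)
          {
              return !!(poll_protocols & NFC_PROTO_NFC_DEP_MASK);
          }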
      Signed-off-by: Michael Thalmeier <michael.thalmeier@hale.at>
      Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
    • ipv6: fix inet6_lookup_listener() · 03c5b534
      Eric Dumazet committed
      A stupid refactoring bug in inet6_lookup_listener() needs to be fixed
      in order to get proper SO_REUSEPORT behavior.
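      For context, this is the ordinary userspace pattern whose behavior the
      fix restores: several IPv6 listeners bound to the same port with
      SO_REUSEPORT so the kernel can spread incoming connections across them.
      The snippet is illustrative setup code, not the kernel-side fix:

          #include <arpa/inet.h>
          #include <netinet/in.h>
          #include <stdint.h>
          #include <string.h>
          #include <sys/socket.h>
          #include <unistd.h>

          /* Each worker calls this; all of them bind the same port and the
           * kernel picks one listener per incoming connection. */
          static int reuseport_listener6(uint16_t port)
          {
              struct sockaddr_in6 addr;
              int one = 1;
              int fd = socket(AF_INET6, SOCK_STREAM, 0);

              if (fd < 0)
                  return -1;
              if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one)) < 0)
                  goto err;

              memset(&addr, 0, sizeof(addr));
              addr.sin6_family = AF_INET6;
              addr.sin6_addr   = in6addr_any;
              addr.sin6_port   = htons(port);

              if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                  goto err;
              if (listen(fd, 128) < 0)
                  goto err;
              return fd;
          err:
              close(fd);
              return -1;
          }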
      
      Fixes: 3b24d854 ("tcp/dccp: do not touch listener sk_refcnt under synflood")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Maciej Żenczykowski <maze@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 09 Apr 2016: 33 commits
  3. 08 Apr 2016: 3 commits
    • Merge branch 'bpf-tracepoints' · f8711655
      David S. Miller committed
      Alexei Starovoitov says:
      
      ====================
      allow bpf attach to tracepoints
      
      Hi Steven, Peter,
      
      v1->v2: addressed Peter's comments:
      - fixed wording in patch 1, added ack
      - refactored 2nd patch into 3:
      2/10 remove unused __perf_addr macro which frees up
      an argument in perf_trace_buf_submit
      3/10 split perf_trace_buf_prepare into alloc and update parts, so that bpf
      programs don't have to pay performance penalty for update of struct trace_entry
      which is not going to be accessed by bpf
      4/10 actual addition of bpf filter to perf tracepoint handler is now trivial
      and bpf prog can be used as proper filter of tracepoints
      
      v1 cover:
      last time we discussed bpf+tracepoints it was a year ago [1] and the reason
      we didn't proceed with that approach was that it would have exposed the
      arguments arg1, arg2 of the trace_xx(arg1, arg2) call to the bpf program,
      which was considered an unnecessary extension of the abi. Back then I wanted
      to avoid the cost of the buffer alloc and field assign part in all
      of the tracepoints, but it looks like when optimized the cost is acceptable.
      So this new approach doesn't expose any new abi to the bpf program.
      The program looks at tracepoint fields after they were copied
      by perf_trace_xx() and described in /sys/kernel/debug/tracing/events/xxx/format.
      We made a tool [2] that takes arguments from /sys/.../format and works as:
      $ tplist.py -v random:urandom_read
          int got_bits;
          int pool_left;
          int input_left;
      Then these fields can be copy-pasted into bpf program like:
      struct urandom_read {
          __u64 hidden_pad;
          int got_bits;
          int pool_left;
          int input_left;
      };
      and the program can use it:
      SEC("tracepoint/random/urandom_read")
      int bpf_prog(struct urandom_read *ctx)
      {
          return ctx->pool_left > 0 ? 1 : 0;
      }
      This way the program can access tracepoint fields faster than an
      equivalent bpf+kprobe program, which is the main goal of these patches
      (a short attach sketch follows this cover letter).
      
      Patch 1-4 are simple changes in perf core side, please review.
      I'd like to take the whole set via net-next tree, since the rest of
      the patches might conflict with other bpf work going on in net-next
      and we want to avoid cross-tree merge conflicts.
      Alternatively we can put patches 1-4 into both tip and net-next.
      
      Patch 9 is an example of access to tracepoint fields from bpf prog.
      Patch 10 is a micro benchmark for bpf+kprobe vs bpf+tracepoint.
      
      Note that for actual tracing tools the user doesn't need to
      run tplist.py and copy-paste fields manually; the tools do it
      automatically. For example, the argdist tool [3] can be used as:
      $ argdist -H 't:block:block_rq_complete():u32:nr_sector'
      where 'nr_sector' is name of tracepoint field taken from
      /sys/kernel/debug/tracing/events/block/block_rq_complete/format
      and appropriate bpf program is generated on the fly.
      
      [1] http://thread.gmane.org/gmane.linux.kernel.api/8127/focus=8165
      [2] https://github.com/iovisor/bcc/blob/master/tools/tplist.py
      [3] https://github.com/iovisor/bcc/blob/master/tools/argdist.py
      ====================
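      A minimal sketch of how a compiled program such as the one above gets
      attached to a tracepoint, assuming the perf_event_open(PERF_TYPE_TRACEPOINT)
      + PERF_EVENT_IOC_SET_BPF path this series builds on; the BPF_PROG_LOAD
      step that produces prog_fd is omitted:

          #include <linux/perf_event.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/ioctl.h>
          #include <sys/syscall.h>
          #include <unistd.h>

          /* Read the numeric id of a tracepoint from tracefs. */
          static int tracepoint_id(const char *category, const char *name)
          {
              char path[256];
              FILE *f;
              int id = -1;

              snprintf(path, sizeof(path),
                       "/sys/kernel/debug/tracing/events/%s/%s/id",
                       category, name);
              f = fopen(path, "r");
              if (!f)
                  return -1;
              if (fscanf(f, "%d", &id) != 1)
                  id = -1;
              fclose(f);
              return id;
          }

          /* Attach an already-loaded bpf program (prog_fd) to category:name. */
          int attach_bpf_to_tracepoint(int prog_fd, const char *category,
                                       const char *name)
          {
              struct perf_event_attr attr;
              int id = tracepoint_id(category, name);
              int efd;

              if (id < 0 || prog_fd < 0)
                  return -1;

              memset(&attr, 0, sizeof(attr));
              attr.type          = PERF_TYPE_TRACEPOINT;
              attr.size          = sizeof(attr);
              attr.config        = id;        /* tracepoint id from tracefs */
              attr.sample_period = 1;
              attr.wakeup_events = 1;

              efd = syscall(__NR_perf_event_open, &attr, -1 /* all pids */,
                            0 /* cpu 0 */, -1 /* no group */, 0);
              if (efd < 0)
                  return -1;

              if (ioctl(efd, PERF_EVENT_IOC_SET_BPF, prog_fd) ||
                  ioctl(efd, PERF_EVENT_IOC_ENABLE, 0)) {
                  close(efd);
                  return -1;
              }
              return efd;
          }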
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • samples/bpf: add tracepoint vs kprobe performance tests · e3edfdec
      Alexei Starovoitov committed
      the first microbenchmark does
      fd=open("/proc/self/comm");
      for() {
        write(fd, "test");
      }
      and on 4 cpus in parallel:
                                            writes per sec
      base (no tracepoints, no kprobes)         930k
      with kprobe at __set_task_comm()          420k
      with tracepoint at task:task_rename       730k
      
      For the kprobe case, the full bpf program manually fetches oldcomm and newcomm via bpf_probe_read.
      For the tracepoint case, the bpf program does nothing, since the arguments are copied by the tracepoint.
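      A standalone version of the first write loop, roughly as one of the four
      parallel workers would run it (iteration count is illustrative):

          /* Rewrites the task comm in a tight loop so task:task_rename (or a
           * kprobe on __set_task_comm) fires on every iteration. */
          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
              long i, iters = 10 * 1000 * 1000;   /* illustrative count */
              int fd = open("/proc/self/comm", O_WRONLY);

              if (fd < 0) {
                  perror("open /proc/self/comm");
                  return 1;
              }
              for (i = 0; i < iters; i++)
                  if (write(fd, "test", 4) < 0)
                      break;
              printf("%ld writes done\n", i);
              close(fd);
              return 0;
          }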
      
      2nd microbenchmark does:
      fd=open("/dev/urandom");
      for() {
        read(fd, buf);
      }
      and on 4 cpus in parallel:
                                             reads per sec
      base (no tracepoints, no kprobes)         300k
      with kprobe at urandom_read()             279k
      with tracepoint at random:urandom_read    290k
      
      bpf progs attached to kprobe and tracepoint are noop.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • samples/bpf: tracepoint example · 3c9b1644
      Alexei Starovoitov committed
      modify offwaketime to work with the sched/sched_switch tracepoint
      instead of a kprobe on finish_task_switch
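      A minimal sketch of the tracepoint-side hook, with the argument struct
      laid out from /sys/kernel/debug/tracing/events/sched/sched_switch/format
      following the convention shown in the series cover letter; the actual
      offwaketime map logic is omitted:

          #include <uapi/linux/bpf.h>
          #include "bpf_helpers.h"    /* SEC() macro as used in samples/bpf */

          /* Leading pad covers the common_* fields; the rest mirrors the
           * sched_switch format file. */
          struct sched_switch_args {
              unsigned long long pad;
              char prev_comm[16];
              int prev_pid;
              int prev_prio;
              long long prev_state;
              char next_comm[16];
              int next_pid;
              int next_prio;
          };

          SEC("tracepoint/sched/sched_switch")
          int oncpu(struct sched_switch_args *ctx)
          {
              /* offwaketime would timestamp ctx->prev_pid going off-CPU here
               * and account the off-time when ctx->next_pid runs again. */
              return 0;
          }

          char _license[] SEC("license") = "GPL";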
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>