1. 06 Nov, 2015 15 commits
  2. 04 Nov, 2015 17 commits
    • audit: make audit_log_common_recv_msg() a void function · 233a6866
      Paul Moore authored
      It always returns zero, and no one checks the return value.
      Signed-off-by: Paul Moore <pmoore@redhat.com>
    • audit: removing unused variable · c5ea6efd
      Saurabh Sengar authored
      The variable rc is not required: it is only carried, unchanged, to the
      return statement, and the function always returns 0.
      Signed-off-by: Saurabh Sengar <saurabh.truth@gmail.com>
      [PM: fixed spelling errors in description]
      Signed-off-by: Paul Moore <pmoore@redhat.com>
    • audit: fix comment block whitespace · 725131ef
      Scott Matheina authored
      Signed-off-by: Scott Matheina <scott@matheina.com>
      [PM: fixed subject line]
      Signed-off-by: Paul Moore <pmoore@redhat.com>
    • audit: audit_tree_match can be boolean · 6f1b5d7a
      Yaowei Bai authored
      This patch makes audit_tree_match() return bool to improve readability,
      since the function only ever returns one or zero.
      
      No functional change.
      Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
      [PM: tweaked the subject line]
      Signed-off-by: Paul Moore <pmoore@redhat.com>
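      The conversion in this family of "can be boolean" commits is mechanical;
      a minimal sketch of the pattern, using a hypothetical simplified function
      rather than the actual audit_tree_match() body:

          #include <stdbool.h>

          /* Before: the int return hides that only 0 and 1 are possible. */
          static int tree_match_int(const void *a, const void *b)
          {
                  return a == b ? 1 : 0;
          }

          /* After: bool documents the contract in the signature itself. */
          static bool tree_match_bool(const void *a, const void *b)
          {
                  return a == b;
          }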
    • audit: audit_string_contains_control can be boolean · 9fcf836b
      Yaowei Bai authored
      This patch makes audit_string_contains_control() return bool to improve
      readability, since the function only ever returns one or zero.
      Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
      [PM: tweaked subject line]
      Signed-off-by: Paul Moore <pmoore@redhat.com>
    • audit: try harder to send to auditd upon netlink failure · 32a1dbae
      Richard Guy Briggs authored
      There are several reports of the kernel losing contact with auditd when
      it is, in fact, still running.  When this happens, kernel syslogs show:
      	"audit: *NO* daemon at audit_pid=<pid>"
      although auditd is still running, and is apparently happy, listening on
      the netlink socket. The pid in the "*NO* daemon" message matches the pid
      of the running auditd process.  Restarting auditd solves this.
      
      The problem appears to happen randomly, and doesn't seem to be strongly
      correlated to the rate of audit events being logged.  It occurs fairly
      regularly (every few days), but has not yet been reproduced on demand.
      
      On production kernels, BUG_ON() is a no-op, so any error will trigger
      this.
      
      Commit 34eab0a7 ("audit: prevent an older auditd shutdown from
      orphaning a newer auditd startup") eliminates one possible cause.  This
      isn't the case here, since the PID in the error message and the PID of
      the running auditd match.
      
      The primary expected cause of error here is -ECONNREFUSED when the audit
      daemon goes away, when netlink_getsockbyportid() can't find the auditd
      portid entry in the netlink audit table (or there is no receive
      function).  If -EPERM is returned, that situation isn't likely to be
      resolved in a timely fashion without administrator intervention.  In
      both cases, reset the audit_pid.  This does not rule out a race
      condition.  SELinux is expected to return zero since this isn't an INET
      or INET6 socket.  Other LSMs may have other return codes.  Log the error
      code for better diagnosis in the future.
      
      In the case of -ENOMEM, the situation could be temporary, based on local
      or general availability of buffers.  -EAGAIN should never happen since
      the netlink audit (kernel) socket is set to MAX_SCHEDULE_TIMEOUT.
      -ERESTARTSYS and -EINTR are not expected since this kernel thread is not
      expected to receive signals.  In these cases (or any other unexpected
      ones for now), report the error and re-schedule the thread, retrying up
      to 5 times.
      
      v2:
      	Removed BUG_ON().
      	Moved comma in pr_*() statements.
      	Removed audit_strerror() text.
      Reported-by: Vipin Rathor <v.rathor@gmail.com>
      Reported-by: <ctcard@hotmail.com>
      Signed-off-by: Richard Guy Briggs <rgb@redhat.com>
      [PM: applied rgb's fixup patch to correct audit_log_lost() format issues]
      Signed-off-by: Paul Moore <pmoore@redhat.com>
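      The retry policy above can be read as a small classifier over errno
      values. A hedged userspace sketch of that policy (auditd_is_gone() and
      MAX_RETRIES are illustrative names, not the kernel's):

          #include <errno.h>
          #include <stdbool.h>
          #include <stdio.h>

          #define MAX_RETRIES 5

          /* -ECONNREFUSED: auditd's portid entry is gone (daemon exited);
           * -EPERM: unlikely to resolve without administrator intervention.
           * Everything else is treated as possibly transient. */
          static bool auditd_is_gone(int err)
          {
                  return err == -ECONNREFUSED || err == -EPERM;
          }

          int main(void)
          {
                  int errs[] = { -ENOMEM, -ENOMEM, -ECONNREFUSED };
                  int retries = 0;

                  for (unsigned int i = 0; i < sizeof(errs) / sizeof(errs[0]); i++) {
                          if (auditd_is_gone(errs[i])) {
                                  printf("err=%d: reset audit_pid\n", errs[i]);
                                  break;
                          }
                          if (++retries > MAX_RETRIES) {
                                  printf("giving up after %d retries\n", MAX_RETRIES);
                                  break;
                          }
                          printf("err=%d: transient, retry %d\n", errs[i], retries);
                  }
                  return 0;
          }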
    • tracing: Put back comma for empty fields in boot string parsing · 43ed3843
      Steven Rostedt (Red Hat) authored
      Both early_enable_events() and apply_trace_boot_options() parse a boot
      string that may get parsed later on. They both use strsep() which converts a
      comma into a nul character. To still allow the boot string to be parsed
      again the same way, the nul character gets converted back to a comma after
      the token is processed.
      
      The problem is that these two functions check for an empty parameter (two
      commas in a row ",,") and continue the loop if the parameter is empty, but
      fail to put the comma back. In this case, the second parsing will end at
      this blank field and not process the fields after it.
      
      In most cases users should not have an empty field, but if it's going to
      be checked, the code might as well be correct.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
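      The strsep() behavior at issue, in a minimal userspace sketch (not the
      tracing code itself): strsep() overwrites each delimiter with a NUL, so
      a second pass over the same buffer stops early unless the comma is put
      back, including for empty fields:

          #include <stdio.h>
          #include <string.h>

          int main(void)
          {
                  char buf[] = "opt1,,opt2";
                  char *str = buf, *tok;

                  while ((tok = strsep(&str, ",")) != NULL) {
                          if (*tok != '\0')
                                  printf("token: %s\n", tok);
                          /* Restore the comma strsep() NUL'ed out, even for
                           * an empty field, or a later parse ends there. */
                          if (str)
                                  *(str - 1) = ',';
                  }
                  printf("buffer after parsing: %s\n", buf); /* intact again */
                  return 0;
          }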
    • tracing: Apply tracer specific options from kernel command line. · a4d1e688
      Jiaxing Wang authored
      Currently, the trace_options parameter is only applied in
      tracer_alloc_buffers() when global_trace.current_trace is nop_trace,
      so a tracer specific option will not be applied even when the specific
      tracer is also enabled from kernel command line. For example, the
      'func_stack_trace' option can't be enabled with the following kernel
      parameter:
      
        ftrace=function ftrace_filter=kfree trace_options=func_stack_trace
      
      We can enable tracer specific options by simply applying the options
      again if the specific tracer is also supplied on the command line and
      started in register_tracer().
      
      To allow trace_boot_options_buf to be parsed again, a comma and a space
      are put back if they were replaced by strsep() and strstrip()
      respectively.
      
      Also mark register_tracer() as __init so it can access __init data;
      register_tracer() is in fact only called from __init code.
      
      Link: http://lkml.kernel.org/r/1446599669-9294-1-git-send-email-hello.wjx@gmail.com
      Signed-off-by: Jiaxing Wang <hello.wjx@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • atomic: remove all traces of READ_ONCE_CTRL() and atomic*_read_ctrl() · 105ff3cb
      Linus Torvalds authored
      This seems to be a mis-reading of how alpha memory ordering works, and
      is not backed up by the alpha architecture manual.  The helper functions
      don't do anything special on any other architectures, and the arguments
      that support them being safe on other architectures also argue that they
      are safe on alpha.
      
      Basically, the "control dependency" is between a previous read and a
      subsequent write that is dependent on the value read.  Even if the
      subsequent write is actually done speculatively, there is no way that
      such a speculative write could be made visible to other cpu's until it
      has been committed, which requires validating the speculation.
      
      Note that most weakly ordered architectures (very much including alpha)
      do not guarantee any ordering relationship between two loads that depend
      on each other on a control dependency:
      
          read A
          if (val == 1)
              read B
      
      because the conditional may be predicted, and the "read B" may be
      speculatively moved up to before reading the value A.  So we require the
      user to insert a smp_rmb() between the two accesses to be correct:
      
          read A;
          if (A == 1)
              smp_rmb()
              read B
      
      Alpha is further special in that it can break that ordering even if the
      *address* of B depends on the read of A, because the cacheline that is
      read later may be stale unless you have a memory barrier in between the
      pointer read and the read of the value behind a pointer:
      
          read ptr
          read offset(ptr)
      
      whereas all other weakly ordered architectures guarantee that the data
      dependency (as opposed to just a control dependency) will order the two
      accesses.  As a result, alpha needs a "smp_read_barrier_depends()" in
      between those two reads for them to be ordered.
      
      The control dependency that "READ_ONCE_CTRL()" and "atomic_read_ctrl()"
      had was a control dependency to a subsequent *write*, however, and
      nobody can finalize such a subsequent write without having actually done
      the read.  And were you to write such a value to a "stale" cacheline
      (the way the unordered reads came to be), that would seem to lose the
      write entirely.
      
      So the things that make alpha able to re-order reads even more
      aggressively than other weak architectures do not seem to be relevant
      for a subsequent write.  Alpha memory ordering may be strange, but
      there's no real indication that it is *that* strange.
      
      Also, the alpha architecture reference manual very explicitly talks
      about the definition of "Dependence Constraints" in section 5.6.1.7,
      where a preceding read dominates a subsequent write.
      
      Such a dependence constraint admittedly does not impose a BEFORE (alpha
      architecture term for globally visible ordering), but it does guarantee
      that there can be no "causal loop".  I don't see how you could avoid
      such a loop if another cpu could see the stored value and then impact
      the value of the first read.  Put another way: the read and the write
      could not be seen as being out of order wrt other cpus.
      
      So I do not see how these "x_ctrl()" functions can currently be necessary.
      
      I may have to eat my words at some point, but in the absence of clear
      proof that alpha actually needs this, or indeed even an explanation of
      how alpha could _possibly_ need it, I do not believe these functions are
      called for.
      
      And if it turns out that alpha really _does_ need a barrier for this
      case, that barrier still should not be "smp_read_barrier_depends()".
      We'd have to make up some new speciality barrier just for alpha, along
      with the documentation for why it really is necessary.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul E McKenney <paulmck@us.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
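      The load-to-load case above, which does need a barrier, can be rendered
      in portable C11 atomics (an illustration of the rule only, not kernel
      code; the acquire fence plays the role of smp_rmb()):

          #include <stdatomic.h>

          extern _Atomic int A, B;

          int reader(void)
          {
                  /* A control dependency alone does not order two loads:
                   * the branch can be predicted and the load of B hoisted. */
                  int b = 0;

                  if (atomic_load_explicit(&A, memory_order_relaxed) == 1) {
                          atomic_thread_fence(memory_order_acquire);
                          b = atomic_load_explicit(&B, memory_order_relaxed);
                  }
                  return b;
          }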
    • ring_buffer: Remove unneeded smp_wmb() before wakeup of reader benchmark · 54ed1444
      Steven Rostedt (Red Hat) authored
      wake_up_process() has a memory barrier before doing anything, thus adding a
      memory barrier before calling it is redundant.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Allow dumping traces without tracking trace started cpus · 919cd979
      Sasha Levin authored
      We don't init iter->started when dumping the ftrace buffer, and there's no
      real need to do so - so allow skipping that check if the iter doesn't have
      an initialized ->started cpumask.
      
      Link: http://lkml.kernel.org/r/1441385156-27279-1-git-send-email-sasha.levin@oracle.com
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring_buffer: Fix more races when terminating the producer in the benchmark · f47cb66d
      Petr Mladek authored
      The commit b44754d8 ("ring_buffer: Allow to exit the ring
      buffer benchmark immediately") added a hack into ring_buffer_producer()
      that set @kill_test when kthread_should_stop() returned true. It improved
      the situation a lot. It stopped the kthread in most cases because
      the producer spent most of its time in the patched while loop.
      
      But there are still a few possible races when kthread_should_stop()
      becomes true outside of that loop. Then we do not set @kill_test and
      some other checks pass.
      
      This patch adds a better fix. It renames @test_kill/TEST_KILL() into
      a better descriptive @test_error/TEST_ERROR(). Also it introduces
      break_test() function that checks for both @test_error and
      kthread_should_stop().
      
      The new function is used in the producer when the check for @test_error
      is not enough. It is not used in the consumer because its state
      is manipulated by the producer via the "reader_finish" variable.
      
      Also we add a missing check into ring_buffer_producer_thread()
      between setting TASK_INTERRUPTIBLE and calling schedule_timeout().
      Otherwise, we might miss a wakeup from kthread_stop().
      
      Link: http://lkml.kernel.org/r/1441629518-32712-3-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
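      A plausible shape for the new helper, based on the description above
      (userspace sketch; kthread_should_stop() is stubbed here so the
      structure is visible outside kernel context):

          #include <stdbool.h>

          static bool test_error;      /* set when the benchmark hits an error */
          static bool stop_requested;  /* stand-in for the kthread stop flag */

          static bool kthread_should_stop(void)
          {
                  return stop_requested;
          }

          /* One helper checked by every producer exit path, so a stop
           * request arriving outside the main loop is not missed. */
          static bool break_test(void)
          {
                  return test_error || kthread_should_stop();
          }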
    • ring_buffer: Do not complete benchmark reader too early · 8b46ff69
      Petr Mladek authored
      It seems that complete(&read_done) might be called too early
      in some situations.
      
      1st scenario:
      -------------
      
      CPU0					CPU1
      
      ring_buffer_producer_thread()
        wake_up_process(consumer);
        wait_for_completion(&read_start);
      
      					ring_buffer_consumer_thread()
      					  complete(&read_start);
      
        ring_buffer_producer()
          # producing data in
          # the do-while cycle
      
      					  ring_buffer_consumer();
      					    # reading data
      					    # got error
      					    # set kill_test = 1;
      					    set_current_state(
      						TASK_INTERRUPTIBLE);
      					    if (reader_finish)  # false
      					    schedule();
      
          # producer still in the middle of
          # do-while cycle
          if (consumer && !(cnt % wakeup_interval))
            wake_up_process(consumer);
      
      					    # spurious wakeup
      					    while (!reader_finish &&
      						   !kill_test)
      					    # leaving because
      					    # kill_test == 1
      					    reader_finish = 0;
      					    complete(&read_done);
      
      1st BANG: We might access uninitialized "read_done" if this is the
      	  first round.
      
          # producer finally leaving
          # the do-while cycle because kill_test == 1;
      
          if (consumer) {
            reader_finish = 1;
            wake_up_process(consumer);
            wait_for_completion(&read_done);
      
      2nd BANG: This will never complete because consumer already did
      	  the completion.
      
      2nd scenario:
      -------------
      
      CPU0					CPU1
      
      ring_buffer_producer_thread()
        wake_up_process(consumer);
        wait_for_completion(&read_start);
      
      					ring_buffer_consumer_thread()
      					  complete(&read_start);
      
        ring_buffer_producer()
          # CPU3 removes the module	  <--- difference from
          # and stops producer          <--- the 1st scenario
          if (kthread_should_stop())
            kill_test = 1;
      
      					  ring_buffer_consumer();
      					    while (!reader_finish &&
      						   !kill_test)
      					    # kill_test == 1 => we never go
      					    # into the top level while()
      					    reader_finish = 0;
      					    complete(&read_done);
      
          # producer still in the middle of
          # do-while cycle
          if (consumer && !(cnt % wakeup_interval))
            wake_up_process(consumer);
      
      					    # spurious wakeup
      					    while (!reader_finish &&
      						   !kill_test)
      					    # leaving because kill_test == 1
      					    reader_finish = 0;
      					    complete(&read_done);
      
      BANG: We are in the same "bang" situations as in the 1st scenario.
      
      Root of the problem:
      --------------------
      
      ring_buffer_consumer() must complete "read_done" only when "reader_finish"
      variable is set. It must not be skipped due to other conditions.
      
      Note that we still must keep the check for "reader_finish" in a loop
      because there might be spurious wakeups as described in the
      above scenarios.
      
      Solution:
      ----------
      
      The top level cycle in ring_buffer_consumer() will finish only when
      "reader_finish" is set. The data are read in a "while-do" cycle
      so that they are not read after an error (kill_test == 1)
      or a spurious wake up.
      
      In addition, "reader_finish" is manipulated by the producer thread.
      Therefore we add READ_ONCE() to make sure that the fresh value is
      read in each cycle. Also we add the corresponding barrier
      to synchronize the sleep check.
      
      Next we set the state back to TASK_RUNNING for the situation where we
      did not sleep.
      
      Purely out of paranoia, we initialize both completions statically.
      This is safer, in case there are other races that we are unaware of.
      
      As a side effect we could remove the memory barrier from
      ring_buffer_producer_thread(). IMHO, this was the reason for
      the barrier. ring_buffer_reset() uses spin locks that should
      provide the needed memory barrier for using the buffer.
      
      Link: http://lkml.kernel.org/r/1441629518-32712-2-git-send-email-pmladek@suse.com
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
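      Putting the solution together, the fixed consumer can be sketched as
      follows (a kernel-context sketch simplified from the description above,
      not the exact patch; read_events() is a hypothetical helper):

          /* complete(&read_done) is reachable only after reader_finish
           * has been observed set, never on error alone. */
          do {
                  if (!READ_ONCE(test_error))
                          read_events();  /* skipped after an error */

                  set_current_state(TASK_INTERRUPTIBLE);
                  if (READ_ONCE(reader_finish))
                          break;
                  schedule();
          } while (1);
          __set_current_state(TASK_RUNNING);  /* in case we did not sleep */

          reader_finish = 0;
          complete(&read_done);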
    • tracing: Remove redundant TP_ARGS redefining · fb8c2293
      Dmitry Safonov authored
      TP_ARGS is not used anywhere in trace.h or trace_entries.h.
      As a first step I left just the #undef TP_ARGS and got no errors, so
      remove it entirely.
      
      Link: http://lkml.kernel.org/r/1446576560-14085-1-git-send-email-0x7f454c46@gmail.com
      Signed-off-by: Dmitry Safonov <0x7f454c46@gmail.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Rename max_stack_lock to stack_trace_max_lock · d332736d
      Steven Rostedt (Red Hat) authored
      Now that max_stack_lock is a global variable, it requires a naming
      convention that is unlikely to collide. Rename it to the same naming
      convention that the other stack_trace variables have.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Allow arch-specific stack tracer · bb99d8cc
      AKASHI Takahiro authored
      A stack frame may be used in different ways depending on the cpu
      architecture. Thus it is not always appropriate to slurp the stack
      contents, as the current check_stack() does, in order to calculate a
      stack index (height) at a given function call. At least not on arm64.
      In addition, there is a possibility that we will mistakenly detect a
      stale stack frame which has not been overwritten.
      
      This patch makes check_stack() a weak function so that an arch-specific
      version can be implemented later.
      
      Link: http://lkml.kernel.org/r/1446182741-31019-5-git-send-email-takahiro.akashi@linaro.org
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
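      The override mechanism is plain weak linkage; a sketch of the shape of
      the change (two separate files, signature simplified to void(void) for
      illustration, not the actual tracer prototype):

          /* kernel/trace/trace_stack.c (sketch): the generic version is
           * now a weak symbol, used when no arch overrides it. */
          void __attribute__((weak)) check_stack(void)
          {
                  /* generic "slurp the stack contents" implementation */
          }

          /* arch/arm64/... (sketch, separate file): an architecture
           * overrides it simply by providing a strong definition. */
          void check_stack(void)
          {
                  /* arch-specific stack walk */
          }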
    • bpf, verifier: annotate verbose printer with __printf · 1d056d9c
      Daniel Borkmann authored
      The verbose() printer dumps the verifier state to user space, so let gcc
      check calls to verbose() for (future) format-string errors. A build with
      W=1 correctly suggests: function might be possible candidate for
      'gnu_printf' format attribute [-Wsuggest-attribute=format].
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
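      In the kernel, __printf() expands to gcc's format attribute; a
      self-contained illustration of what the annotation buys:

          #include <stdarg.h>
          #include <stdio.h>

          #define __printf(a, b) __attribute__((format(printf, a, b)))

          /* Argument 1 is the format string, variadic args start at 2. */
          static __printf(1, 2) void verbose(const char *fmt, ...)
          {
                  va_list args;

                  va_start(args, fmt);
                  vfprintf(stderr, fmt, args);
                  va_end(args);
          }

          int main(void)
          {
                  verbose("insn %d: invalid\n", 42);
                  /* verbose("insn %s\n", 42); now warns at compile time */
                  return 0;
          }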
  3. 03 Nov, 2015 8 commits
    • bpf: add support for persistent maps/progs · b2197755
      Daniel Borkmann authored
      This work adds support for "persistent" eBPF maps/programs. The term
      "persistent" here means that maps/programs have a facility that lets
      them survive process termination. This is desired by various eBPF
      subsystem users.
      
      Just to name one example: tc classifier/action. Whenever tc parses
      the ELF object, extracts and loads maps/progs into the kernel, these
      file descriptors will be out of reach after the tc instance exits.
      So a subsequent tc invocation won't be able to access/relocate on this
      resource, and therefore maps cannot easily be shared, f.e. between the
      ingress and egress networking data path.
      
      The current workaround is that Unix domain sockets (UDS) need to be
      instrumented in order to pass the created eBPF map/program file
      descriptors to a third party management daemon through UDS' socket
      passing facility. This makes it a bit complicated to deploy shared
      eBPF maps or programs (programs f.e. for tail calls) among various
      processes.
      
      We've been brainstorming on how we could tackle this issue, and various
      approaches have been tried out so far; they can be read up on in the
      reference below.
      
      The architecture we eventually ended up with is a minimal file system
      that can hold map/prog objects. The file system is a per mount namespace
      singleton, and the default mount point is /sys/fs/bpf/. Any subsequent
      mounts within a given namespace will point to the same instance. The
      file system allows for creating a user-defined directory structure.
      The objects for maps/progs are created/fetched through bpf(2) with
      two new commands (BPF_OBJ_PIN/BPF_OBJ_GET). I.e. a bpf file descriptor
      along with a pathname is being passed to bpf(2) that in turn creates
      (we call it eBPF object pinning) the file system nodes. Only the pathname
      is being passed to bpf(2) for getting a new BPF file descriptor to an
      existing node. The user can use that to access maps and progs later on,
      through bpf(2). Removal of file system nodes is being managed through
      normal VFS functions such as unlink(2), etc. The file system code is
      kept to a very minimum and can be further extended later on.
      
      The next step I'm working on is to add dump eBPF map/prog commands
      to bpf(2), so that a specification from a given file descriptor can
      be retrieved. This can be used by things like CRIU but also applications
      can inspect the meta data after calling BPF_OBJ_GET.
      
      Big thanks also to Alexei and Hannes, who contributed significantly to
      the design discussion that eventually led us to this architecture.
      
      Reference: https://lkml.org/lkml/2015/10/15/925
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
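      From userspace, the two new commands look roughly as follows (a sketch
      assuming a kernel with this patch, uapi headers that define BPF_OBJ_PIN
      and BPF_OBJ_GET, and bpffs mounted at /sys/fs/bpf; error handling
      omitted):

          #include <linux/bpf.h>
          #include <string.h>
          #include <sys/syscall.h>
          #include <unistd.h>

          static int bpf_obj_pin(int fd, const char *pathname)
          {
                  union bpf_attr attr;

                  memset(&attr, 0, sizeof(attr));
                  attr.bpf_fd = fd;
                  attr.pathname = (__u64)(unsigned long)pathname;
                  return syscall(__NR_bpf, BPF_OBJ_PIN, &attr, sizeof(attr));
          }

          static int bpf_obj_get(const char *pathname)
          {
                  union bpf_attr attr;

                  memset(&attr, 0, sizeof(attr));
                  attr.pathname = (__u64)(unsigned long)pathname;
                  return syscall(__NR_bpf, BPF_OBJ_GET, &attr, sizeof(attr));
          }

      A tc instance could pin a loaded map under, say, /sys/fs/bpf/tc/ and a
      later invocation would re-acquire a descriptor with bpf_obj_get() on
      the same path, with no UDS fd-passing daemon involved.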
    • bpf: consolidate bpf_prog_put{, _rcu} dismantle paths · e9d8afa9
      Daniel Borkmann authored
      We currently have duplicated cleanup code in the bpf_prog_put() and
      bpf_prog_put_rcu() cleanup paths. Back then we decided that it was
      not worth making it a common helper called by both, but with the
      recent addition of resource charging, we could have avoided the fix
      in commit ac00737f ("bpf: Need to call bpf_prog_uncharge_memlock
      from bpf_prog_put") if we had had only a single, common path.
      We can simplify further by assigning aux->prog only once, during
      allocation.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: align and clean bpf_{map,prog}_get helpers · c2101297
      Daniel Borkmann authored
      Add a bpf_map_get() function that we're going to use later on, and
      align/clean the remaining helpers a bit so that they are more
      consistent:
      
        - __bpf_map_get() and __bpf_prog_get() that both work on the fd
          struct, check whether the descriptor is eBPF and return the
          pointer to the map/prog stored in the private data.
      
          Also, we can return f.file->private_data directly, the function
          signature is enough of a documentation already.
      
        - bpf_map_get() and bpf_prog_get() that both work on u32 user fd,
          call their respective __bpf_map_get()/__bpf_prog_get() variants,
          and take a reference.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: abstract anon_inode_getfd invocations · aa79781b
      Daniel Borkmann authored
      Since we're going to use anon_inode_getfd() invocations in more than just
      the current places, make a helper function for both, so that we only need
      to pass a map/prog pointer to the helper itself in order to get a fd. The
      new helpers are called bpf_map_new_fd() and bpf_prog_new_fd().
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
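      The helpers are thin wrappers; roughly (a sketch, with fops names and
      flags assumed to match the surrounding anon-inode users):

          int bpf_map_new_fd(struct bpf_map *map)
          {
                  return anon_inode_getfd("bpf-map", &bpf_map_fops, map,
                                          O_RDWR | O_CLOEXEC);
          }

          int bpf_prog_new_fd(struct bpf_prog *prog)
          {
                  return anon_inode_getfd("bpf-prog", &bpf_prog_fops, prog,
                                          O_RDWR | O_CLOEXEC);
          }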
    • bpf: convert hashtab lock to raw lock · ac00881f
      Yang Shi authored
      When running the bpf samples on an -rt kernel, the following warning is
      reported:
      
      BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
      in_atomic(): 1, irqs_disabled(): 128, pid: 477, name: ping
      Preemption disabled at:[<ffff80000017db58>] kprobe_perf_func+0x30/0x228
      
      CPU: 3 PID: 477 Comm: ping Not tainted 4.1.10-rt8 #4
      Hardware name: Freescale Layerscape 2085a RDB Board (DT)
      Call trace:
      [<ffff80000008a5b0>] dump_backtrace+0x0/0x128
      [<ffff80000008a6f8>] show_stack+0x20/0x30
      [<ffff8000007da90c>] dump_stack+0x7c/0xa0
      [<ffff8000000e4830>] ___might_sleep+0x188/0x1a0
      [<ffff8000007e2200>] rt_spin_lock+0x28/0x40
      [<ffff80000018bf9c>] htab_map_update_elem+0x124/0x320
      [<ffff80000018c718>] bpf_map_update_elem+0x40/0x58
      [<ffff800000187658>] __bpf_prog_run+0xd48/0x1640
      [<ffff80000017ca6c>] trace_call_bpf+0x8c/0x100
      [<ffff80000017db58>] kprobe_perf_func+0x30/0x228
      [<ffff80000017dd84>] kprobe_dispatcher+0x34/0x58
      [<ffff8000007e399c>] kprobe_handler+0x114/0x250
      [<ffff8000007e3bf4>] kprobe_breakpoint_handler+0x1c/0x30
      [<ffff800000085b80>] brk_handler+0x88/0x98
      [<ffff8000000822f0>] do_debug_exception+0x50/0xb8
      Exception stack(0xffff808349687460 to 0xffff808349687580)
      7460: 4ca2b600 ffff8083 4a3a7000 ffff8083 49687620 ffff8083 0069c5f8 ffff8000
      7480: 00000001 00000000 007e0628 ffff8000 496874b0 ffff8083 007e1de8 ffff8000
      74a0: 496874d0 ffff8083 0008e04c ffff8000 00000001 00000000 4ca2b600 ffff8083
      74c0: 00ba2e80 ffff8000 49687528 ffff8083 49687510 ffff8083 000e5c70 ffff8000
      74e0: 00c22348 ffff8000 00000000 ffff8083 49687510 ffff8083 000e5c74 ffff8000
      7500: 4ca2b600 ffff8083 49401800 ffff8083 00000001 00000000 00000000 00000000
      7520: 496874d0 ffff8083 00000000 00000000 00000000 00000000 00000000 00000000
      7540: 2f2e2d2c 33323130 00000000 00000000 4c944500 ffff8083 00000000 00000000
      7560: 00000000 00000000 008751e0 ffff8000 00000001 00000000 124e2d1d 00107b77
      
      Convert the hashtab lock to a raw lock to avoid such warnings.
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
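      The conversion itself is small; in diff form it amounts to something
      like (a sketch, field and call sites abbreviated):

          -       spinlock_t lock;        /* becomes a sleeping rtmutex on RT */
          +       raw_spinlock_t lock;    /* stays a true spinlock on RT */

          -       spin_lock_irqsave(&htab->lock, flags);
          +       raw_spin_lock_irqsave(&htab->lock, flags);

          -       spin_unlock_irqrestore(&htab->lock, flags);
          +       raw_spin_unlock_irqrestore(&htab->lock, flags);

      On PREEMPT_RT, a plain spinlock_t may sleep, which is illegal in the
      atomic kprobe/perf path shown in the backtrace; a raw_spinlock_t never
      sleeps.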
    • tracing: ftrace_event_is_function() can return boolean · c6650b2e
      Yaowei Bai authored
      Make ftrace_event_is_function() return bool to improve readability,
      since the function only ever returns one or zero.
      
      No functional change.
      
      Link: http://lkml.kernel.org/r/1443537816-5788-9-git-send-email-bywxiaobai@163.com
      Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: is_legal_op() can return boolean · 907bff91
      Yaowei Bai authored
      Make is_legal_op() return bool to improve readability, since the
      function only ever returns one or zero.
      
      No functional change.
      
      Link: http://lkml.kernel.org/r/1443537816-5788-8-git-send-email-bywxiaobai@163.com
      Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: rb_event_is_commit() can return boolean · cdb2a0a9
      Yaowei Bai authored
      Make rb_event_is_commit() return bool to improve readability, since
      the function only ever returns one or zero.
      
      No functional change.
      
      Link: http://lkml.kernel.org/r/1443537816-5788-7-git-send-email-bywxiaobai@163.com
      Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>