1. 17 January 2015 (2 commits)
    • Merge tag 'nfs-for-3.19-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs · a2a32cd1
      Committed by Linus Torvalds
      Pull NFS client bugfixes from Trond Myklebust:
       "Highlights include:
      
         - Stable fix for an NFSv3/lockd race
         - Fixes for several NFSv4.1 client id trunking bugs
         - Remove an incorrect test when checking for delegated opens"
      
      * tag 'nfs-for-3.19-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
        NFSv4: Remove incorrect check in can_open_delegated()
        NFS: Ignore transport protocol when detecting server trunking
        NFSv4/v4.1: Verify the client owner id during trunking detection
        NFSv4: Cache the NFSv4/v4.1 client owner_id in the struct nfs_client
        NFSv4.1: Fix client id trunking on Linux
        LOCKD: Fix a race when initialising nlmsvc_timeout
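      The trunking fixes above revolve around how the client decides that two
      server addresses belong to the same NFSv4.1 server and should share a
      single client id and lease. A minimal sketch of the scenario, with
      placeholder addresses and export paths:

        # Sketch only: 192.0.2.10 and 192.0.2.11 stand in for two addresses of
        # the same multi-homed NFSv4.1 server, /export for its export path.
        # During the second mount the client performs trunking detection
        # (comparing the ids the server returns to EXCHANGE_ID) and, if they
        # match, reuses the existing lease instead of creating a new client id.
        mount -t nfs -o vers=4.1 192.0.2.10:/export /mnt/a
        mount -t nfs -o vers=4.1 192.0.2.11:/export /mnt/b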
    • Merge tag 'trace-fixes-v3.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace · 23aa4b41
      Committed by Linus Torvalds
      Merge tag 'trace-fixes-v3.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
      
      Pull ftrace fixes from Steven Rostedt:
       "This holds a few fixes to the ftrace infrastructure as well as the
        mixture of function graph tracing and kprobes.
      
        When jprobes and function graph tracing are enabled at the same time
        it will crash the system:
      
            # modprobe jprobe_example
            # echo function_graph > /sys/kernel/debug/tracing/current_tracer
      
        After the first fork (jprobe_example probes it), the system will
        crash.
      
        This is due to the way jprobes copies the stack frame and does not do
        a normal function return.  This messes up the function graph tracing
        accounting, which hijacks the return address from the stack and
        replaces it with a hook function.  It saves the return addresses in a
        separate stack so that it can put back the correct return address when
        done.  But because the jprobe functions do not do a normal return,
        their stack addresses are not put back until the function they probe
        is called, which means that the probed function gets the return
        address of the jprobe handler instead of its own.
      
        The simple fix here was to disable function graph tracing while the
        jprobe handler is being called.
      
        While debugging this I found two minor bugs in function graph
        tracing.
      
        The first was about the function graph tracer sharing its function
        hash with the function tracer (they both get filtered by the same
        input).  Changing set_ftrace_filter would not sync the function
        records after a change if the function tracer was disabled but the
        function graph tracer was enabled.  This was due to the update only
        checking one of the ops instead of the shared ops to see if they were
        enabled and should perform the sync.  This broke the ftrace accounting
        and triggered an ftrace_bug(), disabling ftrace until a reboot.
      
        The second was that the check to update records only checked one of
        the filter hashes.  It needs to test both the "filter" and "notrace"
        hashes.  The "filter" hash determines what functions to trace whereas
        the "notrace" hash determines what functions not to trace (trace all
        but these).  Both hashes need to be passed to the update code to find
        out what change is being done during the update.  This also broke the
        ftrace record accounting and triggered an ftrace_bug().
      
        This patch set also includes two more fixes that were reported
        separately from the kprobe issue.
      
        One was that init_ftrace_syscalls() was called twice at boot up.  This
        is not a major bug, but that call performed a rather large kmalloc
        (NR_syscalls * sizeof(*syscalls_metadata)).  The second call turned
        the first allocation into a memory leak, wasting memory.
      
        The other fix is a regression caused by an update in the v3.19 merge
        window.  The move to enable events early put the enabling before
        PID 1 was created.  The syscall events require setting the
        TIF_SYSCALL_TRACEPOINT flag for all tasks, but for_each_process_thread()
        does not include the swapper task (PID 0), so the enabling ended up
        being a nop.

        A suggested fix was to have init_task get its flag set, but I didn't
        really want to mess with PID 0 for this minor bug.  Instead I disable
        and re-enable events again at early_initcall(), where they used to be
        enabled.  This also handles any other event that might have its own
        reg function that could break at early boot up"
      
      * tag 'trace-fixes-v3.19-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
        tracing: Fix enabling of syscall events on the command line
        tracing: Remove extra call to init_ftrace_syscalls()
        ftrace/jprobes/x86: Fix conflict between jprobes and function graph tracing
        ftrace: Check both notrace and filter for old hash
        ftrace: Fix updating of filters for shared global_ops filters
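      The two filter-hash bugs above are easiest to picture from the tracefs
      side: set_ftrace_filter feeds the hash shared by the function and
      function_graph tracers, while set_ftrace_notrace feeds the separate
      "do not trace" hash. A quick illustration (a sketch; it assumes debugfs
      is mounted at /sys/kernel/debug and the function names are only examples):

        cd /sys/kernel/debug/tracing

        # One filter input serves both tracers: this limits what the function
        # tracer *and* the function graph tracer will record.
        echo 'vfs_*' > set_ftrace_filter

        # The notrace hash is the complement: anything matching here is
        # skipped even if it also matches the filter above.
        echo 'vfs_fstatat' > set_ftrace_notrace

        echo function_graph > current_tracer   # the graph tracer honours both hashes
        echo function > current_tracer         # so does the plain function tracer
        echo nop > current_tracer              # stop tracing when done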
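      The command-line regression above is the user-visible one: syscall trace
      events requested at boot silently did nothing because TIF_SYSCALL_TRACEPOINT
      never got set on any real task. A quick way to exercise that path (a
      sketch; the event names are examples and vary by architecture):

        # Boot with syscall events enabled from the kernel command line, e.g.
        #     trace_event=syscalls:sys_enter_open,syscalls:sys_exit_open
        # then verify after boot that the events really are live:
        cd /sys/kernel/debug/tracing
        cat events/syscalls/sys_enter_open/enable   # should read 1
        head trace                                  # should already show sys_enter_open entries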
  2. 16 January 2015 (5 commits)
  3. 15 January 2015 (21 commits)
  4. 14 January 2015 (12 commits)