1. 19 Jan 2018 (3 commits)
  2. 18 Jan 2018 (1 commit)
  3. 17 Jan 2018 (5 commits)
  4. 16 Jan 2018 (1 commit)
    • bpftool: recognize BPF_PROG_TYPE_CGROUP_DEVICE programs · 45e5e121
      Committed by Roman Gushchin
      Bpftool doesn't recognize BPF_PROG_TYPE_CGROUP_DEVICE programs,
      so the prog show command prints the numeric type value:
      
      $ bpftool prog show
      1: type 15  name bpf_prog1  tag ac9f93dbfd6d9b74
      	loaded_at Jan 15/07:58  uid 0
      	xlated 96B  jited 105B  memlock 4096B
      
      This patch defines the corresponding textual representation (see the
      mapping sketch after this entry):
      
      $ bpftool prog show
      1: cgroup_device  name bpf_prog1  tag ac9f93dbfd6d9b74
      	loaded_at Jan 15/07:58  uid 0
      	xlated 96B  jited 105B  memlock 4096B
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Cc: Quentin Monnet <quentin.monnet@netronome.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      45e5e121
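
      As a reference, a minimal sketch of how such a type-to-name mapping can
      be expressed. The table and helper below are illustrative assumptions
      in bpftool's style, not its exact source:

        #include <linux/bpf.h>

        #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))

        /* Table indexed by program type; unknown or unnamed types fall
         * back to the numeric value in the caller. */
        static const char * const prog_type_name[] = {
                [BPF_PROG_TYPE_UNSPEC]        = "unspec",
                [BPF_PROG_TYPE_SOCKET_FILTER] = "socket_filter",
                /* ... other types elided ... */
                [BPF_PROG_TYPE_CGROUP_DEVICE] = "cgroup_device",
        };

        static const char *prog_type_str(__u32 type)
        {
                if (type < ARRAY_SIZE(prog_type_name) && prog_type_name[type])
                        return prog_type_name[type];
                return NULL; /* caller prints "type %u" instead */
        }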
  5. 15 Jan 2018 (1 commit)
    • bpf: offload: add map offload infrastructure · a3884572
      Committed by Jakub Kicinski
      BPF map offload follows a similar path to program offload.  At creation
      time users may specify the ifindex of the device on which they want to
      create the map.  The map will be validated by the kernel's
      .map_alloc_check callback and the device driver will be called for the
      actual allocation.  The map will have an empty set of operations
      associated with it (save for the alloc and free callbacks).  The real
      device callbacks are kept in map->offload->dev_ops because they
      have slightly different signatures.  Map operations are called in
      process context, so the driver may communicate with the HW freely and
      may sleep (msleep(), wait(), etc.).
      
      Map alloc and free callbacks are muxed via the existing .ndo_bpf, and
      are always called with the rtnl lock held.  Maps and programs are
      guaranteed to be destroyed before .ndo_uninit (i.e. before
      unregister_netdev() returns).  Map callbacks are invoked with
      bpf_devs_lock *read* locked; drivers must take care of exclusive
      locking if necessary.
      
      All offload-specific branches are marked with unlikely() (through
      bpf_map_is_dev_bound()), given that the branch penalty will be
      negligible compared to the IO anyway, and we don't want to penalize
      the SW path unnecessarily.  (A user-space sketch of creating an
      offloaded map follows this entry.)
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      a3884572
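
      A minimal user-space sketch of requesting an offloaded map by setting
      map_ifindex in the creation attributes. The device name, sizes and map
      type are illustrative assumptions; the rest is the plain bpf(2)
      syscall (requires kernel headers that already carry map_ifindex):

        #include <string.h>
        #include <unistd.h>
        #include <sys/syscall.h>
        #include <net/if.h>
        #include <linux/bpf.h>

        static int create_offloaded_map(const char *dev)
        {
                union bpf_attr attr;

                memset(&attr, 0, sizeof(attr));
                attr.map_type    = BPF_MAP_TYPE_HASH;
                attr.key_size    = 4;
                attr.value_size  = 8;
                attr.max_entries = 64;
                /* A non-zero ifindex asks the kernel to create the map
                 * on the device; .map_alloc_check validates it and the
                 * driver's offload callbacks take over from there. */
                attr.map_ifindex = if_nametoindex(dev);

                return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
        }

      Note that if_nametoindex() returns 0 for an unknown device, which
      would silently create an ordinary host map, so a real caller should
      check it before issuing the syscall.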
  6. 14 Jan 2018 (1 commit)
  7. 13 Jan 2018 (1 commit)
    • selftests/x86: Add test_vsyscall · 352909b4
      Committed by Andy Lutomirski
      This tests that the vsyscall entries do what they're expected to do.
      It also confirms that attempts to read the vsyscall page behave as
      expected.
      
      If changes are made to the vsyscall code or its memory map handling,
      running this test in all three of vsyscall=none, vsyscall=emulate,
      and vsyscall=native is helpful (see the sketch after this entry).
      
      (Because it's easy, this also compares the vsyscall results to their
       vDSO equivalents.)
      
      Note to KAISER backporters: please test this under all three
      vsyscall modes.  Also, in the emulate and native modes, make sure
      that test_vsyscall_64 agrees with the command line or config
      option as to which mode you're in.  It's quite easy to mess up
      the kernel such that native mode accidentally emulates
      or vice versa.
      
      Greg, etc: please backport this to all your Meltdown-patched
      kernels.  It'll help make sure the patches didn't regress
      vsyscalls.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/2b9c5a174c1d60fd7774461d518aa75598b1d8fd.1515719552.git.luto@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      352909b4
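
      For context, a minimal sketch of exercising one legacy vsyscall entry
      directly. The fixed gettimeofday address is the architectural x86-64
      vsyscall slot; the rest is an illustrative assumption (the real
      selftest also installs a SIGSEGV handler so vsyscall=none survives):

        #include <stdio.h>
        #include <sys/time.h>

        typedef long (*vgtod_t)(struct timeval *tv, struct timezone *tz);

        int main(void)
        {
                /* Legacy fixed-address vsyscall entry for gettimeofday();
                 * faults when the kernel is booted with vsyscall=none. */
                vgtod_t vgtod = (vgtod_t)0xffffffffff600000UL;
                struct timeval tv;

                if (vgtod(&tv, NULL) == 0)
                        printf("vsyscall gettimeofday: %ld.%06ld\n",
                               (long)tv.tv_sec, (long)tv.tv_usec);
                return 0;
        }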
  8. 12 Jan 2018 (2 commits)
    • objtool: Allow alternatives to be ignored · 258c7605
      Committed by Josh Poimboeuf
      Getting objtool to understand retpolines is going to be a bit of a
      challenge.  For now, take advantage of the fact that retpolines are
      patched in with alternatives.  Just read the original (sane)
      non-alternative instruction, and ignore the patched-in retpoline.
      
      This allows objtool to understand the control flow *around* the
      retpoline, even if it can't yet follow what's inside.  This means the
      ORC unwinder will fail to unwind from inside a retpoline, but will work
      fine otherwise.  (A sketch of how the alternative is laid out follows
      this entry.)
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515707194-20531-3-git-send-email-dwmw@amazon.co.uk
      258c7605
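
      Roughly how a retpoline reaches the instruction stream through the
      alternatives mechanism. This is a simplified sketch using a fixed-%rax
      thunk, not the kernel's exact CALL_NOSPEC macro: the plain indirect
      call is the original instruction objtool reads, and the thunk call is
      the replacement patched in at boot:

        #include <asm/alternative.h>
        #include <asm/cpufeatures.h>

        static void call_indirect(void (*fn)(void))
        {
                /* The original indirect call stays in .text; the thunk
                 * call lives in .altinstr_replacement and is patched in
                 * when X86_FEATURE_RETPOLINE is set.  Objtool follows
                 * the original and ignores the replacement.  (Clobber
                 * list simplified for the sketch.) */
                asm volatile (ALTERNATIVE("call *%[func]",
                                          "call __x86_indirect_thunk_rax",
                                          X86_FEATURE_RETPOLINE)
                              : : [func] "a" (fn) : "memory");
        }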
    • objtool: Detect jumps to retpoline thunks · 39b73533
      Committed by Josh Poimboeuf
      A direct jump to a retpoline thunk is really an indirect jump in
      disguise.  Change the objtool instruction type accordingly.
      
      Objtool needs to know where indirect branches are so it can detect
      switch statement jump tables (see the retyping sketch after this
      entry).
      
      This fixes a bunch of warnings with CONFIG_RETPOLINE like:
      
        arch/x86/events/intel/uncore_nhmex.o: warning: objtool: nhmex_rbox_msr_enable_event()+0x44: sibling call from callable instruction with modified stack frame
        kernel/signal.o: warning: objtool: copy_siginfo_to_user()+0x91: sibling call from callable instruction with modified stack frame
        ...
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: gnomes@lxorguk.ukuu.org.uk
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: thomas.lendacky@amd.com
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Jiri Kosina <jikos@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Greg Kroah-Hartman <gregkh@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: https://lkml.kernel.org/r/1515707194-20531-2-git-send-email-dwmw@amazon.co.uk
      39b73533
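
      A sketch of the idea in objtool's decoder. The field and helper names
      follow objtool's style but are assumptions, not the literal patch:

        /* While resolving jump destinations: a direct jump whose
         * relocation target is a retpoline thunk is an indirect branch
         * in disguise, so retype it and leave it unresolved. */
        if (insn->type == INSN_JUMP_UNCONDITIONAL &&
            rela && rela->sym &&
            !strncmp(rela->sym->name, "__x86_indirect_thunk_", 21)) {
                insn->type = INSN_JUMP_DYNAMIC;
                continue;
        }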
  9. 11 Jan 2018 (1 commit)
    • bpf: arsh is not supported in 32 bit alu thus reject it · 7891a87e
      Committed by Daniel Borkmann
      The following snippet was throwing an 'unknown opcode cc' warning
      in the BPF interpreter:
      
        0: (18) r0 = 0x0
        2: (7b) *(u64 *)(r10 -16) = r0
        3: (cc) (u32) r0 s>>= (u32) r0
        4: (95) exit
      
      Although a number of JITs do support BPF_ALU | BPF_ARSH | BPF_{K,X}
      generation, not all of them do, and the interpreter does not either.
      We can leave the existing ones and implement it later in bpf-next for
      the remaining ones, but for the time being reject it properly in the
      verifier (see the sketch after this entry).
      
      Fixes: 17a52670 ("bpf: verifier (add verifier core)")
      Reported-by: syzbot+93c4904c5c70348a6890@syzkaller.appspotmail.com
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      7891a87e
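
      The verifier-side rejection boils down to a check like the following,
      a sketch patterned after check_alu_op() in kernel/bpf/verifier.c
      rather than the literal diff:

        /* In the shift-handling path: arithmetic right shift has no
         * 32-bit BPF_ALU encoding the interpreter implements, so refuse
         * to load such programs. */
        if (opcode == BPF_ARSH && BPF_CLASS(insn->code) != BPF_ALU64) {
                verbose(env, "BPF_ARSH not supported for 32 bit ALU\n");
                return -EINVAL;
        }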
  10. 08 Jan 2018 (3 commits)
  11. 07 Jan 2018 (2 commits)
  12. 04 Jan 2018 (3 commits)
  13. 03 Jan 2018 (2 commits)
  14. 31 Dec 2017 (5 commits)
  15. 30 Dec 2017 (2 commits)
  16. 28 Dec 2017 (4 commits)
  17. 24 Dec 2017 (2 commits)
    • x86/ldt: Make the LDT mapping RO · 9f5cb6b3
      Committed by Thomas Gleixner
      Now that the LDT mapping is in a known area when PAGE_TABLE_ISOLATION
      is enabled, it is a primary target for attacks if a user space
      interface fails to validate a write address correctly.  That can never
      happen, right?
      
      The SDM states:
      
          If the segment descriptors in the GDT or an LDT are placed in ROM, the
          processor can enter an indefinite loop if software or the processor
          attempts to update (write to) the ROM-based segment descriptors. To
          prevent this problem, set the accessed bits for all segment descriptors
          placed in a ROM. Also, remove operating-system or executive code that
          attempts to modify segment descriptors located in ROM.
      
      So it is a valid approach to set the ACCESSED bit when setting up the
      LDT entry and to map the table RO (see the sketch after this entry).
      Fix up the selftest so it can handle that new mode.  Remove the manual
      ACCESSED bit setter in set_tls_desc() as this is now pointless.
      Folds in a patch from Peter Zijlstra.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9f5cb6b3
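
      The heart of the change is forcing the accessed bit whenever a
      descriptor is built, so the CPU never needs to write it back into the
      now read-only table. A simplified sketch patterned after fill_ldt() in
      arch/x86/include/asm/desc.h, with the other field assignments elided:

        static inline void fill_ldt(struct desc_struct *desc,
                                    const struct user_desc *info)
        {
                /* ... limit, base, dpl, etc. elided ... */

                /* Force the ACCESSED bit.  With the LDT mapped RO the
                 * CPU would otherwise fault while trying to set it on
                 * first use, hitting the SDM's ROM-descriptor caveat
                 * quoted above. */
                desc->type |= 1;
        }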
    • bpf: fix stacksafe exploration when comparing states · fd05e57b
      Committed by Gianluca Borello
      Commit cc2b14d5 ("bpf: teach verifier to recognize zero initialized
      stack") introduced a very relaxed check when comparing stacks of different
      states, effectively returning a positive result in many cases where it
      shouldn't.
      
      This can create problems in cases such as the following C pseudocode:
      
      long var;
      long *x = bpf_map_lookup(...);
      if (!x)
              return;
      
      if (*x != 0xbeef)
              var = 0;
      else
              var = 1;
      
      /* This is the key part, calling a helper causes an explored state
       * to be saved with the information that "var" is on the stack as
       * STACK_ZERO, since the helper is first met by the verifier after
       * the "var = 0" assignment. This state will however be wrongly used
       * also for the "var = 1" case, so the verifier assumes "var" is always
       * 0 and will replace the NULL assignment with nops, because the
       * search pruning prevents it from exploring the faulty branch.
       */
      bpf_ktime_get_ns();
      
      if (var)
              *(long *)0 = 0xbeef;
      
      Fix the issue by making sure that the stack is fully explored before
      returning a positive comparison result (see the sketch after this
      entry).
      
      Also attach a couple of tests that highlight the bad behavior.  In the
      first test, without this fix, instructions 16 and 17 are replaced with
      nops instead of being rejected by the verifier.
      
      The second test, instead, allows a program to make a potentially illegal
      read from the stack.
      
      Fixes: cc2b14d5 ("bpf: teach verifier to recognize zero initialized stack")
      Signed-off-by: Gianluca Borello <g.borello@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      fd05e57b
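
      Conceptually, the fix makes the comparison walk every slot the old
      (explored) state tracked instead of bailing out early. A simplified
      sketch in the spirit of stacksafe() in kernel/bpf/verifier.c; the
      field names follow the verifier's state structures but the loop is
      illustrative:

        for (i = 0; i < old->allocated_stack; i++) {
                int spi = i / BPF_REG_SIZE;

                /* Slots the old state didn't care about can't make the
                 * states diverge. */
                if (old->stack[spi].slot_type[i % BPF_REG_SIZE] ==
                    STACK_INVALID)
                        continue;

                /* The old state tracked more stack than the current
                 * one: not equivalent, don't prune. */
                if (i >= cur->allocated_stack)
                        return false;

                if (old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
                    cur->stack[spi].slot_type[i % BPF_REG_SIZE])
                        return false;
        }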
  18. 23 Dec 2017 (1 commit)