1. 23 Mar 2016, 18 commits
  2. 22 Mar 2016, 6 commits
  3. 18 Mar 2016, 16 commits
    • fs crypto: move per-file encryption from f2fs tree to fs/crypto · 0b81d077
      Committed by Jaegeuk Kim
      This patch adds the renamed functions moved from the f2fs crypto files.
      
      1. definitions for per-file encryption used by ext4 and f2fs.
      
      2. crypto.c for encrypt/decrypt functions
       a. IO preparation:
        - fscrypt_get_ctx / fscrypt_release_ctx
       b. before IOs:
        - fscrypt_encrypt_page
        - fscrypt_decrypt_page
        - fscrypt_zeroout_range
       c. after IOs:
        - fscrypt_decrypt_bio_pages
        - fscrypt_pullback_bio_page
        - fscrypt_restore_control_page
      
      3. policy.c supporting context management.
       a. For ioctls:
        - fscrypt_process_policy
        - fscrypt_get_policy
       b. For context permission checks:
        - fscrypt_has_permitted_context
        - fscrypt_inherit_context
      
      4. keyinfo.c to handle permissions
        - fscrypt_get_encryption_info
        - fscrypt_free_encryption_info
      
      5. fname.c to support filename encryption
       a. general wrapper functions
        - fscrypt_fname_disk_to_usr
        - fscrypt_fname_usr_to_disk
        - fscrypt_setup_filename
        - fscrypt_free_filename
      
       b. specific filename handling functions
        - fscrypt_fname_alloc_buffer
        - fscrypt_fname_free_buffer
      
      6. Makefile and Kconfig
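
      As an editor's illustration (not part of the patch; the exact
      signatures at the time of the move may differ), a filesystem write
      path using the bounce-page helpers above might look roughly like:

          static int example_writepage(struct inode *inode, struct page *page)
          {
                  struct page *ciphertext_page;

                  /* Encrypt the plaintext into a freshly allocated bounce page. */
                  ciphertext_page = fscrypt_encrypt_page(inode, page);
                  if (IS_ERR(ciphertext_page))
                          return PTR_ERR(ciphertext_page);

                  /* ... submit ciphertext_page for write I/O ... */

                  /* After the I/O completes, free the bounce page. */
                  fscrypt_restore_control_page(ciphertext_page);
                  return 0;
          }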
      
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Signed-off-by: Michael Halcrow <mhalcrow@google.com>
      Signed-off-by: Ildar Muslukhov <ildarm@google.com>
      Signed-off-by: Uday Savagaonkar <savagaon@google.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
    • param: convert some "on"/"off" users to strtobool · 4cc7ecb7
      Committed by Kees Cook
      This changes several users of manual "on"/"off" parsing to use
      strtobool.
      
      Some side-effects:
      - these uses will now parse y/n/1/0 meaningfully too
      - the early_param uses will now bubble up parse errors
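
      As a hedged illustration (editor's sketch; 'foo' and its handler are
      hypothetical, not from the patch), a typical conversion looks like:

          static bool foo_enabled;

          /* Before: only the exact strings "on" and "off" were accepted,
           * and anything else was silently ignored. */
          static int __init setup_foo(char *str)
          {
                  if (!strcmp(str, "on"))
                          foo_enabled = true;
                  else if (!strcmp(str, "off"))
                          foo_enabled = false;
                  return 0;
          }

          /* After: strtobool() also accepts y/Y/1 and n/N/0, and its
           * error code is returned so early_param can report bad input. */
          static int __init setup_foo(char *str)
          {
                  return strtobool(str, &foo_enabled);
          }
          early_param("foo", setup_foo);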
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Amitkumar Karwar <akarwar@marvell.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Joe Perches <joe@perches.com>
      Cc: Kalle Valo <kvalo@codeaurora.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nishant Sarmukadam <nishants@marvell.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Steve French <sfrench@samba.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: move strtobool() to kstrtobool() · ef951599
      Committed by Kees Cook
      Create the kstrtobool_from_user() helper and move strtobool() logic into
      the new kstrtobool() (matching all the other kstrto* functions).
      Provides an inline wrapper for existing strtobool() callers.
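
      A hedged sketch (the write handler below is hypothetical, not from
      the patch) of the new helper in a user-buffer path:

          static bool enabled;

          static ssize_t enable_write(struct file *file, const char __user *ubuf,
                                      size_t count, loff_t *ppos)
          {
                  /* Parses y/Y/1/n/N/0 straight from the user buffer. */
                  int ret = kstrtobool_from_user(ubuf, count, &enabled);

                  return ret ? ret : count;
          }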
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Joe Perches <joe@perches.com>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Amitkumar Karwar <akarwar@marvell.com>
      Cc: Nishant Sarmukadam <nishants@marvell.com>
      Cc: Kalle Valo <kvalo@codeaurora.org>
      Cc: Steve French <sfrench@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/unaligned: force inlining of byteswap operations · e3bde956
      Committed by Denys Vlasenko
      Sometimes gcc mysteriously doesn't inline
      very small functions we expect to be inlined. See
      
          https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122
      
      With this .config:
      http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os,
      the following functions get deinlined many times.
      Examples of disassembly:
      
      <get_unaligned_be16> (24 copies, 108 calls):
             66 8b 07                mov    (%rdi),%ax
             55                      push   %rbp
             48 89 e5                mov    %rsp,%rbp
             86 e0                   xchg   %ah,%al
             5d                      pop    %rbp
             c3                      retq
      
      <get_unaligned_be32> (25 copies, 181 calls):
             8b 07                   mov    (%rdi),%eax
             55                      push   %rbp
             48 89 e5                mov    %rsp,%rbp
             0f c8                   bswap  %eax
             5d                      pop    %rbp
             c3                      retq
      
      <get_unaligned_be64> (23 copies, 94 calls):
             48 8b 07                mov    (%rdi),%rax
             55                      push   %rbp
             48 89 e5                mov    %rsp,%rbp
             48 0f c8                bswap  %rax
             5d                      pop    %rbp
             c3                      retq
      
      <put_unaligned_be16> (2 copies, 11 calls):
             89 f8                   mov    %edi,%eax
             55                      push   %rbp
             c1 ef 08                shr    $0x8,%edi
             c1 e0 08                shl    $0x8,%eax
             09 c7                   or     %eax,%edi
             48 89 e5                mov    %rsp,%rbp
             66 89 3e                mov    %di,(%rsi)
      
      <put_unaligned_be32> (8 copies, 43 calls):
             55                      push   %rbp
             0f cf                   bswap  %edi
             89 3e                   mov    %edi,(%rsi)
             48 89 e5                mov    %rsp,%rbp
             5d                      pop    %rbp
             c3                      retq
      
      <put_unaligned_be64> (26 copies, 157 calls):
             55                      push   %rbp
             48 0f cf                bswap  %rdi
             48 89 3e                mov    %rdi,(%rsi)
             48 89 e5                mov    %rsp,%rbp
             5d                      pop    %rbp
             c3                      retq
      
      This patch fixes this via s/inline/__always_inline/.
      
      It only affects arches with efficient unaligned access insns, such as x86.
      (arches which lack such ops do not include linux/unaligned/access_ok.h)
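
      The change itself is mechanical; a representative hunk (reconstructed
      here for illustration, the exact function bodies may differ) is:

          -static inline u16 get_unaligned_be16(const void *p)
          +static __always_inline u16 get_unaligned_be16(const void *p)
           {
                  return be16_to_cpup((__be16 *)p);
           }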
      
      Code size decrease after the patch is ~8.5k:
      
          text     data      bss       dec     hex filename
      92197848 20826112 36417536 149441496 8e84bd8 vmlinux
      92189231 20826144 36417536 149432911 8e82a4f vmlinux6_unaligned_be_after
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/string: introduce match_string() helper · 56b06081
      Committed by Andy Shevchenko
      Occasionally we have to search for an occurrence of a string in an array
      of strings.  Make a simple helper for that purpose.
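
      A hedged usage sketch (the 'fan_modes' table and 'arg' are
      hypothetical, not from the patch):

          static const char * const fan_modes[] = { "auto", "manual", "off" };

          /* match_string() returns the matching index, or -EINVAL. */
          int idx = match_string(fan_modes, ARRAY_SIZE(fan_modes), arg);
          if (idx < 0)
                  return idx;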
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Heikki Krogerus <heikki.krogerus@linux.intel.com>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Sebastian Reichel <sre@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • radix-tree,shmem: introduce radix_tree_iter_next() · 7165092f
      Committed by Matthew Wilcox
      shmem likes to occasionally drop the lock, schedule, then reacquire the
      lock and continue with the iteration from the last place it left off.
      This is currently done with a pretty ugly goto.  Introduce
      radix_tree_iter_next() and use it throughout shmem.c.
      
      [koct9i@gmail.com: fix bug in radix_tree_iter_next() for tagged iteration]
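
      The resulting pattern (editor's sketch modeled on the shmem
      conversion; simplified) replaces the goto with:

          rcu_read_lock();
          radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
                  /* ... process the entry at 'slot' ... */
                  if (need_resched()) {
                          cond_resched_rcu();
                          /* Resume cleanly at the next index afterwards. */
                          slot = radix_tree_iter_next(&iter);
                  }
          }
          rcu_read_unlock();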
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • radix_tree: add support for multi-order entries · e6145236
      Committed by Matthew Wilcox
      With huge pages, it is convenient to have the radix tree be able to
      return an entry that covers multiple indices.  Previous attempts to deal
      with the problem have involved inserting N duplicate entries, which is a
      waste of memory and leads to problems trying to handle aliased tags, or
      probing the tree multiple times to find alternative entries which might
      cover the requested index.
      
      This approach inserts one canonical entry into the tree for a given
      range of indices, and may also insert other entries in order to ensure
      that lookups find the canonical entry.
      
      This solution only tolerates inserting powers of two that are greater
      than the fanout of the tree.  If we wish to expand the radix tree's
      abilities to support large-ish pages that are smaller than the fanout
      at the penultimate level of the tree, then we would need to add one
      more step in lookup to ensure that any sibling nodes in the final
      level of the tree are dereferenced and we return the canonical entry
      that they reference.
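
      To illustrate the layout (editor's sketch, not a public API):

          /*
           * An order-2 entry inserted at index 4 occupies slots 4..7 of
           * its node: slot 4 holds the canonical entry, and slots 5..7
           * hold sibling pointers back to slot 4, so a lookup of, say,
           * index 6 is redirected to and returns the canonical entry.
           */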
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • radix-tree: add an explicit include of bitops.h · f67c07f0
      Committed by Matthew Wilcox
      The radix-tree header uses the __ffs() function, which is defined in
      bitops.h.  The current kernel headers implicitly include bitops.h, but
      the userspace test harness does not.
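
      The fix is a one-line addition to the header (the placement of the
      include shown here is illustrative):

          --- a/include/linux/radix-tree.h
          +++ b/include/linux/radix-tree.h
           #include <linux/preempt.h>
           #include <linux/types.h>
          +#include <linux/bitops.h>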
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/list_bl.h: use bool instead of int for boolean functions · 26a247fd
      Committed by Chen Gang
      hlist_bl_unhashed() and hlist_bl_empty() are both boolean functions, so
      return bool instead of int.
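
      The change is a one-word signature fix; for hlist_bl_unhashed()
      (body reconstructed from the header for illustration):

          -static inline int hlist_bl_unhashed(const struct hlist_bl_node *h)
          +static inline bool hlist_bl_unhashed(const struct hlist_bl_node *h)
           {
                  return !h->pprev;
           }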
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fix Christoph's email addresses · 93e205a7
      Committed by Christoph Lameter
      There are various email addresses for me throughout the kernel.  Use the
      one that will always be valid.
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • timer: convert timer_slack_ns from unsigned long to u64 · da8b44d5
      Committed by John Stultz
      This patchset introduces a /proc/<pid>/timerslack_ns interface which
      would allow controlling processes to be able to set the timerslack value
      on other processes in order to save power by avoiding wakeups (Something
      Android currently does via out-of-tree patches).
      
      The first patch tries to fix the internal timer_slack_ns usage which was
      defined as a long, which limits the slack range to ~4 seconds on 32bit
      systems.  It converts it to a u64, which provides the same basically
      unlimited slack (500 years) on both 32bit and 64bit machines.
      
      The second patch introduces the /proc/<pid>/timerslack_ns interface
      which allows the full 64bit slack range for a task to be read or set on
      both 32bit and 64bit machines.
      
      With these two patches, on a 32bit machine, after setting the slack on
      bash to 10 seconds:
      
      $ time sleep 1
      
      real    0m10.747s
      user    0m0.001s
      sys     0m0.005s
      
      The first patch is a little ugly, since I had to chase the slack delta
      arguments through a number of functions converting them to u64s.  Let me
      know if it makes sense to break that up more or not.
      
      Other than that things are fairly straightforward.
      
      This patch (of 2):
      
      The timer_slack_ns value in the task struct is currently an unsigned
      long.  This means that for 32bit applications, the maximum slack is
      just over 4 seconds.  However, on 64bit machines, it's much larger
      (~500 years).

      This disparity could make application development confusing, so this
      patch converts the task's timer_slack_ns (as well as the
      default_slack) to a u64.  This means both 32bit and 64bit systems
      have the same effective internal slack range.
      
      Now, the existing ABI via PR_GET_TIMERSLACK and PR_SET_TIMERSLACK
      specifies the interface as an unsigned long, so we preserve that
      limitation on 32bit systems: SET_TIMERSLACK can only set the slack to
      an unsigned long value, and GET_TIMERSLACK will return ULONG_MAX if
      the slack is actually larger than what can be stored in an unsigned
      long.
      
      This patch also modifies the hrtimer functions which specified the
      slack delta as an unsigned long.
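
      For context, the pre-existing prctl ABI that the patch preserves can
      be probed from userspace like this (editor's sketch):

          #include <stdio.h>
          #include <sys/prctl.h>

          int main(void)
          {
                  /* The slack argument is an unsigned long, so on 32bit
                   * the ceiling is just under 4.3 seconds; the internal
                   * u64 removes that limit for in-kernel users. */
                  prctl(PR_SET_TIMERSLACK, 50000000UL);   /* 50ms */
                  printf("slack: %d ns\n", prctl(PR_GET_TIMERSLACK));
                  return 0;
          }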
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Oren Laadan <orenl@cellrox.com>
      Cc: Ruchi Kandoi <kandoiruchi@google.com>
      Cc: Rom Lemarchand <romlem@android.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Android Kernel Team <kernel-team@android.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • thp: rewrite freeze_page()/unfreeze_page() with generic rmap walkers · fec89c10
      Committed by Kirill A. Shutemov
      The freeze_page() and unfreeze_page() helpers have evolved into rather
      complex beasts.  It would be nice to cut the complexity of this code.
      
      This patch rewrites freeze_page() using standard try_to_unmap().
      unfreeze_page() is rewritten with remove_migration_ptes().
      
      The result is much simpler.
      
      But the new variant is somewhat slower for PTE-mapped THPs.  The
      current helpers iterate over the VMAs the compound page is mapped to,
      and then over the PTEs within each VMA.  The new helpers iterate over
      each small page, then over each VMA the small page is mapped to, and
      only then find the relevant PTE.

      We have a shortcut for PMD-mapped THPs: we directly install migration
      entries on PMD split.

      I don't think the slowdown is critical, considering how much simpler
      the result is and that split_huge_page() is quite rare nowadays.  It
      only happens due to memory pressure or migration.
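
      The rewritten freeze_page() then reduces to little more than a loop
      over try_to_unmap() (editor's simplification of the patch; error
      handling and assertions elided):

          static void freeze_page(struct page *page)
          {
                  enum ttu_flags ttu_flags = TTU_MIGRATION | TTU_IGNORE_MLOCK |
                                             TTU_IGNORE_ACCESS | TTU_RMAP_LOCKED;
                  int i, ret;

                  /* TTU_SPLIT_HUGE_PMD is only needed once, on the head page. */
                  ret = try_to_unmap(page, ttu_flags | TTU_SPLIT_HUGE_PMD);
                  for (i = 1; !ret && i < HPAGE_PMD_NR; i++)
                          ret = try_to_unmap(page + i, ttu_flags);
          }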
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: make remove_migration_ptes() beyond mm/migration.c · e388466d
      Committed by Kirill A. Shutemov
      Make remove_migration_ptes() available to be used in split_huge_page().
      
      A new parameter 'locked' is added: as with try_to_unmap(), we need a
      way to indicate that the caller holds the rmap lock.

      We also shouldn't try to mlock() pte-mapped huge pages: pte-mapped
      THP pages are never mlocked.
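
      The resulting prototype (editor's reconstruction, for reference):

          /* 'locked' tells the rmap walk that the caller already holds
           * the relevant rmap lock, mirroring TTU_RMAP_LOCKED. */
          void remove_migration_ptes(struct page *old, struct page *new,
                                     bool locked);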
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rmap: extend try_to_unmap() to be usable by split_huge_page() · 2a52bcbc
      Committed by Kirill A. Shutemov
      Add support for two ttu_flags:
      
        - TTU_SPLIT_HUGE_PMD would split PMD if it's there, before trying to
          unmap page;
      
        - TTU_RMAP_LOCKED indicates that the caller holds the relevant rmap lock;
      
      Also, change rwc->done to !page_mapcount() instead of !page_mapped().
      try_to_unmap() works on pte level, so we are really interested in the
      mappedness of this small page rather than of the compound page it's a
      part of.
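
      The done-callback change looks roughly like this (editor's
      reconstruction; names may differ slightly from the patch):

          static int page_mapcount_is_zero(struct page *page)
          {
                  return !page_mapcount(page);
          }

          struct rmap_walk_control rwc = {
                  .rmap_one = try_to_unmap_one,
                  .arg      = (void *)flags,
                  .done     = page_mapcount_is_zero, /* was !page_mapped() */
          };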
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rmap: introduce rmap_walk_locked() · b9773199
      Committed by Kirill A. Shutemov
      This patchset rewrites freeze_page() and unfreeze_page() using
      try_to_unmap() and remove_migration_ptes().  Result is much simpler, but
      somewhat slower.
      
      Migration 8GiB worth of PMD-mapped THP:
      
        Baseline	20.21 +/- 0.393
        Patched	20.73 +/- 0.082
        Slowdown	1.03x
      
      It's 3% slower, compared to 14% in v1.  I don't think it should be a
      stopper.

      Splitting of PTE-mapped pages slowed down more, but this is not a
      common case.

      Migration 8GiB worth of PTE-mapped THP:

        Baseline	20.39 +/- 0.225
        Patched	22.43 +/- 0.496
        Slowdown	1.10x
      
      rmap_walk_locked() is the same as rmap_walk(), but the caller takes care
      of the relevant rmap lock.
      
      This is preparation for switching THP splitting from custom rmap walk in
      freeze_page()/unfreeze_page() to the generic one.
      
      There is no support for KSM pages for now: it is not clear which lock
      is implied.
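
      The new entry point mirrors rmap_walk() (prototype reconstructed for
      reference):

          /* Same walk as rmap_walk(), but the caller already holds the
           * relevant rmap lock: the anon_vma lock for anonymous pages or
           * i_mmap_rwsem for file pages.  KSM pages are not supported. */
          int rmap_walk_locked(struct page *page, struct rmap_walk_control *rwc);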
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove VM_FAULT_MINOR · 0e8fb931
      Committed by Jan Kara
      The define has a comment from Nick Piggin from 2007:
      
       /* For backwards compat. Remove me quickly. */
      
      I guess 9 years should not be too hurried a sense of 'quickly', even
      by kernel measures.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>