1. 26 Sep 2016, 15 commits
    • Btrfs: fix memory leak in reading btree blocks · 2571e739
      Committed by Liu Bo
      A btree block can be read either via readahead or via an intentional
      read, and we can end up with a memory leak when the following happens:
      1) readahead starts to read block A but does not wait for read
         completion,
      2) btree_readpage_end_io_hook finds that block A is corrupted and
         needs to clear the uptodate bit on all of block A's pages,
      3) meanwhile an intentional read kicks in and checks the uptodate bit
         of block A's pages to decide which pages need to be read,
      4) pages that still have the uptodate bit during 3)'s check are not
         counted towards eb->io_pages, but 2) later clears that bit, so those
         pages have to be read after all; eb->io_pages is then smaller than
         the number of reads actually issued, and the block is leaked.
      
      This fixes the problem by taking all of the page locks first and only
      then checking the pages' uptodate bits (see the sketch after this
      entry).
      
         t1(readahead)                              t2(readahead endio)                                       t3(the following read)
      read_extent_buffer_pages                    end_bio_extent_readpage
        for pg in eb:                                for page 0,1,2 in eb:
            if pg is uptodate:                           btree_readpage_end_io_hook(pg)
                num_reads++                              if uptodate:
        eb->io_pages = num_reads                             SetPageUptodate(pg)              _______________
        for pg in eb:                                for page 3 in eb:                                     read_extent_buffer_pages
             if pg is NOT uptodate:                      btree_readpage_end_io_hook(pg)                       for pg in eb:
                 __extent_read_full_page(pg)                 sanity check reports something wrong                 if pg is uptodate:
                                                             clear_extent_buffer_uptodate(eb)                         num_reads++
                                                                 for pg in eb:                                eb->io_pages = num_reads
                                                                     ClearPageUptodate(page)  _______________
                                                                                                              for pg in eb:
                                                                                                                  if pg is NOT uptodate:
                                                                                                                      __extent_read_full_page(pg)
      
      So t3's eb->io_pages does not match the number of pages it is actually
      reading, and during endio(), atomic_dec_and_test(&eb->io_pages) ends up
      with a negative value, so the eb is never freed.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2571e739
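      A minimal sketch of the fixed ordering, in kernel-style C: take every
      page lock before inspecting the uptodate bits, so a concurrent endio
      cannot clear them between the count and the reads. This is illustrative
      only, not the actual fs/btrfs/extent_io.c code; submit_eb_page_read()
      is a made-up helper.

        static void read_eb_pages_sketch(struct extent_buffer *eb, int num_pages)
        {
            int i, num_reads = 0;

            /* 1) lock everything first */
            for (i = 0; i < num_pages; i++)
                lock_page(eb->pages[i]);

            /* 2) uptodate bits are now stable; count the pages we must read */
            for (i = 0; i < num_pages; i++)
                if (!PageUptodate(eb->pages[i]))
                    num_reads++;

            atomic_set(&eb->io_pages, num_reads);

            /* 3) issue the reads, unlock pages that need no I/O; endio
             *    decrements io_pages and frees the eb when it hits zero */
            for (i = 0; i < num_pages; i++) {
                if (!PageUptodate(eb->pages[i]))
                    submit_eb_page_read(eb, eb->pages[i]);  /* hypothetical helper */
                else
                    unlock_page(eb->pages[i]);
            }
        }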
    • Btrfs: remove BUG() in raid56 · e46a28ca
      Committed by Liu Bo
      This BUG() has been triggered by a fuzzed test image that contains an
      invalid chunk type, i.e. a single-stripe chunk carrying the raid6 type.

      Btrfs can handle this gracefully by returning -EIO, so instead of a
      bare BUG(), emit a btrfs_warn with more debugging information and
      return the error properly (a sketch of the pattern follows this entry).
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e46a28ca
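      A sketch of the pattern in a hypothetical helper (not the actual
      __btrfs_map_block() change): warn with context and return -EIO instead
      of crashing on an unexpected chunk type.

        static int raid56_parity_stripes_sketch(struct btrfs_fs_info *fs_info, u64 type)
        {
            if (type & BTRFS_BLOCK_GROUP_RAID5)
                return 1;
            if (type & BTRFS_BLOCK_GROUP_RAID6)
                return 2;

            /* previously a BUG(); now log the bad type and fail gracefully */
            btrfs_warn(fs_info, "unexpected raid56 chunk type 0x%llx",
                       (unsigned long long)type);
            return -EIO;
        }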
    • btrfs: fix check_shared for fiemap ioctl · afce772e
      Committed by Lu Fengqi
      check_shared only identified an extent as shared when references came
      from a different root_id or a different object_id. However, if an
      extent is referenced at different offsets of the same file, it should
      also be identified as shared. In addition, check_shared's loop scales
      at least as n^3, so an extent with too many references can even cause
      a soft lockup.

      First, add all delayed refs to the ref_tree and compute unique_refs;
      if unique_refs is greater than one, return BACKREF_FOUND_SHARED. Then
      add the on-disk references (inline/keyed) to the ref_tree one by one,
      recomputing unique_refs after each addition and checking whether it
      exceeds one. Because we can return SHARED as soon as there are two
      unique references, the time complexity is close to constant (see the
      sketch after this entry).
      Reported-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
      Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      afce772e
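      A sketch of the early-exit idea under assumed helper names
      (ref_tree_add() and ref_tree_unique_refs() are illustrative, not the
      patch's exact API):

        /* Add one reference; duplicates are merged so unique_refs only
         * counts distinct (root, objectid, offset) references. */
        static int add_ref_and_check_sketch(struct ref_tree *tree, struct ref_entry *ref)
        {
            int ret = ref_tree_add(tree, ref);

            if (ret)
                return ret;

            /* two unique references already prove the extent is shared */
            if (ref_tree_unique_refs(tree) > 1)
                return BACKREF_FOUND_SHARED;

            return 0;
        }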
    • b0de6c4c
    • btrfs: fix perms on demonstration debugfs interface · 07f6a480
      Committed by Eric Sandeen
      btrfs provides a helpful demonstration of how to export
      a global variable via debugfs; however, it is unique among
      other debugfs files in that it is world-writable, which causes
      some concern to people who are not familiar with its purpose.
      
      Fix it so that it is only user-writable.
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      07f6a480
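      The change amounts to a mode flip on the exported u64. A sketch,
      assuming the demonstration file keeps a name like "test" and an
      existing btrfs debugfs root dentry:

        static u64 btrfs_debugfs_test;

        static void btrfs_debugfs_sketch(struct dentry *btrfs_debugfs_root_dentry)
        {
            /* before: S_IRUGO | S_IWUGO (0666), world-writable
             * after:  S_IRUGO | S_IWUSR (0644), writable only by the owner */
            debugfs_create_u64("test", S_IRUGO | S_IWUSR,
                               btrfs_debugfs_root_dentry, &btrfs_debugfs_test);
        }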
    • Btrfs: fix memory leak of block group cache · c79a1751
      Committed by Liu Bo
      While processing delayed refs, we may update a block group's statistics
      and attach it to cur_trans->dirty_bgs; writing out the dirty block
      groups later processes that list during btrfs_commit_transaction().

      If the transaction is aborted for whatever reason and dirty_bgs is not
      processed in cleanup_transaction(), we end up leaking these dirty block
      group caches.

      Since an abort in btrfs_start_dirty_block_groups() means we never reach
      the commit critical section, the cleanup work is added there as well
      (an illustrative cleanup loop follows this entry).
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c79a1751
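      An illustrative cleanup loop showing the shape of the added work
      (simplified; the locking done by the real cleanup path is omitted):

        static void cleanup_dirty_bgs_sketch(struct btrfs_transaction *cur_trans)
        {
            struct btrfs_block_group_cache *cache;

            while (!list_empty(&cur_trans->dirty_bgs)) {
                cache = list_first_entry(&cur_trans->dirty_bgs,
                                         struct btrfs_block_group_cache,
                                         dirty_list);
                list_del_init(&cache->dirty_list);
                btrfs_put_block_group(cache);   /* drop the ref held by the list */
            }
        }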
    • Linux 4.8-rc8 · 08895a8b
      Committed by Linus Torvalds
      08895a8b
    • Merge tag 'trace-v4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace · 4c04b4b5
      Committed by Linus Torvalds
      Pull tracefs fixes from Steven Rostedt:
       "Al Viro has been looking at the tracefs code, and has pointed out some
        issues.  This contains one fix by me and one by Al.  I'm sure that
        he'll come up with more but for now I tested these patches and they
        don't appear to have any negative impact on tracing"
      
      * tag 'trace-v4.8-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
        fix memory leaks in tracing_buffers_splice_read()
        tracing: Move mutex to protect against resetting of seq data
      4c04b4b5
    • fault_in_multipages_readable() throws set-but-unused error · 90b75db6
      Committed by Dave Chinner
      When building XFS with -Werror, it now fails with:
      
        include/linux/pagemap.h: In function 'fault_in_multipages_readable':
        include/linux/pagemap.h:602:16: error: variable 'c' set but not used [-Werror=unused-but-set-variable]
          volatile char c;
                        ^
      
      This is a regression caused by commit e23d4159 ("fix
      fault_in_multipages_...() on architectures with no-op access_ok()").
      Fix it by re-adding the "(void)c" trick that was previously used to
      make the compiler think the variable is used.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      90b75db6
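      The trick in isolation, as a sketch of the helper's shape in
      include/linux/pagemap.h (simplified, single-page case only):

        static inline int fault_in_readable_sketch(const char __user *uaddr)
        {
            volatile char c;
            int ret;

            ret = __get_user(c, uaddr);  /* touch the page so it is faulted in */
            (void)c;                     /* mark 'c' used: silences
                                          * -Werror=unused-but-set-variable when
                                          * access_ok()/__get_user() compile away */
            return ret;
        }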
    • mm: check VMA flags to avoid invalid PROT_NONE NUMA balancing · 38e08854
      Committed by Lorenzo Stoakes
      The NUMA balancing logic uses an arch-specific PROT_NONE page table flag
      defined by pte_protnone() or pmd_protnone() to mark PTEs or huge page
      PMDs respectively as requiring balancing upon a subsequent page fault.
      User-defined PROT_NONE memory regions which also have this flag set will
      not normally invoke the NUMA balancing code as do_page_fault() will send
      a segfault to the process before handle_mm_fault() is even called.
      
      However if access_remote_vm() is invoked to access a PROT_NONE region of
      memory, handle_mm_fault() is called via faultin_page() and
      __get_user_pages() without any access checks being performed, meaning
      the NUMA balancing logic is incorrectly invoked on a non-NUMA memory
      region.
      
      A simple means of triggering this problem is to access PROT_NONE mmap'd
      memory using /proc/self/mem which reliably results in the NUMA handling
      functions being invoked when CONFIG_NUMA_BALANCING is set.
      
      This issue was reported in bugzilla (issue 99101) which includes some
      simple repro code.
      
      There are BUG_ON() checks in do_numa_page() and do_huge_pmd_numa_page()
      added at commit c0e7cad9 to avoid accidentally provoking strange
      behaviour by attempting to apply NUMA balancing to pages that are in
      fact PROT_NONE.  The BUG_ON()'s are consistently triggered by the repro.
      
      This patch moves the PROT_NONE check into mm/memory.c rather than
      invoking BUG_ON() as faulting in these pages via faultin_page() is a
      valid reason for reaching the NUMA check with the PROT_NONE page table
      flag set and is therefore not always a bug.
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=99101
      Reported-by: Trevor Saunders <tbsaunde@tbsaunde.org>
      Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      38e08854
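      A sketch of the idea (not the exact upstream hunk): treat a PROT_NONE
      pte as a NUMA hinting fault only when the VMA is actually accessible.
      The helper names here are assumptions for this note.

        /* A user-created PROT_NONE mapping has none of these flags set. */
        static inline bool vma_accessible_sketch(struct vm_area_struct *vma)
        {
            return vma->vm_flags & (VM_READ | VM_WRITE | VM_EXEC);
        }

        static bool is_numa_hinting_fault_sketch(struct vm_area_struct *vma, pte_t entry)
        {
            /* only then hand the fault to do_numa_page()/do_huge_pmd_numa_page();
             * otherwise it is a genuine PROT_NONE mapping, not a NUMA hint */
            return pte_protnone(entry) && vma_accessible_sketch(vma);
        }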
    • Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · 831e45d8
      Committed by Linus Torvalds
      Pull MIPS fixes from Ralf Baechle:
       "A round of 4.8 fixes:
      
        MIPS generic code:
         - Add a missing ".set pop" in an early commit
         - Fix memory regions reaching top of physical
         - MAAR: Fix address alignment
         - vDSO: Fix Malta EVA mapping to vDSO page structs
         - uprobes: fix incorrect uprobe brk handling
         - uprobes: select HAVE_REGS_AND_STACK_ACCESS_API
         - Avoid a BUG warning during PR_SET_FP_MODE prctl
         - SMP: Fix possibility of deadlock when bringing CPUs online
         - R6: Remove compact branch policy Kconfig entries
         - Fix size calc when avoiding IPIs for small icache flushes
         - Fix pre-r6 emulation FPU initialisation
         - Fix delay slot emulation count in debugfs
      
        ATH79:
         - Fix test for error return of clk_register_fixed_factor.
      
        Octeon:
         - Fix kernel header to work for VDSO build.
         - Fix initialization of platform device probing.
      
        paravirt:
         - Fix undefined reference to smp_bootstrap"
      
      * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
        MIPS: Fix delay slot emulation count in debugfs
        MIPS: SMP: Fix possibility of deadlock when bringing CPUs online
        MIPS: Fix pre-r6 emulation FPU initialisation
        MIPS: vDSO: Fix Malta EVA mapping to vDSO page structs
        MIPS: Select HAVE_REGS_AND_STACK_ACCESS_API
        MIPS: Octeon: Fix platform bus probing
        MIPS: Octeon: mangle-port: fix build failure with VDSO code
        MIPS: Avoid a BUG warning during prctl(PR_SET_FP_MODE, ...)
        MIPS: c-r4k: Fix size calc when avoiding IPIs for small icache flushes
        MIPS: Add a missing ".set pop" in an early commit
        MIPS: paravirt: Fix undefined reference to smp_bootstrap
        MIPS: Remove compact branch policy Kconfig entries
        MIPS: MAAR: Fix address alignment
        MIPS: Fix memory regions reaching top of physical
        MIPS: uprobes: fix incorrect uprobe brk handling
        MIPS: ath79: Fix test for error return of clk_register_fixed_factor().
      831e45d8
    • Merge tag 'powerpc-4.8-7' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux · 751b9a5d
      Committed by Linus Torvalds
      Pull one more powerpc fix from Michael Ellerman:
       "powernv/pci: Fix m64 checks for SR-IOV and window alignment from
        Russell Currey"
      
      * tag 'powerpc-4.8-7' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
        powerpc/powernv/pci: Fix m64 checks for SR-IOV and window alignment
      751b9a5d
    • radix tree: fix sibling entry handling in radix_tree_descend() · 8d2c0d36
      Committed by Linus Torvalds
      The fixes to the radix tree test suite show that the multi-order case is
      broken.  The basic reason is that the radix tree code uses tagged
      pointers with the "internal" bit in the low bits, and calculating the
      pointer indices was supposed to mask off those bits.  But gcc will
      notice that we then use the index to re-create the pointer, and will
      avoid doing the arithmetic and use the tagged pointer directly.
      
      This cleans the code up, using the existing is_sibling_entry() helper to
      validate the sibling pointer range (instead of open-coding it), and
      using entry_to_node() to mask off the low tag bit from the pointer.  And
      once you do that, you might as well just use the now cleaned-up pointer
      directly.
      
      [ Side note: the multi-order code isn't actually ever used in the kernel
        right now, and the only reason I didn't just delete all that code is
        that Kirill Shutemov piped up and said:
      
          "Well, my ext4-with-huge-pages patchset[1] uses multi-order entries.
           It also converts shmem-with-huge-pages and hugetlb to them.
      
           I'm okay with converting it to other mechanism, but I need
           something.  (I looked into Konstantin's RFC patchset[2].  It looks
           okay, but I don't feel myself qualified to review it as I don't
           know much about radix-tree internals.)"
      
        [1] http://lkml.kernel.org/r/20160915115523.29737-1-kirill.shutemov@linux.intel.com
        [2] http://lkml.kernel.org/r/147230727479.9957.1087787722571077339.stgit@zurg ]
      Reported-by: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Cedric Blancher <cedric.blancher@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8d2c0d36
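      A self-contained illustration of the untagging issue; the tag bit value
      and helper names are assumptions for this sketch, not the radix tree's
      actual constants:

        #include <stdint.h>

        #define INTERNAL_NODE_TAG 1UL   /* assumed low-bit tag, illustration only */

        /* Strip the tag explicitly and use the resulting pointer directly.
         * Deriving an index from the tagged value and rebuilding the pointer
         * invites the compiler to keep using the tagged pointer instead. */
        static inline void *entry_to_node_sketch(void *entry)
        {
            return (void *)((uintptr_t)entry & ~INTERNAL_NODE_TAG);
        }

        static inline int is_internal_node_sketch(const void *entry)
        {
            return ((uintptr_t)entry & INTERNAL_NODE_TAG) != 0;
        }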
    • radix tree test suite: Test radix_tree_replace_slot() for multiorder entries · 62fd5258
      Committed by Matthew Wilcox
      When we replace a multiorder entry, check that all indices reflect the
      new value.
      
      Also, compile the test suite with -O2, which shows other problems with
      the code due to some dodgy pointer operations in the radix tree code.
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      62fd5258
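      The test's shape, sketched against the radix tree API of that time; the
      setup and assertion style are simplified compared to the real
      tools/testing/radix-tree code:

        #include <assert.h>

        static void check_multiorder_replace_sketch(struct radix_tree_root *root,
                                                    unsigned long base, unsigned order,
                                                    void *new_item)
        {
            void **slot = radix_tree_lookup_slot(root, base);
            unsigned long i;

            radix_tree_replace_slot(slot, new_item);

            /* every index covered by the multi-order entry must see the new value */
            for (i = 0; i < (1UL << order); i++)
                assert(radix_tree_lookup(root, base + i) == new_item);
        }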
    • fix memory leaks in tracing_buffers_splice_read() · 1ae2293d
      Committed by Al Viro
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      1ae2293d
  2. 25 Sep 2016, 11 commits
  3. 24 Sep 2016, 12 commits
  4. 23 Sep 2016, 2 commits
    • arm64: kgdb: handle read-only text / modules · 67787b68
      Committed by AKASHI Takahiro
      Handle read-only cases when CONFIG_DEBUG_RODATA (4.0) or
      CONFIG_DEBUG_SET_MODULE_RONX (3.18) are enabled by using
      aarch64_insn_write() instead of probe_kernel_write() as introduced by
      commit 2f896d58 ("arm64: use fixmap for text patching") in 4.0.
      
      Fixes: 11d91a77 ("arm64: Add CONFIG_DEBUG_SET_MODULE_RONX support")
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Jason Wessel <jason.wessel@windriver.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      67787b68
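      A sketch of the substitution (simplified; the real kgdb arch hooks also
      save and restore the original instruction encodings):

        static int kgdb_set_sw_break_sketch(unsigned long addr, u32 brk_insn, u32 *saved)
        {
            int err;

            err = aarch64_insn_read((void *)addr, saved);
            if (err)
                return err;

            /* aarch64_insn_write() patches text through a writable fixmap
             * alias, so it works even when kernel text / modules are mapped
             * read-only, unlike probe_kernel_write(). */
            return aarch64_insn_write((void *)addr, brk_insn);
        }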
    • arm64: Call numa_store_cpu_info() earlier. · c18df0ad
      Committed by David Daney
      The wq_numa_init() function makes a private CPU to node map by calling
      cpu_to_node() early in the boot process, before the non-boot CPUs are
      brought online.  Since the default implementation of cpu_to_node()
      returns zero for CPUs that have never been brought online, the
      workqueue system's view is that *all* CPUs are on node zero.
      
      When the unbound workqueue for a non-zero node is created, the
      tsk_cpus_allowed() for the worker threads is the empty set because
      there are, in the view of the workqueue system, no CPUs on non-zero
      nodes.  The code in try_to_wake_up() using this empty cpumask ends up
      using the cpumask empty set value of NR_CPUS as an index into the
      per-CPU area pointer array, and gets garbage as it is one past the end
      of the array.  This results in:
      
      [    0.881970] Unable to handle kernel paging request at virtual address fffffb1008b926a4
      [    1.970095] pgd = fffffc00094b0000
      [    1.973530] [fffffb1008b926a4] *pgd=0000000000000000, *pud=0000000000000000, *pmd=0000000000000000
      [    1.982610] Internal error: Oops: 96000004 [#1] SMP
      [    1.987541] Modules linked in:
      [    1.990631] CPU: 48 PID: 295 Comm: cpuhp/48 Tainted: G        W       4.8.0-rc6-preempt-vol+ #9
      [    1.999435] Hardware name: Cavium ThunderX CN88XX board (DT)
      [    2.005159] task: fffffe0fe89cc300 task.stack: fffffe0fe8b8c000
      [    2.011158] PC is at try_to_wake_up+0x194/0x34c
      [    2.015737] LR is at try_to_wake_up+0x150/0x34c
      [    2.020318] pc : [<fffffc00080e7468>] lr : [<fffffc00080e7424>] pstate: 600000c5
      [    2.027803] sp : fffffe0fe8b8fb10
      [    2.031149] x29: fffffe0fe8b8fb10 x28: 0000000000000000
      [    2.036522] x27: fffffc0008c63bc8 x26: 0000000000001000
      [    2.041896] x25: fffffc0008c63c80 x24: fffffc0008bfb200
      [    2.047270] x23: 00000000000000c0 x22: 0000000000000004
      [    2.052642] x21: fffffe0fe89d25bc x20: 0000000000001000
      [    2.058014] x19: fffffe0fe89d1d00 x18: 0000000000000000
      [    2.063386] x17: 0000000000000000 x16: 0000000000000000
      [    2.068760] x15: 0000000000000018 x14: 0000000000000000
      [    2.074133] x13: 0000000000000000 x12: 0000000000000000
      [    2.079505] x11: 0000000000000000 x10: 0000000000000000
      [    2.084879] x9 : 0000000000000000 x8 : 0000000000000000
      [    2.090251] x7 : 0000000000000040 x6 : 0000000000000000
      [    2.095621] x5 : ffffffffffffffff x4 : 0000000000000000
      [    2.100991] x3 : 0000000000000000 x2 : 0000000000000000
      [    2.106364] x1 : fffffc0008be4c24 x0 : ffffff0ffffada80
      [    2.111737]
      [    2.113236] Process cpuhp/48 (pid: 295, stack limit = 0xfffffe0fe8b8c020)
      [    2.120102] Stack: (0xfffffe0fe8b8fb10 to 0xfffffe0fe8b90000)
      [    2.125914] fb00:                                   fffffe0fe8b8fb80 fffffc00080e7648
      .
      .
      .
      [    2.442859] Call trace:
      [    2.445327] Exception stack(0xfffffe0fe8b8f940 to 0xfffffe0fe8b8fa70)
      [    2.451843] f940: fffffe0fe89d1d00 0000040000000000 fffffe0fe8b8fb10 fffffc00080e7468
      [    2.459767] f960: fffffe0fe8b8f980 fffffc00080e4958 ffffff0ff91ab200 fffffc00080e4b64
      [    2.467690] f980: fffffe0fe8b8f9d0 fffffc00080e515c fffffe0fe8b8fa80 0000000000000000
      [    2.475614] f9a0: fffffe0fe8b8f9d0 fffffc00080e58e4 fffffe0fe8b8fa80 0000000000000000
      [    2.483540] f9c0: fffffe0fe8d10000 0000000000000040 fffffe0fe8b8fa50 fffffc00080e5ac4
      [    2.491465] f9e0: ffffff0ffffada80 fffffc0008be4c24 0000000000000000 0000000000000000
      [    2.499387] fa00: 0000000000000000 ffffffffffffffff 0000000000000000 0000000000000040
      [    2.507309] fa20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
      [    2.515233] fa40: 0000000000000000 0000000000000000 0000000000000000 0000000000000018
      [    2.523156] fa60: 0000000000000000 0000000000000000
      [    2.528089] [<fffffc00080e7468>] try_to_wake_up+0x194/0x34c
      [    2.533723] [<fffffc00080e7648>] wake_up_process+0x28/0x34
      [    2.539275] [<fffffc00080d3764>] create_worker+0x110/0x19c
      [    2.544824] [<fffffc00080d69dc>] alloc_unbound_pwq+0x3cc/0x4b0
      [    2.550724] [<fffffc00080d6bcc>] wq_update_unbound_numa+0x10c/0x1e4
      [    2.557066] [<fffffc00080d7d78>] workqueue_online_cpu+0x220/0x28c
      [    2.563234] [<fffffc00080bd288>] cpuhp_invoke_callback+0x6c/0x168
      [    2.569398] [<fffffc00080bdf74>] cpuhp_up_callbacks+0x44/0xe4
      [    2.575210] [<fffffc00080be194>] cpuhp_thread_fun+0x13c/0x148
      [    2.581027] [<fffffc00080dfbac>] smpboot_thread_fn+0x19c/0x1a8
      [    2.586929] [<fffffc00080dbd64>] kthread+0xdc/0xf0
      [    2.591776] [<fffffc0008083380>] ret_from_fork+0x10/0x50
      [    2.597147] Code: b00057e1 91304021 91005021 b8626822 (b8606821)
      [    2.603464] ---[ end trace 58c0cd36b88802bc ]---
      [    2.608138] Kernel panic - not syncing: Fatal exception
      
      Fix by moving call to numa_store_cpu_info() for all CPUs into
      smp_prepare_cpus(), which happens before wq_numa_init().  Since
      smp_store_cpu_info() now contains only a single function call,
      simplify by removing the function and out-lining its contents.
      Suggested-by: Robert Richter <rric@kernel.org>
      Fixes: 1a2db300 ("arm64, numa: Add NUMA support for arm64 platforms.")
      Cc: <stable@vger.kernel.org> # 4.7.x-
      Signed-off-by: David Daney <david.daney@cavium.com>
      Reviewed-by: Robert Richter <rrichter@cavium.com>
      Tested-by: Yisheng Xie <xieyisheng1@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c18df0ad
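      The shape of the fix, sketched (simplified from what smp_prepare_cpus()
      in arch/arm64/kernel/smp.c would do; the rest of that function is
      omitted):

        static void smp_prepare_cpus_numa_sketch(void)
        {
            unsigned int cpu;

            /* Runs before wq_numa_init(), so cpu_to_node() is already correct
             * for CPUs that have not been brought online yet. */
            for_each_possible_cpu(cpu) {
                if (cpu == smp_processor_id())
                    continue;               /* boot CPU already recorded */
                numa_store_cpu_info(cpu);
            }
        }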