1. 18 Nov 2017 (9 commits)
  2. 17 Nov 2017 (6 commits)
  3. 16 Nov 2017 (25 commits)
    • M
      sh: decompressor: add shipped files to .gitignore · 52c291a3
      Committed by Masahiro Yamada
      These files are copied from arch/sh/lib, so should be ignored by git.
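
      For reference, a sketch of the resulting ignore file (the file
      names below are assumed from the decompressor Makefile's copy
      rules, not quoted from the patch):

        # arch/sh/boot/compressed/.gitignore (sketch)
        ashiftrt.S
        ashldi3.c
        ashlsi3.S
        ashrsi3.S
        lshrsi3.S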
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      52c291a3
    • M
      380a1edb
    • H
      s390: remove unused parameter from Makefile · ab35727e
      Committed by Heiko Carstens
      Remove unused parameter from the call function,
      which I accidentally added.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      ab35727e
    • H
      s390/cpum_sf: correctly set the PID and TID in perf samples · 544e8dd7
      Committed by Hendrik Brueckner
      The hardware sampler creates samples that are processed at a later
      point in time.  The PID and TID values of the perf samples that are
      created for hardware samples are initialized with values from the
      current task.  Hence, the PID and TID values are not correct and
      perf samples are associated with the wrong processes.
      
      The PID and TID values are obtained from the Host Program Parameter
      (HPP) field in the basic-sampling data entries.  These PIDs are
      valid in the init PID namespace.  Ensure that the PIDs in the perf
      samples are resolved considering the PID namespace in which the
      perf event was created.
      
      To correct the PID and TID values in the created perf samples,
      a special overflow handler is installed.  It replaces the default
      overflow handler and does not become effective if any other
      overflow handler is used.  With the special overflow handler most
      of the perf samples are associated with the right processes.
      For processes that no longer exist, the association might
      still be wrong.
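
      A minimal sketch of the namespace-aware lookup such a handler has
      to perform (generic kernel API; the helper itself is illustrative,
      not the actual patch):

        /* Sketch: resolve a raw init-namespace PID into the PID
         * namespace the perf event was created in.  Caller holds
         * rcu_read_lock(). */
        static u32 resolve_sample_pid(struct perf_event *event, u32 raw_pid)
        {
                struct pid *pid = find_pid_ns(raw_pid, &init_pid_ns);

                if (!pid)
                        return 0;       /* task no longer exists */
                return pid_nr_ns(pid, event->ns);
        }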
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      544e8dd7
    • H
      s390/cpum_sf: load program parameter at sampler enablement · d4c7e649
      Committed by Hendrik Brueckner
      The lpp instruction is used to place the PID of the current
      task in the program-parameter (PP) register.  The register
      contents are then included in the sampling data entries.
      
      The lpp instruction loads the PP register only when at least
      one sampling function is enabled.  Otherwise it is executed
      as a no-op.
      
      Linux calls lpp at context switch.  If the context switch
      happens before the sampler is enabled, the PP register is
      empty.  That means the PID of the task that is sampled is
      not stored in sampling data until the next context switch.
      
      Hence, always call lpp when enabling the sampler.
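
      In sketch form (assuming an existing lpp() wrapper around the
      instruction; this is not the verbatim patch):

        static void cpumsf_pmu_enable(struct pmu *pmu)
        {
                /* ... existing sampler setup ... */
                lpp(&S390_lowcore.lpp); /* load the PP register now, not
                                           only at the next context switch */
        }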
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      d4c7e649
    • H
      s390/perf: extend perf_regs support to include floating-point registers · 0da0017f
      Committed by Hendrik Brueckner
      Extend the perf register support to also export floating-point register
      contents for user space tasks.  Floating-point registers might be used
      in leaf functions to contain the return address.  Hence, they are required
      for proper DWARF unwinding.
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Reviewed-and-tested-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      0da0017f
    • H
      s390/perf: add perf_regs support and user stack dump · c33eff60
      Committed by Heiko Carstens
      Add s390 support to dump user stack to user space for DWARF
      stack unwinding.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Reviewed-and-tested-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      c33eff60
    • H
      s390/cpum_sf: do not register PMU if no sampling mode is authorized · 9232c3c7
      Committed by Hendrik Brueckner
      Previously, the cpum_sf PMU was registered even if no sampling
      mode was authorized.  Add a check and register cpum_sf only if
      at least one sampling mode is authorized.
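
      The check amounts to something like this during initialization
      (field names and error code are assumed, not quoted):

        /* Sketch: bail out before registration when neither basic (as)
         * nor diagnostic (ad) sampling is authorized. */
        if (!si.as && !si.ad)
                return -ENODEV;
        err = perf_pmu_register(&cpumf_sampling, "cpum_sf", PERF_TYPE_RAW);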
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      9232c3c7
    • P
      s390/cpumf: remove raw event support in basic-only sampling mode · 3d43b981
      Committed by Pu Hou
      Raw sample support was implemented to export the diagnostic samples.
      Now that this is achieved with AUX buffers, there is no requirement
      for basic samples to export raw data.  In particular, most of the
      basic-sampling information is consumed for creating the perf event sample.
      Signed-off-by: Pu Hou <bjhoupu@linux.vnet.ibm.com>
      Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      3d43b981
    • P
      s390/cpumf: enable using AUX buffer · cbf6948f
      Committed by Pu Hou
      Modify the PMU callbacks to use an AUX buffer for diagnostic-mode
      sampling.  Basic-mode sampling still uses the original path.
      Signed-off-by: Pu Hou <bjhoupu@linux.vnet.ibm.com>
      Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      cbf6948f
    • P
      s390/cpumf: introduce AUX buffer for dump diagnostic sample data · ca5955cd
      Committed by Pu Hou
      The current implementation uses a private buffer for cpumf to dump
      samples.  Samples first go to this buffer and are then copied to the
      ring buffer allocated by the perf core.  With an AUX buffer this copy
      is not needed: the AUX buffer is shared and zero-copy mapped to user
      space.  The trailer information at the end of each SDB (sample data
      block) is also exported to user space.  The AUX buffer is used when
      diagnostic sampling mode is enabled.

      This patch contains functions to set up and free the AUX buffer and
      to begin and end sampling per CPU.  It also includes the function
      called in the interrupt handler to collect samples.
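
      The hooks follow the perf AUX callback scheme, roughly (signatures
      abridged and callback names assumed):

        /* Sketch: AUX callbacks wired into the sampling PMU */
        static void *aux_buffer_setup(int cpu, void **pages,
                                      int nr_pages, bool snapshot);
        static void aux_buffer_free(void *aux);

        static struct pmu cpumf_sampling = {
                /* ... existing callbacks ... */
                .setup_aux = aux_buffer_setup,
                .free_aux  = aux_buffer_free,
        };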
      Signed-off-by: Pu Hou <bjhoupu@linux.vnet.ibm.com>
      Reviewed-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      ca5955cd
    • V
      s390/disassembler: increase show_code buffer size · b192571d
      Committed by Vasily Gorbik
      The current buffer size of 64 is too small.  objdump shows that
      there are instructions which would require up to a 75-byte buffer
      (with the current formatting).  128 bytes "ought to be enough for
      anybody".

      Also replace 8 spaces with a single tab to reduce the memory footprint.
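
      The fix itself is a one-line change in show_code() (sketch; the
      variable declaration is assumed):

        -       char buffer[64];
        +       char buffer[128];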
      
      Fixes the following KASAN finding:
      
      BUG: KASAN: stack-out-of-bounds in number+0x3fe/0x538
      Write of size 1 at addr 000000005a4a75a0 by task bash/1282
      
      CPU: 1 PID: 1282 Comm: bash Not tainted 4.14.0+ #215
      Hardware name: IBM 2964 N96 702 (z/VM 6.4.0)
      Call Trace:
      ([<000000000011eeb6>] show_stack+0x56/0x88)
       [<0000000000e1ce1a>] dump_stack+0x15a/0x1b0
       [<00000000004e2994>] print_address_description+0xf4/0x288
       [<00000000004e2cf2>] kasan_report+0x13a/0x230
       [<0000000000e38ae6>] number+0x3fe/0x538
       [<0000000000e3dfe4>] vsnprintf+0x194/0x948
       [<0000000000e3ea42>] sprintf+0xa2/0xb8
       [<00000000001198dc>] print_insn+0x374/0x500
       [<0000000000119346>] show_code+0x4ee/0x538
       [<000000000011f234>] show_registers+0x34c/0x388
       [<000000000011f2ae>] show_regs+0x3e/0xa8
       [<000000000011f502>] die+0x1ea/0x2e8
       [<0000000000138f0e>] do_no_context+0x106/0x168
       [<0000000000139a1a>] do_protection_exception+0x4da/0x7d0
       [<0000000000e55914>] pgm_check_handler+0x16c/0x1c0
       [<000000000090639e>] sysrq_handle_crash+0x46/0x58
      ([<0000000000000007>] 0x7)
       [<00000000009073fa>] __handle_sysrq+0x102/0x218
       [<0000000000907c06>] write_sysrq_trigger+0xd6/0x100
       [<000000000061d67a>] proc_reg_write+0xb2/0x128
       [<0000000000520be6>] __vfs_write+0xee/0x368
       [<0000000000521222>] vfs_write+0x21a/0x278
       [<000000000052156a>] SyS_write+0xda/0x178
       [<0000000000e555cc>] system_call+0xc4/0x270
      
      The buggy address belongs to the page:
      page:000003d1016929c0 count:0 mapcount:0 mapping:          (null) index:0x0
      flags: 0x0()
      raw: 0000000000000000 0000000000000000 0000000000000000 ffffffff00000000
      raw: 0000000000000100 0000000000000200 0000000000000000 0000000000000000
      page dumped because: kasan: bad access detected
      
      Memory state around the buggy address:
       000000005a4a7480: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
       000000005a4a7500: 00 00 00 00 00 00 00 00 f2 f2 f2 f2 00 00 00 00
      >000000005a4a7580: 00 00 00 00 f3 f3 f3 f3 00 00 00 00 00 00 00 00
                                     ^
       000000005a4a7600: 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 f8 f8
       000000005a4a7680: f2 f2 f2 f2 f2 f2 f8 f8 f2 f2 f3 f3 f3 f3 00 00
      ==================================================================
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Vasily Gorbik <gor@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      b192571d
    • M
      s390: Remove CONFIG_HARDENED_USERCOPY · 6470c0cc
      Committed by Michael Holzheu
      When running the crash tool on an s390 live system we get a kernel panic
      for reading memory within the kernel image:
      
       # uname -a
         Linux r3545011 4.14.0-rc8-00066-g1c9dbd46 #45 SMP PREEMPT Fri Nov 10 16:16:22 CET 2017 s390x s390x s390x GNU/Linux
       # crash /boot/vmlinux-devel /dev/mem
       # crash> rd 0x100000
      
       usercopy: kernel memory exposure attempt detected from 0000000000100000 (<kernel text>) (8 bytes)
       ------------[ cut here ]------------
       kernel BUG at mm/usercopy.c:72!
       illegal operation: 0001 ilc:1 [#1] PREEMPT SMP.
       Modules linked in:
       CPU: 0 PID: 1461 Comm: crash Not tainted 4.14.0-rc8-00066-g1c9dbd46-dirty #46
       Hardware name: IBM 2827 H66 706 (z/VM 6.3.0)
       task: 000000001ad10100 task.stack: 000000001df78000
       Krnl PSW : 0704d00180000000 000000000038165c (__check_object_size+0x164/0x1d0)
                  R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:1 PM:0 RI:0 EA:3
       Krnl GPRS: 0000000012440e1d 0000000080000000 0000000000000061 00000000001cabc0
                  00000000001cc6d6 0000000000000000 0000000000cc4ed2 0000000000001000
                  000003ffc22fdd20 0000000000000008 0000000000100008 0000000000000001
                  0000000000000008 0000000000100000 0000000000381658 000000001df7bc90
       Krnl Code: 000000000038164c: c020004a1c4a        larl    %r2,cc4ee0
                  0000000000381652: c0e5fff2581b        brasl   %r14,1cc688
                 #0000000000381658: a7f40001            brc     15,38165a
                 >000000000038165c: eb42000c000c        srlg    %r4,%r2,12
                  0000000000381662: eb32001c000c        srlg    %r3,%r2,28
                  0000000000381668: c0110003ffff        lgfi    %r1,262143
                  000000000038166e: ec31ff752065        clgrj   %r3,%r1,2,381558
                  0000000000381674: a7f4ff67            brc     15,381542
       Call Trace:
       ([<0000000000381658>] __check_object_size+0x160/0x1d0)
        [<000000000082263a>] read_mem+0xaa/0x130.
        [<0000000000386182>] __vfs_read+0x42/0x168.
        [<000000000038632e>] vfs_read+0x86/0x140.
        [<0000000000386a26>] SyS_read+0x66/0xc0.
        [<0000000000ace6a4>] system_call+0xc4/0x2b0.
       INFO: lockdep is turned off.
       Last Breaking-Event-Address:
        [<0000000000381658>] __check_object_size+0x160/0x1d0
      
       Kernel panic - not syncing: Fatal exception: panic_on_oops
      
      With CONFIG_HARDENED_USERCOPY copy_to_user() checks in __check_object_size()
      if the source address is within the kernel image. When the crash tool reads
      from 0x100000, this check leads to the kernel BUG().
      
      So disable the kernel config option until this bug is fixed.
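
      i.e., roughly (the exact Kconfig select line removed is assumed):

        # arch/s390/Kconfig (sketch)
         config S390
        -       select HAVE_ARCH_HARDENED_USERCOPY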
      
      Corresponding bug report on LKML: https://lkml.org/lkml/2017/11/10/341
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      6470c0cc
    • M
      mm, sparse: do not swamp log with huge vmemmap allocation failures · fcdaf842
      Committed by Michal Hocko
      While doing memory hotplug tests under heavy memory pressure we have
      noticed too many page allocation failures when allocating vmemmap memmap
      backed by huge pages:
      
        kworker/u3072:1: page allocation failure: order:9, mode:0x24084c0(GFP_KERNEL|__GFP_REPEAT|__GFP_ZERO)
        [...]
        Call Trace:
          dump_trace+0x59/0x310
          show_stack_log_lvl+0xea/0x170
          show_stack+0x21/0x40
          dump_stack+0x5c/0x7c
          warn_alloc_failed+0xe2/0x150
          __alloc_pages_nodemask+0x3ed/0xb20
          alloc_pages_current+0x7f/0x100
          vmemmap_alloc_block+0x79/0xb6
          __vmemmap_alloc_block_buf+0x136/0x145
          vmemmap_populate+0xd2/0x2b9
          sparse_mem_map_populate+0x23/0x30
          sparse_add_one_section+0x68/0x18e
          __add_pages+0x10a/0x1d0
          arch_add_memory+0x4a/0xc0
          add_memory_resource+0x89/0x160
          add_memory+0x6d/0xd0
          acpi_memory_device_add+0x181/0x251
          acpi_bus_attach+0xfd/0x19b
          acpi_bus_scan+0x59/0x69
          acpi_device_hotplug+0xd2/0x41f
          acpi_hotplug_work_fn+0x1a/0x23
          process_one_work+0x14e/0x410
          worker_thread+0x116/0x490
          kthread+0xbd/0xe0
          ret_from_fork+0x3f/0x70
      
      and we do see many of those because essentially every allocation fails
      for each memory section.  This is an excessive way to tell the user that
      there is nothing to really worry about because we do have a fallback
      mechanism to use base pages.  The only downside might be a performance
      degradation due to TLB pressure.
      
      This patch changes vmemmap_alloc_block() to use __GFP_NOWARN and warn
      explicitly once on the first allocation failure.  This will reduce the
      noise in the kernel log considerably, while we still have an indication
      that performance might be impacted.
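
      A sketch of the approach (not the verbatim patch):

        /* Sketch: suppress the per-section failure splats and warn once
         * instead; on failure the caller falls back to base pages. */
        page = alloc_pages_node(node, gfp_mask | __GFP_NOWARN, order);
        if (page)
                return page_address(page);
        if (!vmemmap_alloc_warned) {    /* assumed static bool */
                warn_alloc(gfp_mask, NULL,
                           "vmemmap alloc failure: order:%u", order);
                vmemmap_alloc_warned = true;
        }
        return NULL;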
      
      [mhocko@kernel.org: forgot to git add the follow up fix]
        Link: http://lkml.kernel.org/r/20171107090635.c27thtse2lchjgvb@dhcp22.suse.cz
      Link: http://lkml.kernel.org/r/20171106092228.31098-1-mhocko@kernel.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Khalid Aziz <khalid.aziz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fcdaf842
    • M
      mm: remove cold parameter from free_hot_cold_page* · 2d4894b5
      Committed by Mel Gorman
      Most callers of free_hot_cold_page claim the pages being released
      are cache hot.  The exception is the page reclaim paths where it is
      likely that enough pages will be freed in the near future that the
      per-cpu lists are going to be recycled and the cache hotness information
      is lost.  As no one really cares about the hotness of pages being
      released to the allocator, just ditch the parameter.
      
      The APIs are renamed to indicate that it's no longer about hot/cold
      pages.  It should also be less confusing as there are subtle differences
      between them.  __free_pages drops a reference and frees a page when the
      refcount reaches zero.  free_hot_cold_page handled pages whose refcount
      was already zero which is non-obvious from the name.  free_unref_page
      should be more obvious.
      
      No performance impact is expected as the overhead is marginal.  The
      parameter is removed simply because it is a bit stupid to have a useless
      parameter copied everywhere.
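
      After the rename the entry points look like this (sketch):

        /* was: free_hot_cold_page(struct page *page, bool cold) */
        void free_unref_page(struct page *page);
        /* was: free_hot_cold_page_list(struct list_head *list, bool cold) */
        void free_unref_page_list(struct list_head *list);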
      
      [mgorman@techsingularity.net: add pages to head, not tail]
        Link: http://lkml.kernel.org/r/20171019154321.qtpzaeftoyyw4iey@techsingularity.net
      Link: http://lkml.kernel.org/r/20171018075952.10627-8-mgorman@techsingularity.net
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2d4894b5
    • P
      sparc64: optimize struct page zeroing · 78c94366
      Committed by Pavel Tatashin
      Add an optimized mm_zero_struct_page(), so struct pages are zeroed
      without calling memset().  We do eight to ten regular stores based on
      the size of struct page.  The compiler optimizes out the conditions
      of the switch() statement.
      
      SPARC-M6 with 15T of memory, single thread performance:
      
                                     BASE            FIX  OPTIMIZED_FIX
              bootmem_init   28.440467985s   2.305674818s   2.305161615s
      free_area_init_nodes  202.845901673s 225.343084508s 172.556506560s
                            --------------------------------------------
      Total                 231.286369658s 227.648759326s 174.861668175s
      
      BASE:  current linux
      FIX:   This patch series without "optimized struct page zeroing"
      OPTIMIZED_FIX: This patch series including the current patch.
      
      bootmem_init() is where memory for struct pages is zeroed during
      allocation.  Note, about two seconds in this function is a fixed time:
      it does not increase as memory is increased.
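
      The idea behind the optimization, in generic C (the sparc64 version
      uses inline assembly; this sketch is illustrative only):

        /* Sketch: zero struct page with a compile-time-resolved number
         * of word stores instead of a memset() call. */
        #define mm_zero_struct_page(pp) do {                            \
                unsigned long *_p = (unsigned long *)(pp);              \
                switch (sizeof(struct page) / sizeof(unsigned long)) {  \
                case 10: _p[9] = 0;     /* fall through */              \
                case 9:  _p[8] = 0;     /* fall through */              \
                case 8:  _p[7] = 0;     /* fall through */              \
                case 7:  _p[6] = 0; _p[5] = 0; _p[4] = 0; _p[3] = 0;    \
                         _p[2] = 0; _p[1] = 0; _p[0] = 0;               \
                         break;                                         \
                default: memset(pp, 0, sizeof(struct page));            \
                }                                                       \
        } while (0)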
      
      Link: http://lkml.kernel.org/r/20171013173214.27300-11-pasha.tatashin@oracle.com
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
      Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
      Reviewed-by: Bob Picco <bob.picco@oracle.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      78c94366
    • W
      arm64/mm/kasan: don't use vmemmap_populate() to initialize shadow · e17d8025
      Committed by Will Deacon
      The kasan shadow is currently mapped using vmemmap_populate() since that
      provides a semi-convenient way to map pages into init_top_pgt.  However,
      since that no longer zeroes the mapped pages, it is not suitable for
      kasan, which requires zeroed shadow memory.
      
      Add kasan_populate_shadow() interface and use it instead of
      vmemmap_populate().  Besides, this allows us to take advantage of
      gigantic pages and use them to populate the shadow, which should save us
      some memory wasted on page tables and reduce TLB pressure.
      
      Link: http://lkml.kernel.org/r/20171103185147.2688-3-pasha.tatashin@oracle.com
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Steven Sistare <steven.sistare@oracle.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Bob Picco <bob.picco@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e17d8025
    • A
      x86/mm/kasan: don't use vmemmap_populate() to initialize shadow · d17a1d97
      Committed by Andrey Ryabinin
      The kasan shadow is currently mapped using vmemmap_populate() since that
      provides a semi-convenient way to map pages into init_top_pgt.  However,
      since that no longer zeroes the mapped pages, it is not suitable for
      kasan, which requires zeroed shadow memory.
      
      Add kasan_populate_shadow() interface and use it instead of
      vmemmap_populate().  Besides, this allows us to take advantage of
      gigantic pages and use them to populate the shadow, which should save us
      some memory wasted on page tables and reduce TLB pressure.
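
      The new interface is, in outline (exact signature assumed):

        /* Sketch: populate [start, end) of the kasan shadow on node
         * 'nid' with zeroed memory, preferring huge/gigantic pages. */
        void __init kasan_populate_shadow(unsigned long start,
                                          unsigned long end, int nid);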
      
      Link: http://lkml.kernel.org/r/20171103185147.2688-2-pasha.tatashin@oracle.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Steven Sistare <steven.sistare@oracle.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Bob Picco <bob.picco@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d17a1d97
    • P
      sparc64: simplify vmemmap_populate · df8ee578
      Committed by Pavel Tatashin
      Remove duplicating code by using common functions vmemmap_pud_populate
      and vmemmap_pgd_populate.
      
      Link: http://lkml.kernel.org/r/20171013173214.27300-5-pasha.tatashin@oracle.com
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
      Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
      Reviewed-by: Bob Picco <bob.picco@oracle.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      df8ee578
    • P
      sparc64/mm: set fields in deferred pages · 2a20aa17
      Committed by Pavel Tatashin
      Without the deferred struct page feature (CONFIG_DEFERRED_STRUCT_PAGE_INIT),
      flags and other fields in "struct page"s are never changed prior to
      first initializing struct pages by going through __init_single_page().
      
      With deferred struct page feature enabled there is a case where we set
      some fields prior to initializing:
      
      mem_init() {
           register_page_bootmem_info();
           free_all_bootmem();
           ...
      }
      
      When register_page_bootmem_info() is called only non-deferred struct
      pages are initialized.  But, this function goes through some reserved
      pages which might be part of the deferred range, and thus are not yet
      initialized.
      
      mem_init
      register_page_bootmem_info
      register_page_bootmem_info_node
       get_page_bootmem
        .. setting fields here ..
        such as: page->freelist = (void *)type;
      
      free_all_bootmem()
      free_low_memory_core_early()
       for_each_reserved_mem_region()
        reserve_bootmem_region()
         init_reserved_page() <- Only if this is deferred reserved page
          __init_single_pfn()
           __init_single_page()
            memset(0) <-- Lose the set fields here
      
      We end up with a similar issue as in the previous patch, where
      currently we do not observe a problem because memory is zeroed.  But,
      if flag asserts are changed we can start hitting issues.
      
      Also, because in this patch series we will stop zeroing struct page
      memory during allocation, we must make sure that struct pages are
      properly initialized prior to using them.
      
      The deferred-reserved pages are initialized in free_all_bootmem().
      Therefore, the fix is to switch the above calls.
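
      In sketch form, the fix is:

        void __init mem_init(void)
        {
                /* initialize deferred-reserved struct pages first ... */
                free_all_bootmem();
                /* ... and only then set fields in them */
                register_page_bootmem_info();
                /* ... */
        }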
      
      Link: http://lkml.kernel.org/r/20171013173214.27300-4-pasha.tatashin@oracle.com
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
      Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
      Reviewed-by: Bob Picco <bob.picco@oracle.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2a20aa17
    • P
      x86/mm: set fields in deferred pages · 353b1e7b
      Committed by Pavel Tatashin
      Without the deferred struct page feature (CONFIG_DEFERRED_STRUCT_PAGE_INIT),
      flags and other fields in "struct page"s are never changed prior to
      first initializing struct pages by going through __init_single_page().
      
      With deferred struct page feature enabled, however, we set fields in
      register_page_bootmem_info that are subsequently clobbered right after
      in free_all_bootmem:
      
              mem_init() {
                      register_page_bootmem_info();
                      free_all_bootmem();
                      ...
              }
      
      When register_page_bootmem_info() is called only non-deferred struct
      pages are initialized.  But, this function goes through some reserved
      pages which might be part of the deferred range, and thus are not yet
      initialized.
      
        mem_init
         register_page_bootmem_info
          register_page_bootmem_info_node
           get_page_bootmem
            .. setting fields here ..
            such as: page->freelist = (void *)type;
      
        free_all_bootmem()
         free_low_memory_core_early()
          for_each_reserved_mem_region()
           reserve_bootmem_region()
            init_reserved_page() <- Only if this is deferred reserved page
             __init_single_pfn()
              __init_single_page()
              memset(0) <-- Lose the set fields here
      
      We end up with an issue where currently we do not observe a problem
      because memory is explicitly zeroed.  But, if flag asserts are changed
      we can start hitting issues.
      
      Also, because in this patch series we will stop zeroing struct page
      memory during allocation, we must make sure that struct pages are
      properly initialized prior to using them.
      
      The deferred-reserved pages are initialized in free_all_bootmem().
      Therefore, the fix is to switch the above calls.
      
      Link: http://lkml.kernel.org/r/20171013173214.27300-3-pasha.tatashin@oracle.com
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Reviewed-by: Steven Sistare <steven.sistare@oracle.com>
      Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
      Reviewed-by: Bob Picco <bob.picco@oracle.com>
      Tested-by: Bob Picco <bob.picco@oracle.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      353b1e7b
    • L
      kmemcheck: rip it out · 4675ff05
      Committed by Levin, Alexander (Sasha Levin)
      Fix up makefiles, remove references, and git rm kmemcheck.
      
      Link: http://lkml.kernel.org/r/20171007030159.22241-4-alexander.levin@verizon.com
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Vegard Nossum <vegardno@ifi.uio.no>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Tim Hansen <devtimhansen@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4675ff05
    • L
      kmemcheck: remove whats left of NOTRACK flags · d8be7566
      Committed by Levin, Alexander (Sasha Levin)
      Now that kmemcheck is gone, we don't need the NOTRACK flags.
      
      Link: http://lkml.kernel.org/r/20171007030159.22241-5-alexander.levin@verizon.com
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Hansen <devtimhansen@gmail.com>
      Cc: Vegard Nossum <vegardno@ifi.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d8be7566
    • L
      kmemcheck: stop using GFP_NOTRACK and SLAB_NOTRACK · 75f296d9
      Committed by Levin, Alexander (Sasha Levin)
      Convert all allocations that used a NOTRACK flag to stop using it.
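
      A representative conversion (illustrative cache name and flags, not
      taken from the patch):

        -       cache = kmem_cache_create("foo", size, 0,
        -                                 SLAB_NOTRACK | SLAB_PANIC, NULL);
        +       cache = kmem_cache_create("foo", size, 0,
        +                                 SLAB_PANIC, NULL);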
      
      Link: http://lkml.kernel.org/r/20171007030159.22241-3-alexander.levin@verizon.com
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Hansen <devtimhansen@gmail.com>
      Cc: Vegard Nossum <vegardno@ifi.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      75f296d9
    • L
      kmemcheck: remove annotations · 49502766
      Committed by Levin, Alexander (Sasha Levin)
      Patch series "kmemcheck: kill kmemcheck", v2.
      
      As discussed at LSF/MM, kill kmemcheck.
      
      KASan is a replacement that is able to work without the limitation of
      kmemcheck (single CPU, slow).  KASan is already upstream.
      
      We are also not aware of any users of kmemcheck (or users who don't
      consider KASan as a suitable replacement).
      
      The only objection was that since KASAN wasn't supported by all GCC
      versions provided by distros at that time we should hold off for 2
      years, and try again.
      
      Now that 2 years have passed, and all distros provide gcc that supports
      KASAN, kill kmemcheck again for the very same reasons.
      
      This patch (of 4):
      
      Remove kmemcheck annotations, and calls to kmemcheck from the kernel.
      
      [alexander.levin@verizon.com: correctly remove kmemcheck call from dma_map_sg_attrs]
        Link: http://lkml.kernel.org/r/20171012192151.26531-1-alexander.levin@verizon.com
      Link: http://lkml.kernel.org/r/20171007030159.22241-2-alexander.levin@verizon.com
      Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Tim Hansen <devtimhansen@gmail.com>
      Cc: Vegard Nossum <vegardno@ifi.uio.no>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      49502766