1. 26 May 2010, 2 commits
    • Revert "endian: #define __BYTE_ORDER" · 13da9e20
      Committed by Linus Torvalds
      This reverts commit b3b77c8c, which was
      also totally broken (see commit 0d2daf5c that reverted the crc32
      version of it).  As reported by Stephen Rothwell, it causes problems on
      big-endian machines:
      
      > In file included from fs/jfs/jfs_types.h:33,
      >                  from fs/jfs/jfs_incore.h:26,
      >                  from fs/jfs/file.c:22:
      > fs/jfs/endian24.h:36:101: warning: "__LITTLE_ENDIAN" is not defined
      
      The kernel has never had that crazy "__BYTE_ORDER == __LITTLE_ENDIAN"
      model.  It's not how we do things, and it isn't how we _should_ do
      things.  So don't go there.
      Requested-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • driver core: add devname module aliases to allow module on-demand auto-loading · 578454ff
      Committed by Kay Sievers
      This adds:
        alias: devname:<name>
      to some common kernel modules, which will allow the on-demand loading
      of the kernel module when the device node is accessed.
      
      Ideally all these modules would be compiled in, but distros seem so much
      in love with their modularization that we need to cover the common cases
      with this new facility.  It will allow us to remove a bunch of pretty
      useless modprobe calls from init scripts.
      
      The static device node aliases will be carried in the module itself. The
      program depmod will extract this information to a file in the module directory:
        $ cat /lib/modules/2.6.34-00650-g537b60d1-dirty/modules.devname
        # Device nodes to trigger on-demand module loading.
        microcode cpu/microcode c10:184
        fuse fuse c10:229
        ppp_generic ppp c108:0
        tun net/tun c10:200
        dm_mod mapper/control c10:235
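
      As a rough illustration, a module advertises its static device node with
      the MODULE_ALIAS() macro; the sketch below mirrors what a misc-device
      module such as fuse can do (FUSE_MINOR comes from <linux/miscdevice.h>),
      and is an illustration rather than a quote of the patch:

        #include <linux/module.h>
        #include <linux/miscdevice.h>

        /* Tell depmod/udev which static node should load this module. */
        MODULE_ALIAS_MISCDEV(FUSE_MINOR);       /* char 10:229 */
        MODULE_ALIAS("devname:fuse");           /* node name under /dev */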
      
      Udev will pick up the depmod-created file on startup and create all the
      static device nodes which the kernel modules specify, so that these
      modules get automatically loaded when the device node is accessed:
        $ /sbin/udevd --debug
        ...
        static_dev_create_from_modules: mknod '/dev/cpu/microcode' c10:184
        static_dev_create_from_modules: mknod '/dev/fuse' c10:229
        static_dev_create_from_modules: mknod '/dev/ppp' c108:0
        static_dev_create_from_modules: mknod '/dev/net/tun' c10:200
        static_dev_create_from_modules: mknod '/dev/mapper/control' c10:235
        udev_rules_apply_static_dev_perms: chmod '/dev/net/tun' 0666
        udev_rules_apply_static_dev_perms: chmod '/dev/fuse' 0666
      
      A few device nodes are switched to statically allocated numbers, to allow
      the static nodes to work.  This might also be useful for systems which
      still run a plain static /dev, which is completely unsafe to use with any
      dynamic minor numbers.
      
      Note:
      The devname aliases must be limited to the *common* and *single*instance*
      device nodes, like the misc devices, and never be used for conceptually limited
      systems like the loop devices, which should rather get fixed properly and get a
      control node for losetup to talk to, instead of creating a random number of
      device nodes in advance, regardless if they are ever used.
      
      This facility is there to hide the mess distros are creating with too
      modularized kernels, and just to hide that these modules are not
      compiled in; it is not there to paper over broken concepts.  Thanks! :)
      
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Miklos Szeredi <miklos@szeredi.hu>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Alasdair G Kergon <agk@redhat.com>
      Cc: Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
      Cc: Ian Kent <raven@themaw.net>
      Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
  2. 25 May 2010, 38 commits
    • fbdev: move FBIO_WAITFORVSYNC to linux/fb.h · 49c39b49
      Committed by Grazvydas Ignotas
      FBIO_WAITFORVSYNC is currently implemented by matroxfb, atyfb, intelfb and
      more.  All of them keep redefining the same FBIO_WAITFORVSYNC macro over
      and over again, so move it to linux/fb.h and clean up those duplicate
      defines.
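
      For reference, the ioctl ends up defined once in linux/fb.h, and user
      space can call it as in this hedged sketch (the /dev/fb0 path is just an
      example):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <linux/fb.h>

        int main(void)
        {
                int fd = open("/dev/fb0", O_RDWR);
                __u32 screen = 0;       /* display index; 0 for the first */

                /* Block until the next vertical sync on this framebuffer. */
                if (fd < 0 || ioctl(fd, FBIO_WAITFORVSYNC, &screen) < 0)
                        perror("FBIO_WAITFORVSYNC");
                return 0;
        }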
      Signed-off-by: Grazvydas Ignotas <notasas@gmail.com>
      Cc: Ville Syrjala <syrjala@sci.fi>
      Cc: Grant Likely <grant.likely@secretlab.ca>
      Cc: Maik Broemme <mbroemme@plusserver.de>
      Cc: Petr Vandrovec <vandrove@vc.cvut.cz>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
      Cc: "Hiremath, Vaibhav" <hvaibhav@ti.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fbdev: da8xx/omap-l1xx: implement double buffering · 1f9c3e1f
      Committed by Martin Ambrose
      This work includes the following:
      
      - Implement handler for FBIO_WAITFORVSYNC ioctl.
      
      - Allocate the data and palette buffers separately.  A consequence of
        this is that the palette and data loading is now done in different
        phases, and the LCD must be disabled temporarily after the palette is
        loaded; but this will only happen once after init and each time the
        palette is changed.  I think this is OK.
      
      - Allocate two (ping and pong) framebuffers from memory.
      
      - Add pan_display handler which toggles the LCDC DMA registers between
        the ping and pong buffers.
      Signed-off-by: Martin Ambrose <martin@ti.com>
      Cc: Chaithrika U S <chaithrika@ti.com>
      Cc: Sudhakar Rajashekhara <sudhakar.raj@ti.com>
      Cc: Krzysztof Helt <krzysztof.h1@poczta.fm>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lis3: interrupt handlers for 8bit wakeup and click events · 6d94d408
      Committed by Samu Onkalo
      Content for the 8-bit device threaded interrupt handlers.  Depending on
      the interrupt line and chip configuration, either the click or the
      wakeup / freefall handler is called.  In case of click, a BTN_ event is
      sent via the input device.  In case of wakeup or freefall, the input
      device's ABS_ events are updated immediately.
      
      It is still possible to configure interrupt line 1 for fast freefall
      detection and use the second line either for click or threshold based
      interrupts.  Or both lines can be used for click / threshold interrupts.
      
      The polled input device can be set to the stopped state and coordinate
      updates are still delivered via the input device using the
      interrupt-based method.  Polled mode and interrupt mode can also be used
      in parallel.
      
      BTN_ events are remapped based on existing axis remapping information.
      Signed-off-by: Samu Onkalo <samu.p.onkalo@nokia.com>
      Acked-by: Eric Piel <eric.piel@tremplin-utc.net>
      Cc: Daniel Mack <daniel@caiaq.de>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lis3: add skeletons for interrupt handlers · 92ba4fe4
      Committed by Samu Onkalo
      The original lis3 driver didn't provide interrupt handler(s) for click or
      threshold event handling.  This patch adds threaded handlers for one or
      two interrupt lines for the 8-bit device.  The actual content for
      interrupt handling is provided in a separate patch.
      Signed-off-by: Samu Onkalo <samu.p.onkalo@nokia.com>
      Tested-by: Daniel Mack <daniel@caiaq.de>
      Acked-by: Eric Piel <eric.piel@tremplin-utc.net>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lis3: introduce platform data for second ff / wu unit · 342c5f12
      Committed by Samu Onkalo
      The 8-bit device has two wakeup / free fall units.  It was not possible
      to configure the second unit.  This patch introduces a configuration
      entry in the platform data and corresponding changes to the 8-bit setup
      function.

      High pass filters were enabled by default.  The patch introduces a
      configuration option for the high pass filter cut-off frequency, and
      also the possibility to enable or disable the filter via platform data.
      Since the control is new and the default state was filter-enabled, the
      new option is used to disable the filter.  This way old platform data
      stays compatible with the change.
      Signed-off-by: Samu Onkalo <samu.p.onkalo@nokia.com>
      Acked-by: Eric Piel <eric.piel@tremplin-utc.net>
      Tested-by: Daniel Mack <daniel@caiaq.de>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: introduce common method to convert hex digits · 90378889
      Committed by Andy Shevchenko
      hex_to_bin() is a small helper which converts a hex digit to its actual
      value.  There are plenty of places where such functionality is needed.
      
      [akpm@linux-foundation.org: use tolower(), saving 3 bytes, test the more common case first - it's quicker]
      [akpm@linux-foundation.org: relocate tolower to make it even faster! (Joe)]
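
      A sketch of the helper, reflecting both notes above (digits are tested
      first, tolower() is applied only afterwards); close to, though not
      necessarily identical to, what was merged:

        #include <linux/ctype.h>

        /* Convert a hex digit to its value; returns -1 on a bad character. */
        int hex_to_bin(char ch)
        {
                if ((ch >= '0') && (ch <= '9'))
                        return ch - '0';
                ch = tolower(ch);
                if ((ch >= 'a') && (ch <= 'f'))
                        return ch - 'a' + 10;
                return -1;
        }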
      Signed-off-by: Andy Shevchenko <ext-andriy.shevchenko@nokia.com>
      Cc: Tilman Schmidt <tilman@imap.cc>
      Cc: Duncan Sands <duncan.sands@free.fr>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: "Richard Russon (FlatCap)" <ldm@flatcap.org>
      Cc: John W. Linville <linville@tuxdriver.com>
      Cc: Len Brown <lenb@kernel.org>
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • DYNAMIC_DEBUG: fix documentation errors · 2b2f68b5
      Committed by Florian Ragwitz
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Florian Ragwitz <rafl@debian.org>
      Cc: Jason Baron <jbaron@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • ratelimit: add ratelimit_state_init() · f40c396a
      Committed by OGAWA Hirofumi
      For now, all users of ratelimit_state allocate it statically, so
      DEFINE_RATELIMIT_STATE() is enough.  But I want to use ratelimit_state
      in fs, i.e.  per super_block, to suppress too many error reports.

      So this adds ratelimit_state_init() to initialize a dynamically
      allocated ratelimit_state, instead of open-coding it.
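
      A minimal sketch of such an initializer, assuming the ratelimit_state
      layout of that era (spinlock, interval, burst plus bookkeeping fields):

        static inline void ratelimit_state_init(struct ratelimit_state *rs,
                                                int interval, int burst)
        {
                spin_lock_init(&rs->lock);  /* what the static macro did */
                rs->interval = interval;
                rs->burst = burst;
                rs->printed = 0;
                rs->missed = 0;
                rs->begin = 0;
        }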
      Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • printk_ratelimited(): fix uninitialized spinlock · d8521fcc
      Committed by OGAWA Hirofumi
      The ratelimit_state initialization in printk_ratelimited() seems broken.
      This fixes it by using DEFINE_RATELIMIT_STATE(), which initializes the
      spinlock properly.
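
      For context, a sketch of the macro that does the static initialization
      (field names as in <linux/ratelimit.h> of the time):

        #define DEFINE_RATELIMIT_STATE(name, interval_init, burst_init)    \
                struct ratelimit_state name = {                            \
                        .lock     = __SPIN_LOCK_UNLOCKED(name.lock),       \
                        .interval = interval_init,                         \
                        .burst    = burst_init,                            \
                }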
      Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • asm-generic: don't warn that atomic_t is only 24 bit · 37682177
      Committed by Peter Fritzsche
      32-bit Sparc used to allow usage of only 24 bits of its atomic_t type.
      This was corrected in Linux 2.6.3 when Keith M Wesolowski changed the
      implementation to use the parisc approach of having an array of
      spinlocks to protect the atomic_t.
      
      These warnings were also removed from the sparc implementation when the
      new implementation was merged in BKrev:402e4949VThdc6D3iaosSFUgabMfvw, but
      the warning still remained in some other places without any 24-bit-only
      atomic_t implementation inside the kernel.
      
      We should remove these warnings to allow users to rely on the full 32-bit
      range of atomic_t.
      Signed-off-by: Peter Fritzsche <peter.fritzsche@gmx.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel.h: add pr_warn for symmetry to dev_warn, netdev_warn · fc62f2f1
      Committed by Joe Perches
      The current logging macros are
      pr_<level>, dev_<level>, netdev_<level>, and netif_<level>.
      pr_ uses warning; the others use warn.
      
      Standardize these logging macros a bit more by adding pr_warn and
      pr_warn_ratelimited.
      
      Right now, there are:
      
      $ for level in emerg alert crit err warn warning notice info ; do \
          for prefix in pr dev netdev netif ; do \
            echo -n "${prefix}_${level}:	`git grep -w "${prefix}_${level}" | wc -l`	" ; \
          done ; \
          echo ; \
        done
      pr_emerg: 	45	dev_emerg: 	4	netdev_emerg: 	1	netif_emerg: 	4
      pr_alert: 	24	dev_alert: 	36	netdev_alert: 	1	netif_alert: 	6
      pr_crit: 	24	dev_crit: 	22	netdev_crit: 	1	netif_crit: 	4
      pr_err:  	2013	dev_err: 	8467	netdev_err: 	267	netif_err: 	240
      pr_warn: 	0	dev_warn: 	1818	netdev_warn: 	126	netif_warn: 	23
      pr_warning: 	773	dev_warning: 	0	netdev_warning:	0	netif_warning: 	0
      pr_notice: 	148	dev_notice: 	111	netdev_notice: 	9	netif_notice: 	3
      pr_info: 	1717	dev_info: 	3007	netdev_info: 	101	netif_info: 	85
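
      A sketch of the added aliases; the simplest form just reuses the
      existing pr_warning and the ratelimited printk helper:

        #define pr_warn pr_warning
        #define pr_warn_ratelimited(fmt, ...)                              \
                printk_ratelimited(KERN_WARNING pr_fmt(fmt), ##__VA_ARGS__)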
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel-wide: replace USHORT_MAX, SHORT_MAX and SHORT_MIN with USHRT_MAX, SHRT_MAX and SHRT_MIN · 4be929be
      Committed by Alexey Dobriyan
      - C99 knows about USHRT_MAX/SHRT_MAX/SHRT_MIN, not
        USHORT_MAX/SHORT_MAX/SHORT_MIN.
      
      - Make SHRT_MIN of type s16, not int, for consistency.
      
      [akpm@linux-foundation.org: fix drivers/dma/timb_dma.c]
      [akpm@linux-foundation.org: fix security/keys/keyring.c]
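
      The kernel.h definitions after this change look essentially like:

        #define USHRT_MAX       ((u16)(~0U))
        #define SHRT_MAX        ((s16)(USHRT_MAX >> 1))
        #define SHRT_MIN        ((s16)(-SHRT_MAX - 1))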
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • endian: #define __BYTE_ORDER · b3b77c8c
      Committed by Joakim Tjernlund
      Linux does not define __BYTE_ORDER in its endian header files, which
      makes some header files bend over backwards to get at the current
      endianness.  Let's #define __BYTE_ORDER in big_endian.h/little_endian.h
      to make it easier for header files that are used in user space too.
      
      In userspace the convention is that
      
        1. _both_ __LITTLE_ENDIAN and __BIG_ENDIAN are defined,
        2. you have to test for e.g. __BYTE_ORDER == __BIG_ENDIAN.
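
      A purely illustrative user-space example of that convention (the helper
      name is made up):

        #include <endian.h>

        #if __BYTE_ORDER == __BIG_ENDIAN
        # define to_le32_example(x) __builtin_bswap32(x) /* swap on BE hosts */
        #else
        # define to_le32_example(x) (x)                  /* no-op on LE hosts */
        #endif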
      Signed-off-by: Joakim Tjernlund <Joakim.Tjernlund@transmode.se>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • err.h: add __must_check to error pointer handlers · e47103b1
      Committed by Jani Nikula
      Add __must_check to error pointer handlers to have the compiler warn about
      mistakes like:
      
      	if (err)
      		ERR_PTR(err);
      
      It found two bugs:
      
      Mar 12 Nikula Jani [PATCH] enclosure: fix error path - actually return ERR_PTR() on error
      Mar 12 Nikula Jani [PATCH] sunrpc: fix error path - actually return ERR_PTR() on error
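
      With the annotation, the buggy pattern above ("ERR_PTR(err);" with the
      result discarded) now triggers a compiler warning.  A sketch of the
      annotated helpers, following err.h:

        static inline void * __must_check ERR_PTR(long error)
        {
                return (void *) error;
        }

        static inline long __must_check PTR_ERR(const void *ptr)
        {
                return (long) ptr;
        }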
      Signed-off-by: Jani Nikula <ext-jani.1.nikula@nokia.com>
      Cc: Phil Carmody <ext-phil.2.carmody@nokia.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: make lowmem_page_address() use PFN_PHYS() for improved portability · c6f6b596
      Committed by Chris Metcalf
      This ensures that platforms with lowmem PAs above 32 bits work correctly
      by avoiding truncating the PA during a left shift.
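
      The change, roughly: PFN_PHYS() widens the pfn to a physical-address
      type before shifting, where the old open-coded shift could truncate:

        /* before (can truncate when physical addresses exceed 32 bits):
         *      return __va(page_to_pfn(page) << PAGE_SHIFT);
         */
        static __always_inline void *lowmem_page_address(struct page *page)
        {
                return __va(PFN_PHYS(page_to_pfn(page)));
        }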
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Barry Song <21cnbao@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mem-hotplug: fix potential race while building zonelist for new populated zone · 4eaf3f64
      Committed by Haicheng Li
      Add global mutex zonelists_mutex to fix the possible race:
      
           CPU0                                  CPU1                    CPU2
      (1) zone->present_pages += online_pages;
      (2)                                       build_all_zonelists();
      (3)                                                               alloc_page();
      (4)                                                               free_page();
      (5) build_all_zonelists();
      (6)   __build_all_zonelists();
      (7)     zone->pageset = alloc_percpu();
      
      In steps (3) and (4), zone->pageset still points to boot_pageset, so bad
      things may happen if 2+ nodes are in this state.  Even if only 1 node
      is accessing the boot_pageset, (3) may still consume too much memory
      and make the memory allocation in step (7) fail.

      Besides, serializing these steps ensures that alloc_percpu() in step (7)
      will never fail, since a fresh memory block was added in step (6).
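
      A hedged sketch of the serialization, with the online path simplified:

        static DEFINE_MUTEX(zonelists_mutex);

        /* memory-hotplug online path (simplified) */
        mutex_lock(&zonelists_mutex);
        zone->present_pages += onlined_pages;
        build_all_zonelists(NULL);      /* rebuild with the new pages */
        mutex_unlock(&zonelists_mutex);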
      
      [haicheng.li@linux.intel.com: hold zonelists_mutex when build_all_zonelists]
      Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Reviewed-by: Andi Kleen <andi.kleen@intel.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mem-hotplug: avoid multiple zones sharing same boot strapping boot_pageset · 1f522509
      Committed by Haicheng Li
      For each newly populated zone of a hot-added node, we need to update its
      pagesets with dynamically allocated per_cpu_pageset structs for all
      possible CPUs:

          1) Detach zone->pageset from the shared boot_pageset
             at the end of __build_all_zonelists().

          2) Use a mutex to protect zone->pageset while it is still
             shared in onlined_pages().

      Otherwise, multiple zones of different nodes would share the same
      bootstrapping boot_pageset for the same CPU, which finally causes the
      kernel panic below:
      
        ------------[ cut here ]------------
        kernel BUG at mm/page_alloc.c:1239!
        invalid opcode: 0000 [#1] SMP
        ...
        Call Trace:
         [<ffffffff811300c1>] __alloc_pages_nodemask+0x131/0x7b0
         [<ffffffff81162e67>] alloc_pages_current+0x87/0xd0
         [<ffffffff81128407>] __page_cache_alloc+0x67/0x70
         [<ffffffff811325f0>] __do_page_cache_readahead+0x120/0x260
         [<ffffffff81132751>] ra_submit+0x21/0x30
         [<ffffffff811329c6>] ondemand_readahead+0x166/0x2c0
         [<ffffffff81132ba0>] page_cache_async_readahead+0x80/0xa0
         [<ffffffff8112a0e4>] generic_file_aio_read+0x364/0x670
         [<ffffffff81266cfa>] nfs_file_read+0xca/0x130
         [<ffffffff8117b20a>] do_sync_read+0xfa/0x140
         [<ffffffff8117bf75>] vfs_read+0xb5/0x1a0
         [<ffffffff8117c151>] sys_read+0x51/0x80
         [<ffffffff8103c032>] system_call_fastpath+0x16/0x1b
        RIP  [<ffffffff8112ff13>] get_page_from_freelist+0x883/0x900
         RSP <ffff88000d1e78a8>
        ---[ end trace 4bda28328b9990db ]
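
      A hedged sketch of step (1); setup_pageset() and zone_batchsize() are
      internal mm/page_alloc.c helpers of that era:

        static void setup_zone_pageset(struct zone *zone)
        {
                int cpu;

                /* give the zone its own pagesets instead of boot_pageset */
                zone->pageset = alloc_percpu(struct per_cpu_pageset);
                for_each_possible_cpu(cpu)
                        setup_pageset(per_cpu_ptr(zone->pageset, cpu),
                                      zone_batchsize(zone));
        }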
      
      [akpm@linux-foundation.org: merge fix]
      Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Reviewed-by: Andi Kleen <andi.kleen@intel.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: fix NR_SECTION_ROOTS == 0 when using sparsemem extreme · 0faa5638
      Committed by Marcelo Roberto Jimenez
      Got this while compiling for ARM/SA1100:
      
      mm/sparse.c: In function '__section_nr':
      mm/sparse.c:135: warning: 'root' is used uninitialized in this function
      
      This patch follows Russell King's suggestion for a new calculation for
      NR_SECTION_ROOTS.  Thanks also to Sergei Shtylyov for pointing out the
      existence of the macro DIV_ROUND_UP.
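
      The essence of the fix:

        /* before: rounds down and can evaluate to 0
         *      #define NR_SECTION_ROOTS (NR_MEM_SECTIONS / SECTIONS_PER_ROOT)
         */
        #define NR_SECTION_ROOTS DIV_ROUND_UP(NR_MEM_SECTIONS, SECTIONS_PER_ROOT)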
      
      Atsushi Nemoto observed:
      : This fix doesn't just silence the warning - it fixes a real problem.
      :
      : Without this fix, mem_section[] might have 0 size so mem_section[0]
      : will share other variable area.  For example, I got:
      :
      : c030c700 b __warned.16478
      : c030c700 B mem_section
      : c030c701 b __warned.16483
      :
      : This might cause very strange behavior.  Your patch actually fixes it.
      Signed-off-by: Marcelo Roberto Jimenez <mroberto@cpti.cetuc.puc-rio.br>
      Cc: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Sergei Shtylyov <sshtylyov@mvista.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • highmem: remove unneeded #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT for debug_kmap_atomic() · ff3d58c2
      Committed by Akinobu Mita
      In f4112de6 ("mm: introduce
      debug_kmap_atomic") I said that debug_kmap_atomic() needs
      CONFIG_TRACE_IRQFLAGS_SUPPORT.
      
      It was wrong.  (I thought irqs_disabled() is only available when the
      architecture has CONFIG_TRACE_IRQFLAGS_SUPPORT)
      
      Remove the #ifdef CONFIG_TRACE_IRQFLAGS_SUPPORT check to enable
      kmap_atomic() debugging for the architectures which do not have
      CONFIG_TRACE_IRQFLAGS_SUPPORT.
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/gfp.h: fix coding style · fd23855e
      Committed by matt mooney
      Add parentheses in a define.  This doesn't change functionality.

      checkpatch errors:
      1) white space fixes
      2) add spaces after commas
      Signed-off-by: matt mooney <mfm@muteddisk.com>
      Cc: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • include/linux/gfp.h: spelling fixes · 263ff5d8
      Committed by matt mooney
      Fix minor spelling errors in a few comments; no code changes.
      Signed-off-by: matt mooney <mfm@muteddisk.com>
      Cc: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpu/mem hotplug: enable CPUs online before local memory online · cf23422b
      Committed by minskey guo
      Enable users to online CPUs even if the CPUs belong to a NUMA node which
      doesn't have onlined local memory.

      The zonelists (pg_data_t.node_zonelists[]) of a NUMA node are created
      either in the system boot/init period, or at the time of local memory
      online.  For a NUMA node without onlined local memory, its zonelists are
      not initialized at present.  As a result, any memory allocation executed
      by CPUs within this node will fail.  In fact, an out-of-memory error is
      triggered when attempting to online CPUs before memory comes online.

      This patch creates zonelists for such NUMA nodes, so that memory
      allocations for this node can fall back to other nodes.
      
      [akpm@linux-foundation.org: remove unneeded export]
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: minskey guo <chaohong.guo@intel.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmscan: remove isolate_pages callback scan control · 8b25c6d2
      Committed by Johannes Weiner
      For now we have global isolation vs.  memory control group isolation; do
      not allow the reclaim entry function to set an arbitrary page isolation
      callback, as we do not need that flexibility.
      
      And since we already pass around the group descriptor for the memory
      control group isolation case, just use it to decide which one of the two
      isolator functions to use.
      
      The decisions can be merged into nearby branches, so no extra cost there.
      In fact, we save the indirect calls.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: compaction: defer compaction using an exponential backoff when compaction fails · 4f92e258
      Committed by Mel Gorman
      The fragmentation index may indicate that a failure is due to external
      fragmentation, but after a compaction run completes it is still possible
      for an allocation to fail.  There are two obvious reasons why:
      
        o Page migration cannot move all pages so fragmentation remains
        o A suitable page may exist but watermarks are not met
      
      In the event of compaction followed by an allocation failure, this patch
      defers further compaction in the zone (1 << compact_defer_shift) times.
      If the next compaction attempt also fails, compact_defer_shift is
      increased up to a maximum of 6.  If compaction succeeds, the defer
      counters are reset again.
      
      The zone that is deferred is the first zone in the zonelist - i.e.  the
      preferred zone.  To defer compaction in the other zones, the information
      would need to be stored in the zonelist or implemented similar to the
      zonelist_cache.  This would impact the fast-paths and is not justified at
      this time.
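
      A hedged sketch of the backoff helpers described above (names follow the
      patch, details may differ):

        #define COMPACT_MAX_DEFER_SHIFT 6

        static inline void defer_compaction(struct zone *zone)
        {
                zone->compact_considered = 0;
                if (++zone->compact_defer_shift > COMPACT_MAX_DEFER_SHIFT)
                        zone->compact_defer_shift = COMPACT_MAX_DEFER_SHIFT;
        }

        /* true while allocations should still skip compaction in this zone */
        static inline bool compaction_deferred(struct zone *zone)
        {
                unsigned long defer_limit = 1UL << zone->compact_defer_shift;

                if (++zone->compact_considered > defer_limit)
                        zone->compact_considered = defer_limit; /* no overflow */

                return zone->compact_considered < defer_limit;
        }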
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: compaction: add a tunable that decides when memory should be compacted and... · 5e771905
      Committed by Mel Gorman
      mm: compaction: add a tunable that decides when memory should be compacted and when it should be reclaimed
      
      The kernel applies some heuristics when deciding if memory should be
      compacted or reclaimed to satisfy a high-order allocation.  One of these
      is based on the fragmentation index.  If the index is below 500, memory
      will not be compacted.  This choice is arbitrary and not based on data.
      To help optimise the system and set a sensible default for this value,
      this patch adds a sysctl, extfrag_threshold.  The kernel will only
      compact memory if the fragmentation index is above the extfrag_threshold.
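
      A hedged sketch of how the tunable gates direct compaction
      (fragmentation_index() lives in mm/vmstat.c; the surrounding control
      flow is simplified):

        int fragindex = fragmentation_index(zone, order);

        /* a low index means the failure is a lack of memory rather than
         * external fragmentation - reclaim instead of compacting */
        if (fragindex >= 0 && fragindex <= sysctl_extfrag_threshold)
                return COMPACT_SKIPPED;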
      
      [randy.dunlap@oracle.com: Fix build errors when proc fs is not configured]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: compaction: direct compact when a high-order allocation fails · 56de7263
      Committed by Mel Gorman
      Ordinarily when a high-order allocation fails, direct reclaim is entered
      to free pages to satisfy the allocation.  With this patch, it is
      determined if an allocation failed due to external fragmentation instead
      of low memory and if so, the calling process will compact until a suitable
      page is freed.  Compaction by moving pages in memory is considerably
      cheaper than paging out to disk and works where there are locked pages or
      no swap.  If compaction fails to free a page of a suitable size, then
      reclaim will still occur.
      
      Direct compaction returns as soon as possible.  As each block is
      compacted, it is checked if a suitable page has been freed and if so, it
      returns.
      
      [akpm@linux-foundation.org: Fix build errors]
      [aarcange@redhat.com: fix count_vm_event preempt in memory compaction direct reclaim]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: compaction: add /sys trigger for per-node memory compaction · ed4a6d7f
      Committed by Mel Gorman
      Add a per-node sysfs file called compact.  When the file is written to,
      each zone in that node is compacted.  The intention is that this would
      be used by something like a job scheduler in a batch system before a job
      starts, so that the job can allocate the maximum number of hugepages
      without significant start-up cost.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: compaction: add /proc trigger for memory compaction · 76ab0f53
      Committed by Mel Gorman
      Add a proc file /proc/sys/vm/compact_memory.  When an arbitrary value is
      written to the file, all zones are compacted.  The expected user of such a
      trigger is a job scheduler that prepares the system before the target
      application runs.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: compaction: memory compaction core · 748446bb
      Committed by Mel Gorman
      This patch is the core of a mechanism which compacts memory in a zone by
      relocating movable pages towards the end of the zone.
      
      A single compaction run involves a migration scanner and a free scanner.
      Both scanners operate on pageblock-sized areas in the zone.  The migration
      scanner starts at the bottom of the zone and searches for all movable
      pages within each area, isolating them onto a private list called
      migratelist.  The free scanner starts at the top of the zone, searches
      for suitable areas, and consumes the free pages within them, making them
      available to the migration scanner.  The pages isolated for migration
      are then migrated to the newly isolated free pages.
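
      A hedged sketch of the run loop; isolate_migratepages() and
      compaction_alloc() are internal mm/compaction.c helpers, and the
      migrate_pages() call reflects the signature of that era:

        /* one compaction run: the two scanners walk toward each other */
        while (cc->free_pfn > cc->migrate_pfn) {
                isolate_migratepages(zone, cc);      /* bottom-up scanner */
                migrate_pages(&cc->migratepages, compaction_alloc,
                              (unsigned long)cc, 0); /* feeds from the top */
                putback_lru_pages(&cc->migratepages);
        }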
      
      [aarcange@redhat.com: Fix unsafe optimisation]
      [mel@csn.ul.ie: do not schedule work on other CPUs for compaction]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: move definition for LRU isolation modes to a header · c175a0ce
      Committed by Mel Gorman
      Currently, vmscan.c defines the isolation modes for __isolate_lru_page().
      Memory compaction needs access to these modes for isolating pages for
      migration.  This patch exports them.
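
      The modes in question, essentially as they read at the time:

        #define ISOLATE_INACTIVE 0  /* Isolate inactive pages */
        #define ISOLATE_ACTIVE   1  /* Isolate active pages */
        #define ISOLATE_BOTH     2  /* Isolate both active and inactive pages */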
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: migration: avoid race between shift_arg_pages() and rmap_walk() during... · a8bef8ff
      Committed by Mel Gorman
      mm: migration: avoid race between shift_arg_pages() and rmap_walk() during migration by not migrating temporary stacks
      
      Page migration requires rmap to be able to find all ptes mapping a page
      at all times, otherwise the migration entry can be instantiated, but it
      is possible to leave one behind if the second rmap_walk fails to find
      the page.  If this page is later faulted, migration_entry_to_page() will
      call BUG because the page is locked, indicating that the page was
      migrated but the migration PTE was not cleaned up.  For example:
      
        kernel BUG at include/linux/swapops.h:105!
        invalid opcode: 0000 [#1] PREEMPT SMP
        ...
        Call Trace:
         [<ffffffff810e951a>] handle_mm_fault+0x3f8/0x76a
         [<ffffffff8130c7a2>] do_page_fault+0x44a/0x46e
         [<ffffffff813099b5>] page_fault+0x25/0x30
         [<ffffffff8114de33>] load_elf_binary+0x152a/0x192b
         [<ffffffff8111329b>] search_binary_handler+0x173/0x313
         [<ffffffff81114896>] do_execve+0x219/0x30a
         [<ffffffff8100a5c6>] sys_execve+0x43/0x5e
         [<ffffffff8100320a>] stub_execve+0x6a/0xc0
        RIP  [<ffffffff811094ff>] migration_entry_wait+0xc1/0x129
      
      There is a race between shift_arg_pages and migration that triggers this
      bug.  A temporary stack is set up during exec and later moved.  If
      migration moves a page in the temporary stack and the VMA is then removed
      before migration completes, the migration PTE may not be found, leading
      to a BUG when the stack is faulted.
      
      This patch causes pages within the temporary stack during exec to be
      skipped by migration.  It does this by marking the VMA covering the
      temporary stack with an otherwise impossible combination of VMA flags.
      These flags are cleared when the temporary stack is moved to its final
      location.
      
      [kamezawa.hiroyu@jp.fujitsu.com: idea for having migration skip temporary stacks]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: migration: share the anon_vma ref counts between KSM and page migration · 7f60c214
      Committed by Mel Gorman
      For clarity of review, KSM and page migration have separate refcounts on
      the anon_vma.  While clear, this is a waste of memory.  This patch gets
      KSM and page migration to share their toys in a spirit of harmony.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: migration: take a reference to the anon_vma before migrating · 3f6c8272
      Committed by Mel Gorman
      This patchset is a memory compaction mechanism that reduces external
      fragmentation by moving GFP_MOVABLE pages to a smaller number of
      pageblocks.  The term "compaction" was chosen as there are a number of
      mechanisms, not mutually exclusive, that can be used to defragment
      memory.  For example, lumpy reclaim is a form of defragmentation, as was
      slub "defragmentation" (really a form of targeted reclaim).  Hence, this
      is called "compaction" to distinguish it from other forms of
      defragmentation.
      
      In this implementation, a full compaction run involves two scanners
      operating within a zone - a migration and a free scanner.  The migration
      scanner starts at the beginning of a zone and finds all movable pages
      within one pageblock_nr_pages-sized area and isolates them on a
      migratepages list.  The free scanner begins at the end of the zone and
      searches on a per-area basis for enough free pages to migrate all the
      pages on the migratepages list.  As each area is respectively migrated or
      exhausted of free pages, the scanners are advanced one area.  A compaction
      run completes within a zone when the two scanners meet.
      
      This method is a bit primitive but is easy to understand, and greater
      sophistication would require maintaining counters on a per-pageblock
      basis.  That would have a big impact on allocator fast-paths just to
      improve compaction, which is a poor trade-off.
      
      It also does not try to relocate virtually contiguous pages to be
      physically contiguous.  However, assuming transparent hugepages were in
      use, a hypothetical khugepaged might reuse compaction code to isolate
      free pages, split them and relocate userspace pages for promotion.
      
      Memory compaction can be triggered in one of three ways.  It may be
      triggered explicitly by writing any value to /proc/sys/vm/compact_memory
      and compacting all of memory.  It can be triggered on a per-node basis by
      writing any value to /sys/devices/system/node/nodeN/compact where N is the
      node ID to be compacted.  When a process fails to allocate a high-order
      page, it may compact memory in an attempt to satisfy the allocation
      instead of entering direct reclaim.  Explicit compaction does not finish
      until the two scanners meet and direct compaction ends if a suitable page
      becomes available that would meet watermarks.
      
      The series is in 14 patches.  The first three are not "core" to the series
      but are important pre-requisites.
      
      Patch 1 reference counts anon_vma for rmap_walk_anon(). Without this
      	patch, it's possible to use anon_vma after free if the caller is
      	not holding a VMA or mmap_sem for the pages in question. While
      	there should be no existing user that causes this problem,
      	it's a requirement for memory compaction to be stable. The patch
      	is at the start of the series for bisection reasons.
      Patch 2 merges the KSM and migrate counts. It could be merged with patch 1
      	but would be slightly harder to review.
      Patch 3 skips over unmapped anon pages during migration as there are no
      	guarantees about the anon_vma existing. There is a window between
      	when a page was isolated and migration started during which anon_vma
      	could disappear.
      Patch 4 notes that PageSwapCache pages can still be migrated even if they
      	are unmapped.
      Patch 5 allows CONFIG_MIGRATION to be set without CONFIG_NUMA
      Patch 6 exports an "unusable free space index" via debugfs. It's
      	a measure of external fragmentation that takes the size of the
      	allocation request into account. It can also be calculated from
      	userspace so can be dropped if requested
      Patch 7 exports a "fragmentation index" which only has meaning when an
      	allocation request fails. It determines if an allocation failure
      	would be due to a lack of memory or external fragmentation.
      Patch 8 moves the definition for LRU isolation modes for use by compaction
      Patch 9 is the compaction mechanism although it's unreachable at this point
      Patch 10 adds a means of compacting all of memory with a proc trigger
      Patch 11 adds a means of compacting a specific node with a sysfs trigger
      Patch 12 adds "direct compaction" before "direct reclaim" if it is
      	determined there is a good chance of success.
      Patch 13 adds a sysctl that allows tuning of the threshold at which the
      	kernel will compact or direct reclaim
      Patch 14 temporarily disables compaction if an allocation failure occurs
      	after compaction.
      
      Testing of compaction was in three stages.  For the test, debugging,
      preempt, the sleep watchdog and lockdep were all enabled but nothing nasty
      popped out.  min_free_kbytes was tuned as recommended by hugeadm to help
      fragmentation avoidance and high-order allocations.  It was tested on X86,
      X86-64 and PPC64.
      
      The first test represents one of the easiest cases that can be faced for
      lumpy reclaim or memory compaction.
      
      1. Machine freshly booted and configured for hugepage usage with
      	a) hugeadm --create-global-mounts
      	b) hugeadm --pool-pages-max DEFAULT:8G
      	c) hugeadm --set-recommended-min_free_kbytes
      	d) hugeadm --set-recommended-shmmax
      
      	The min_free_kbytes here is important. Anti-fragmentation works best
      	when pageblocks don't mix. hugeadm knows how to calculate a value that
      	will significantly reduce the worst of external-fragmentation-related
      	events as reported by the mm_page_alloc_extfrag tracepoint.
      
      2. Load up memory
      	a) Start updatedb
      	b) Create X files of pagesize*128 in size in parallel. Wait
      	   until the files are created. By parallel, I mean that 4096
      	   instances of dd were launched, one after the other using &. The
      	   crude objective being to mix filesystem metadata allocations
      	   with the buffer cache.
      	c) Delete every second file so that pageblocks are likely to
      	   have holes
      	d) kill updatedb if it's still running
      
      	At this point, the system is quiet, memory is full but it's full with
      	clean filesystem metadata and clean buffer cache that is unmapped.
      	This is readily migrated or discarded so you'd expect lumpy reclaim
      	to have no significant advantage over compaction but this is at
      	the POC stage.
      
      3. In increments, attempt to allocate 5% of memory as hugepages.
      	   Measure how long it took, how successful it was, how many
      	   direct reclaims took place and how many compactions. Note
      	   the compaction figures might not fully add up as compactions
      	   can take place for orders other than the hugepage size.
      
      X86				vanilla		compaction
      Final page count                    913                916 (attempted 1002)
      pages reclaimed                   68296               9791
      
      X86-64				vanilla		compaction
      Final page count:                   901                902 (attempted 1002)
      Total pages reclaimed:           112599              53234
      
      PPC64				vanilla		compaction
      Final page count:                    93                 94 (attempted 110)
      Total pages reclaimed:           103216              61838
      
      There was not a dramatic improvement in success rates but it wouldn't be
      expected in this case either.  What was important is that fewer pages were
      reclaimed in all cases reducing the amount of IO required to satisfy a
      huge page allocation.
      
      The second tests were all performance related - kernbench, netperf, iozone
      and sysbench.  None showed anything too remarkable.
      
      The last test was a high-order allocation stress test.  Many kernel
      compiles are started to fill memory with a pressured mix of unmovable and
      movable allocations.  During this, an attempt is made to allocate 90% of
      memory as huge pages - one at a time with small delays between attempts to
      avoid flooding the IO queue.
      
                                                   vanilla   compaction
      Percentage of request allocated X86               98           99
      Percentage of request allocated X86-64            95           98
      Percentage of request allocated PPC64             55           70
      
      This patch:
      
      rmap_walk_anon() does not use page_lock_anon_vma() for looking up and
      locking an anon_vma and it does not appear to have sufficient locking to
      ensure the anon_vma does not disappear from under it.
      
      This patch copies an approach used by KSM to take a reference on the
      anon_vma while pages are being migrated.  This should prevent rmap_walk()
      running into nasty surprises later because anon_vma has been freed.
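
      A hedged sketch of the migration-side pinning (the refcount field name
      here is illustrative):

        struct anon_vma *anon_vma = NULL;

        if (PageAnon(page)) {
                /* hold RCU plus a reference so rmap_walk() stays safe */
                rcu_read_lock();
                anon_vma = page_anon_vma(page);
                atomic_inc(&anon_vma->migrate_refcount);
        }

        /* ... unmap, copy and remap the page ... */

        if (anon_vma) {
                atomic_dec(&anon_vma->migrate_refcount);
                rcu_read_unlock();
        }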
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpuset,mm: fix no node to alloc memory when changing cpuset's mems · c0ff7453
      Committed by Miao Xie
      Before applying this patch, cpuset updates task->mems_allowed and
      mempolicy by setting all new bits in the nodemask first, and clearing
      all old disallowed bits later.  But along the way, the allocator may
      find that there is no node from which to allocate memory.

      The reason is that when cpuset rebinds the task's mempolicy, it clears
      the nodes which the allocator can allocate pages on; for example:
      
      (mpol: mempolicy)
      	task1			task1's mpol	task2
      	alloc page		1
      	  alloc on node0? NO	1
      				1		change mems from 1 to 0
      				1		rebind task1's mpol
      				0-1		  set new bits
      				0	  	  clear disallowed bits
      	  alloc on node1? NO	0
      	  ...
      	can't alloc page
      	  goto oom
      
      This patch fixes the problem by expanding the node range first (setting
      the newly allowed bits) and shrinking it lazily (clearing the newly
      disallowed bits).  We use a variable to tell the write-side task that a
      read-side task is reading the nodemask, and the write-side task clears
      the newly disallowed nodes after the read-side task finishes its current
      memory allocation.
      
      [akpm@linux-foundation.org: fix spello]
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Paul Menage <menage@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Ravikiran Thirumalai <kiran@scalex86.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mempolicy: restructure rebinding-mempolicy functions · 708c1bbc
      Committed by Miao Xie
      Nick Piggin reported that the allocator may see an empty nodemask when
      changing a cpuset's mems[1].  It happens only on kernels that do not do
      atomic nodemask_t stores.  (MAX_NUMNODES > BITS_PER_LONG)

      But I found that there is also a problem on kernels that can do atomic
      nodemask_t stores.  The problem is that the allocator can't find a node
      to allocate pages from when changing a cpuset's mems, even though there
      is a lot of free memory.  The reason is like this:
      
      (mpol: mempolicy)
      	task1			task1's mpol	task2
      	alloc page		1
      	  alloc on node0? NO	1
      				1		change mems from 1 to 0
      				1		rebind task1's mpol
      				0-1		  set new bits
      				0	  	  clear disallowed bits
      	  alloc on node1? NO	0
      	  ...
      	can't alloc page
      	  goto oom
      
      I can reproduce it with the attached program by the following steps:
      
      # mkdir /dev/cpuset
      # mount -t cpuset cpuset /dev/cpuset
      # mkdir /dev/cpuset/1
      # echo `cat /dev/cpuset/cpus` > /dev/cpuset/1/cpus
      # echo `cat /dev/cpuset/mems` > /dev/cpuset/1/mems
      # echo $$ > /dev/cpuset/1/tasks
      # numactl --membind=`cat /dev/cpuset/mems` ./cpuset_mem_hog <nr_tasks> &
         <nr_tasks> = max(nr_cpus - 1, 1)
      # killall -s SIGUSR1 cpuset_mem_hog
      # ./change_mems.sh
      
      Several hours later, an OOM will happen even though there is a lot of
      free memory.
      
      This patchset fixes the problem by expanding the node range first
      (setting the newly allowed bits) and shrinking it lazily (clearing the
      newly disallowed bits).  We use a variable to tell the write-side task
      that a read-side task is reading the nodemask, and the write-side task
      clears the newly disallowed nodes after the read-side task finishes its
      current memory allocation.
      
      This patch:
      
      In order to fix the case of no node to allocate memory from, when we
      want to update mempolicy and mems_allowed, we expand the set of nodes
      first (set all the newly allowed nodes) and shrink the set of nodes
      lazily (clear the disallowed nodes).  But the mempolicy's rebind
      functions may break the expanding.

      So we restructure the mempolicy's rebind functions and split the rebind
      work into two steps, just like the update of cpuset's mems: the 1st step
      expands the set of the mempolicy's nodes, and the 2nd step shrinks it.
      This is used when there is no real lock to protect the mempolicy on the
      read side.  Otherwise we can do the rebind work at once.
      
      In order to implement it, we define
      
      	enum mpol_rebind_step {
      		MPOL_REBIND_ONCE,
      		MPOL_REBIND_STEP1,
      		MPOL_REBIND_STEP2,
      		MPOL_REBIND_NSTEP,
      	};
      
      If the mempolicy needn't be updated by two steps, we can pass
      MPOL_REBIND_ONCE to the rebind functions.  Otherwise we can pass
      MPOL_REBIND_STEP1 to do the first step of the rebind work and
      MPOL_REBIND_STEP2 to do the second step.
      
      Besides that, a long time may pass between these two steps, and we have
      to release the lock that protects mempolicy and mems_allowed.  When we
      take the lock again, we must check whether the current mempolicy is in
      the middle of being rebound (the first step has been done) or not,
      because the task may allocate a new mempolicy while we don't hold the
      lock.  So we define the following flag to identify it:
      
      #define MPOL_F_REBINDING (1 << 2)
      
      The new functions will be used in the next patch.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Paul Menage <menage@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Cc: Ravikiran Thirumalai <kiran@scalex86.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove return value of putback_lru_pages() · e13861d8
      Committed by Minchan Kim
      putback_lru_page() can never fail, so the count of "the number of pages
      put back" doesn't matter.

      In addition, callers of this function don't use the return value.

      Let's remove the unnecessary code.
      Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: insert tmpfs cache pages to inactive list at first · e9d6c157
      Committed by KOSAKI Motohiro
      Shaohua Li reported that a parallel file copy on tmpfs can lead to the
      OOM killer.  This is a regression caused by commit 9ff473b9 ("vmscan:
      evict streaming IO first").  Wow, it is a 2-year-old patch!

      Currently, tmpfs file cache is inserted into the active list at first.
      This means that the insertion doesn't only increase the number of pages
      in the anon LRU, it also reduces the anon scanning ratio.  Therefore,
      vmscan gets totally confused.  It scans almost only the file LRU even
      though the system has plenty of unused tmpfs pages.
      
      Historically, lru_cache_add_active_anon() was used for two reasons:
      1) to prioritize shmem pages over regular file cache;
      2) to avoid reclaim priority inversion of used-once pages.

      But we've lost both motivations, because (1) now we have separate anon
      and file LRU lists, so inserting into the active list doesn't provide
      such prioritization, and (2) in the past, one pte access bit would cause
      page activation, so inserting into the inactive list with the pte access
      bit set meant higher priority than inserting into the active list.  That
      priority inversion could lead to unintended LRU churn, but it was
      already solved by commit 64574746 (vmscan: detect mapped file pages used
      only once).  (Thanks Hannes, you are great!)

      Thus, we can now use lru_cache_add_anon() instead.
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reported-by: Shaohua Li <shaohua.li@intel.com>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Reviewed-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>