1. 31 March 2014, 1 commit
  2. 29 March 2014, 3 commits
  3. 28 March 2014, 2 commits
  4. 25 March 2014, 1 commit
  5. 21 March 2014, 4 commits
    • mm: fix swapops.h:131 bug if remap_file_pages raced migration · 7e09e738
      Committed by Hugh Dickins
      Add remove_linear_migration_ptes_from_nonlinear(), to fix an interesting
      little include/linux/swapops.h:131 BUG_ON(!PageLocked) found by trinity:
      indicating that remove_migration_ptes() failed to find one of the
      migration entries that was temporarily inserted.
      
      The problem comes from remap_file_pages()'s switch from vma_interval_tree
      (good for inserting the migration entry) to i_mmap_nonlinear list (no good
      for locating it again); but can only be a problem if the remap_file_pages()
      range does not cover the whole of the vma (zap_pte() clears the range).
      
      remove_migration_ptes() needs a file_nonlinear method to go down the
      i_mmap_nonlinear list, applying linear location to look for migration
      entries in those vmas too, just in case there was this race.
      
      The file_nonlinear method does need rmap_walk_control.arg to do this;
      but it never needed vma passed in - vma comes from its own iteration.
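      
      A minimal sketch of how the pieces might fit together, assuming the
      3.14-era struct rmap_walk_control layout (the hook name is the one
      introduced above; everything else is illustrative, not the exact patch):
      
        static void remove_migration_ptes(struct page *old, struct page *new)
        {
            struct rmap_walk_control rwc = {
                .rmap_one       = remove_migration_pte,
                .arg            = old, /* the page whose entries we look for */
                .file_nonlinear = remove_linear_migration_ptes_from_nonlinear,
            };
      
            /* walks anon_vma or vma_interval_tree as before, and now also
             * the i_mmap_nonlinear list via .file_nonlinear */
            rmap_walk(new, &rwc);
        }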
      Reported-and-tested-by: Dave Jones <davej@redhat.com>
      Reported-and-tested-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rcu: Provide grace-period piggybacking API · 765a3f4f
      Committed by Paul E. McKenney
      The following pattern is currently not well supported by RCU:
      
      1.	Make data element inaccessible to RCU readers.
      
      2.	Do work that probably lasts for more than one grace period.
      
      3.	Do something to make sure RCU readers in flight before #1 above
      	have completed.
      
      Here are some things that could currently be done:
      
      a.	Do a synchronize_rcu() unconditionally at either #1 or #3 above.
      	This works, but imposes needless work and latency.
      
      b.	Post an RCU callback at #1 above that does a wakeup, then
      	wait for the wakeup at #3.  This works well, but likely results
      	in an extra unneeded grace period.  Open-coding this is also
      	trickier than it ought to be.
      
      This commit therefore adds get_state_synchronize_rcu() and
      cond_synchronize_rcu() APIs.  Call get_state_synchronize_rcu() at #1
      above and pass its return value to cond_synchronize_rcu() at #3 above.
      This results in a call to synchronize_rcu() if no grace period has
      elapsed between #1 and #3, but requires only a load, comparison, and
      memory barrier if a full grace period did elapse.
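      
      A minimal usage sketch; only the two new RCU calls are from this commit,
      the data structure and teardown routine are hypothetical:
      
        static void retire_element(struct foo *p)
        {
            unsigned long gp_snap;
      
            list_del_rcu(&p->list);                 /* #1: make it unreachable */
            gp_snap = get_state_synchronize_rcu();  /* snapshot grace-period state */
      
            do_long_running_teardown(p);            /* #2: work spanning >= 1 GP */
      
            cond_synchronize_rcu(gp_snap);          /* #3: blocks only if needed */
            kfree(p);
        }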
      Requested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
    • Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC · 8c90487c
      Committed by Dave Jones
      Rename TAINT_UNSAFE_SMP to TAINT_CPU_OUT_OF_SPEC, so we can repurpose
      the flag to cover a wider range of ways in which the CPU can be
      pushed beyond its warranty.
      Signed-off-by: Dave Jones <davej@fedoraproject.org>
      Link: http://lkml.kernel.org/r/20140226154949.GA770@redhat.com
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
    • tracing: Fix array size mismatch in format string · 87291347
      Committed by Vaibhav Nagarnaik
      In event format strings, the array size is reported in two places:
      once in the array subscript and again via the "size:" attribute. The
      two values can disagree.
      
      For example, in sched:sched_switch the prev_comm and next_comm
      character arrays have a subscript of [32], whereas the actual field
      size is 16.
      
      name: sched_switch
      ID: 301
      format:
              field:unsigned short common_type;       offset:0;       size:2; signed:0;
              field:unsigned char common_flags;       offset:2;       size:1; signed:0;
              field:unsigned char common_preempt_count;       offset:3;       size:1;signed:0;
              field:int common_pid;   offset:4;       size:4; signed:1;
      
              field:char prev_comm[32];       offset:8;       size:16;        signed:1;
              field:pid_t prev_pid;   offset:24;      size:4; signed:1;
              field:int prev_prio;    offset:28;      size:4; signed:1;
              field:long prev_state;  offset:32;      size:8; signed:1;
              field:char next_comm[32];       offset:40;      size:16;        signed:1;
              field:pid_t next_pid;   offset:56;      size:4; signed:1;
              field:int next_prio;    offset:60;      size:4; signed:1;
      
      After bisection, the following commit was blamed:
      92edca07 tracing: Use direct field, type and system names
      
      This commit removes the duplication of strings for field->name and
      field->type assuming that all the strings passed in
      __trace_define_field() are immutable. This is not true for arrays, where
      the type string is built in the shared event_storage buffer, so
      field->type for all array fields points to event_storage.
      
      Use __stringify() to create a string constant for the type string.
      
      Also, get rid of event_storage and event_storage_mutex that are not
      needed anymore.
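      
      The idea, roughly (a hedged illustration rather than the exact tracing
      macro; ARRAY_TYPE_STR is a made-up name): __stringify() turns the element
      count into part of a compile-time string literal, so no shared scratch
      buffer is needed:
      
        #include <linux/stringify.h>
      
        /* "char" "[" "16" "]" -> the immutable constant "char[16]",
         * assuming TASK_COMM_LEN expands to 16; field->type can then
         * point at the literal directly. */
        #define ARRAY_TYPE_STR(type, len)   #type "[" __stringify(len) "]"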
      
      An added benefit is that this reduces the per-event overhead a bit more:
      
         text    data     bss     dec     hex filename
      8424787 2036472 1302528 11763787         b3804b vmlinux
      8420814 2036408 1302528 11759750         b37086 vmlinux.patched
      
      Link: http://lkml.kernel.org/r/1392349908-29685-1-git-send-email-vnagarnaik@google.com
      
      Cc: Laurent Chavey <chavey@google.com>
      Cc: stable@vger.kernel.org # 3.10+
      Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  6. 20 March 2014, 3 commits
  7. 19 March 2014, 2 commits
    • libata, libsas: kill pm_result and related cleanup · bc6e7c4b
      Committed by Dan Williams
      Tejun says:
        "At least for libata, worrying about suspend/resume failures don't make
         whole lot of sense.  If suspend failed, just proceed with suspend.  If
         the device can't be woken up afterwards, that's that.  There isn't
         anything we could have done differently anyway.  The same for resume, if
         spinup fails, the device is dud and the following commands will invoke
         EH actions and will eventually fail.  Again, there really isn't any
         *choice* to make.  Just making sure the errors are handled gracefully
         (ie. don't crash) and the following commands are handled correctly
         should be enough."
      
      The only libata user that actually cares about the result from a suspend
      operation is libsas.  However, it only cares about whether queuing a new
      operation collides with an in-flight one.  All libsas does with the
      error is retry, but we can just let libata wait for the previous
      operation before continuing.
      
      Other cleanups include:
      1/ Unifying all ata port pm operations on an ata_port_pm_ prefix
      2/ Marking all ata port pm helper routines as returning void, only
         ata_port_pm_ entry points need to fake a 0 return value.
      3/ Killing ata_port_{suspend|resume}_common() in favor of calling
         ata_port_request_pm() directly
      4/ Killing the wrappers that just do a to_ata_port() conversion
      5/ Clearly marking the entry points that do async operations with an
        _async suffix.
      
      Reference: http://marc.info/?l=linux-scsi&m=138995409532286&w=2
      
      Cc: Phillip Susi <psusi@ubuntu.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Suggested-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Todd Brandt <todd.e.brandt@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • net: cdc_ncm: fix control message ordering · ff0992e9
      Committed by Bjørn Mork
      This is a context-modified revert of commit 6a9612e2
      ("net: cdc_ncm: remove ncm_parm field"), which introduced
      a NCM specification violation, causing setup errors for
      some devices. These errors resulted in the device and
      host disagreeing about shared settings, with complete
      failure to communicate as the end result.
      
      The NCM specification requires that many of the NCM-specific
      control requests be sent only while the NCM Data Interface
      is in alternate setting 0. Reverting the commit ensures that
      we follow this requirement.
      
      Fixes: 6a9612e2 ("net: cdc_ncm: remove ncm_parm field")
      Reported-and-tested-by: Pasi Kärkkäinen <pasik@iki.fi>
      Reported-by: Thomas Schäfer <tschaefer@t-online.de>
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 11 March 2014, 3 commits
  9. 10 March 2014, 3 commits
    • get rid of fget_light() · bd2a31d5
      Committed by Al Viro
      Instead of returning the flags by reference, we can just have the
      low-level primitive return those in lower bits of unsigned long,
      with struct file * derived from the rest.
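      
      A sketch of the packing trick, close in spirit to what include/linux/file.h
      ended up with (exact names and flag values may differ):
      
        struct fd {
            struct file *file;
            unsigned int flags;     /* e.g. "needs fput" in bit 0 */
        };
      
        static inline struct fd __to_fd(unsigned long v)
        {
            /* struct file pointers are at least 4-byte aligned, so the
             * two low bits are free to carry the flags */
            return (struct fd){ (struct file *)(v & ~3UL), v & 3 };
        }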
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • vfs: atomic f_pos accesses as per POSIX · 9c225f26
      Committed by Linus Torvalds
      Our write() system call has always been atomic in the sense that you get
      the expected thread-safe contiguous write, but we haven't actually
      guaranteed that concurrent writes are serialized wrt f_pos accesses, so
      threads (or processes) that share a file descriptor and use "write()"
      concurrently would quite likely overwrite each other's data.
      
      This violates POSIX.1-2008/SUSv4 Section XSI 2.9.7 that says:
      
       "2.9.7 Thread Interactions with Regular File Operations
      
        All of the following functions shall be atomic with respect to each
        other in the effects specified in POSIX.1-2008 when they operate on
        regular files or symbolic links: [...]"
      
      and one of the effects is the file position update.
      
      This unprotected file position behavior is not new behavior, and nobody
      has ever cared.  Until now.  Yongzhi Pan reported unexpected behavior to
      Michael Kerrisk that was due to this.
      
      This resolves the issue with a f_pos-specific lock that is taken by
      read/write/lseek on file descriptors that may be shared across threads
      or processes.
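      
      A userspace sketch of the scenario this addresses (file name and line
      contents are made up; two threads share one file description):
      
        #include <pthread.h>
        #include <string.h>
        #include <unistd.h>
        #include <fcntl.h>
      
        static int fd;
      
        static void *writer(void *arg)
        {
            const char *line = arg;
            for (int i = 0; i < 1000; i++)
                write(fd, line, strlen(line));  /* shared f_pos */
            return NULL;
        }
      
        int main(void)
        {
            pthread_t a, b;
      
            fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            pthread_create(&a, NULL, writer, "aaaaaaaa\n");
            pthread_create(&b, NULL, writer, "bbbbbbbb\n");
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            close(fd);
            /* With serialized f_pos updates (and barring short writes) the
             * file ends up exactly 2000 lines long; without them, racing
             * position updates can make the writers overwrite each other. */
            return 0;
        }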
      Reported-by: Yongzhi Pan <panyongzhi@gmail.com>
      Reported-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • selinux: add gfp argument to security_xfrm_policy_alloc and fix callers · 52a4c640
      Committed by Nikolay Aleksandrov
      security_xfrm_policy_alloc can be called in atomic context so the
      allocation should be done with GFP_ATOMIC. Add an argument to let the
      callers choose the appropriate way. In order to do so a gfp argument
      needs to be added to the method xfrm_policy_alloc_security in struct
      security_operations and to the internal function
      selinux_xfrm_alloc_user. After that switch to GFP_ATOMIC in the atomic
      callers and leave GFP_KERNEL as before for the rest.
      The path that needed the gfp argument addition is:
      security_xfrm_policy_alloc -> security_ops.xfrm_policy_alloc_security ->
      all users of xfrm_policy_alloc_security (e.g. selinux_xfrm_policy_alloc) ->
      selinux_xfrm_alloc_user (here the allocation used to be GFP_KERNEL only)
      
      Now adding a gfp argument to selinux_xfrm_alloc_user requires us to also
      add it to security_context_to_sid which is used inside and prior to this
      patch did only GFP_KERNEL allocation. So add gfp argument to
      security_context_to_sid and adjust all of its callers as well.
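      
      The resulting interface looks roughly like this (a sketch based on the
      description above; argument names are illustrative):
      
        int security_xfrm_policy_alloc(struct xfrm_sec_ctx **ctxp,
                                       struct xfrm_user_sec_ctx *sec_ctx,
                                       gfp_t gfp);
      
        /* process context, may sleep */
        err = security_xfrm_policy_alloc(&ctx, uctx, GFP_KERNEL);
      
        /* atomic context, e.g. under a lock in the state lookup path */
        err = security_xfrm_policy_alloc(&ctx, uctx, GFP_ATOMIC);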
      
      CC: Paul Moore <paul@paul-moore.com>
      CC: Dave Jones <davej@redhat.com>
      CC: Steffen Klassert <steffen.klassert@secunet.com>
      CC: Fan Du <fan.du@windriver.com>
      CC: David S. Miller <davem@davemloft.net>
      CC: LSM list <linux-security-module@vger.kernel.org>
      CC: SELinux list <selinux@tycho.nsa.gov>
      Signed-off-by: Nikolay Aleksandrov <nikolay@redhat.com>
      Acked-by: Paul Moore <paul@paul-moore.com>
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
  10. 07 March 2014, 3 commits
    • workqueue: remove PREPARE_[DELAYED_]WORK() · f073f922
      Committed by Tejun Heo
      Peter Hurley noticed that since a2c1c57b ("workqueue: consider
      work function when searching for busy work items"), a work item which
      gets assigned a different work function would break out of the
      non-reentrancy guarantee as workqueue would consider it a different
      work item.
      
      This is fragile and extremely subtle.  PREPARE_[DELAYED_]WORK() have
      never been widely used and their semantics have always been somewhat
      iffy.  If the work item is known not to be on the queue when
      PREPARE_WORK() is called, there's no difference from using
      INIT_WORK().  If the work item may be queued at the time of
      PREPARE_WORK(), we can't really tell whether the old or new function
      will be executed the next time.
      
      We really don't want this level of subtlety in workqueue interface for
      such marginal use cases.  The previous patches converted all existing
      users away from PREPARE_[DELAYED_]WORK().  Let's remove them.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Hurley <peter@hurleysoftware.com>
      Link: http://lkml.kernel.org/g/1392493119-9277-1-git-send-email-peter@hurleysoftware.com
    • nvme: don't use PREPARE_WORK · 9ca97374
      Committed by Tejun Heo
      PREPARE_[DELAYED_]WORK() are being phased out.  They have few users
      and a nasty surprise in terms of reentrancy guarantee as workqueue
      considers work items to be different if they don't have the same work
      function.
      
      nvme_dev->reset_work is multiplexed with multiple work functions.
      Introduce nvme_reset_workfn() which invokes nvme_dev->reset_workfn and
      always use it as the work function and update the users to set the
      ->reset_workfn field instead of overriding the work function using
      PREPARE_WORK().
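      
      The resulting pattern is roughly as follows (a sketch based on the
      description above; the handler and workqueue names are assumptions
      and may differ from the actual patch):
      
        static void nvme_reset_workfn(struct work_struct *work)
        {
            struct nvme_dev *dev = container_of(work, struct nvme_dev, reset_work);
      
            dev->reset_workfn(work);    /* dispatch to the current handler */
        }
      
        /* setup: the work function is fixed once ... */
        INIT_WORK(&dev->reset_work, nvme_reset_workfn);
      
        /* ... and callers only change the target, instead of PREPARE_WORK() */
        dev->reset_workfn = nvme_reset_failed_dev;
        queue_work(nvme_workq, &dev->reset_work);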
      
      It would probably be best to route this with other related updates
      through the workqueue tree.
      
      Compile tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Matthew Wilcox <willy@linux.intel.com>
      Cc: linux-nvme@lists.infradead.org
    • firewire: don't use PREPARE_DELAYED_WORK · 70044d71
      Committed by Tejun Heo
      PREPARE_[DELAYED_]WORK() are being phased out.  They have few users
      and a nasty surprise in terms of reentrancy guarantee as workqueue
      considers work items to be different if they don't have the same work
      function.
      
      firewire core-device and sbp2 have been multiplexing work items
      with multiple work functions.  Introduce fw_device_workfn() and
      sbp2_lu_workfn() which invoke fw_device->workfn and
      sbp2_logical_unit->workfn respectively and always use the two
      functions as the work functions and update the users to set the
      ->workfn fields instead of overriding work functions using
      PREPARE_DELAYED_WORK().
      
      This fixes a variety of possible regressions since a2c1c57b
      "workqueue: consider work function when searching for busy work items"
      due to which fw_workqueue lost its required non-reentrancy property.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Cc: linux1394-devel@lists.sourceforge.net
      Cc: stable@vger.kernel.org # v3.9+
      Cc: stable@vger.kernel.org # v3.8.2+
      Cc: stable@vger.kernel.org # v3.4.60+
      Cc: stable@vger.kernel.org # v3.2.40+
  11. 06 March 2014, 7 commits
  12. 05 March 2014, 3 commits
    • efi: Add separate 32-bit/64-bit definitions · 677703ce
      Committed by Matt Fleming
      The traditional approach of using machine-specific types such as
      'unsigned long' does not allow the kernel to interact with firmware
      running in a different CPU mode, e.g. 64-bit kernel with 32-bit EFI.
      
      Add distinct EFI structure definitions for both 32-bit and 64-bit so
      that we can use them in the 32-bit and 64-bit code paths.
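      
      For instance, the firmware-width variants of a configuration table entry
      look roughly like this (an abridged sketch, not the full set of new
      definitions):
      
        typedef struct {
            efi_guid_t guid;
            u32 table;      /* physical address as written by 32-bit firmware */
        } efi_config_table_32_t;
      
        typedef struct {
            efi_guid_t guid;
            u64 table;      /* physical address as written by 64-bit firmware */
        } efi_config_table_64_t;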
      Acked-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
    • ia64/efi: Implement efi_enabled() · 09206380
      Committed by Matt Fleming
      There's no good reason to keep efi_enabled() under CONFIG_X86 anymore,
      since nothing about the implementation is specific to x86.
      
      Set EFI feature flags in the ia64 boot path instead of claiming to
      support all features. The old behaviour was actually buggy since
      efi.memmap never points to a valid memory map, so we shouldn't be
      claiming to support EFI_MEMMAP.
      
      Fortunately, this bug was never triggered because EFI_MEMMAP isn't used
      outside of arch/x86 currently, but that may not always be the case.
      Reviewed-and-tested-by: Tony Luck <tony.luck@intel.com>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
    • efi: Move facility flags to struct efi · 3e909599
      Committed by Matt Fleming
      As we grow support for more EFI architectures they're going to want the
      ability to query which EFI features are available on the running system.
      Instead of storing this information in an architecture-specific place,
      stick it in the global 'struct efi', which is already the central
      location for EFI state.
      
      While we're at it, let's change the return value of efi_enabled() to be
      bool and replace all references to 'facility' with 'feature', which is
      the usual word used to describe the attributes of the running system.
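      
      Conceptually, the result is along these lines (a sketch; the EFI_* names
      stand for the per-feature bits referred to above):
      
        static inline bool efi_enabled(int feature)
        {
            return test_bit(feature, &efi.flags) != 0;
        }
      
        /* each architecture sets only the bits it actually provides, e.g. */
        set_bit(EFI_RUNTIME_SERVICES, &efi.flags);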
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
  13. 04 March 2014, 5 commits
    • mm: numa: bugfix for LAST_CPUPID_NOT_IN_PAGE_FLAGS · 1ae71d03
      Committed by Liu Ping Fan
      When doing some NUMA tests on powerpc, I triggered an oops.  It is
      caused by the use of page->_last_cpupid, which should be initialized as
      "-1 & LAST_CPUPID_MASK", not plain "-1".  Otherwise, in
      task_numa_fault(), the check (last_cpupid == (-1 & LAST_CPUPID_MASK))
      is missed, which eventually causes an oops in task_numa_group() because
      the online CPU count is less than the possible CPU count.  This happens
      with CONFIG_SPARSE_VMEMMAP disabled.
      
      Call trace:
      
        SMP NR_CPUS=64 NUMA PowerNV
        Modules linked in:
        CPU: 24 PID: 804 Comm: systemd-udevd Not tainted 3.13.0-rc1+ #32
        task: c000001e2746aa80 ti: c000001e32c50000 task.ti: c000001e32c50000
        REGS: c000001e32c53510 TRAP: 0300   Not tainted  (3.13.0-rc1+)
        MSR: 9000000000009032 <SF,HV,EE,ME,IR,DR,RI>  CR: 28024424  XER: 20000000
        CFAR: c000000000009324 DAR: 7265717569726857 DSISR: 40000000 SOFTE: 1
        NIP  .task_numa_fault+0x1470/0x2370
        LR  .task_numa_fault+0x1468/0x2370
        Call Trace:
         .task_numa_fault+0x1468/0x2370 (unreliable)
         .do_numa_page+0x480/0x4a0
         .handle_mm_fault+0x4ec/0xc90
         .do_page_fault+0x3a8/0x890
         handle_page_fault+0x10/0x30
        Instruction dump:
        3c82fefb 3884b138 48d9cff1 60000000 48000574 3c62fefb 3863af78 3c82fefb
        3884b138 48d9cfd5 60000000 e93f0100 <812902e4> 7d2907b4 5529063e 7d2a07b4
        ---[ end trace 15f2510da5ae07cf ]---
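      
      The fix amounts to masking the initial value, roughly (a sketch of the
      LAST_CPUPID_NOT_IN_PAGE_FLAGS case, following the description above):
      
        static inline void page_cpupid_reset_last(struct page *page)
        {
            page->_last_cpupid = -1 & LAST_CPUPID_MASK;
        }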
      Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: include VM_MIXEDMAP flag in the VM_SPECIAL list to avoid m(un)locking · 9050d7eb
      Committed by Vlastimil Babka
      Daniel Borkmann reported a VM_BUG_ON assertion failing:
      
        ------------[ cut here ]------------
        kernel BUG at mm/mlock.c:528!
        invalid opcode: 0000 [#1] SMP
        Modules linked in: ccm arc4 iwldvm [...] video
        CPU: 3 PID: 2266 Comm: netsniff-ng Not tainted 3.14.0-rc2+ #8
        Hardware name: LENOVO 2429BP3/2429BP3, BIOS G4ET37WW (1.12 ) 05/29/2012
        task: ffff8801f87f9820 ti: ffff88002cb44000 task.ti: ffff88002cb44000
        RIP: 0010:[<ffffffff81171ad0>]  [<ffffffff81171ad0>] munlock_vma_pages_range+0x2e0/0x2f0
        Call Trace:
          do_munmap+0x18f/0x3b0
          vm_munmap+0x41/0x60
          SyS_munmap+0x22/0x30
          system_call_fastpath+0x1a/0x1f
        RIP   munlock_vma_pages_range+0x2e0/0x2f0
        ---[ end trace a0088dcf07ae10f2 ]---
      
      because munlock_vma_pages_range() thinks it's unexpectedly in the middle
      of a THP page.  This can be reproduced with default config since 3.11
      kernels.  A reproducer can be found in the kernel's selftest directory
      for networking by running ./psock_tpacket.
      
      The problem is that an order=2 compound page (allocated by
      alloc_one_pg_vec_page()) is part of the munlocked VM_MIXEDMAP vma
      (mapped by packet_mmap()) and is mistaken for a THP page, assumed to be
      order=9.
      
      The checks for THP in munlock came with commit ff6a6da6 ("mm:
      accelerate munlock() treatment of THP pages"), i.e.  since 3.9, but did
      not trigger a bug.  It just makes munlock_vma_pages_range() skip such
      compound pages until the next 512-pages-aligned page, when it encounters
      a head page.  This is, however, not a problem for vmas where mlocking
      has no effect anyway, though it can distort the accounting.
      
      Since commit 7225522b ("mm: munlock: batch non-THP page isolation
      and munlock+putback using pagevec") this can trigger a VM_BUG_ON in
      PageTransHuge() check.
      
      This patch fixes the issue by adding VM_MIXEDMAP flag to VM_SPECIAL, a
      list of flags that make vma's non-mlockable and non-mergeable.  The
      reasoning is that VM_MIXEDMAP vma's are similar to VM_PFNMAP, which is
      already on the VM_SPECIAL list, and both are intended for non-LRU pages
      where mlocking makes no sense anyway.  Related LKML discussion can be
      found in [2].
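      
      The resulting definition is essentially (assuming the 3.14-era flag set):
      
        #define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)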
      
       [1] tools/testing/selftests/net/psock_tpacket
       [2] https://lkml.org/lkml/2014/1/10/427
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Reported-by: Daniel Borkmann <dborkman@redhat.com>
      Tested-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Thomas Hellstrom <thellstrom@vmware.com>
      Cc: John David Anglin <dave.anglin@bell.net>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Carsten Otte <cotte@de.ibm.com>
      Cc: Jared Hulbert <jaredeh@gmail.com>
      Tested-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: <stable@vger.kernel.org> [3.11.x+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: close PageTail race · 668f9abb
      Committed by David Rientjes
      Commit bf6bddf1 ("mm: introduce compaction and migration for
      ballooned pages") introduces page_count(page) into memory compaction
      which dereferences page->first_page if PageTail(page).
      
      This results in a very rare NULL pointer dereference on the
      aforementioned page_count(page).  Indeed, anything that does
      compound_head(), including page_count() is susceptible to racing with
      prep_compound_page() and seeing a NULL or dangling page->first_page
      pointer.
      
      This patch uses Andrea's implementation of compound_trans_head() that
      deals with such a race and makes it the default compound_head()
      implementation.  This includes a read memory barrier that ensures that
      if PageTail(head) is true that we return a head page that is neither
      NULL nor dangling.  The patch then adds a store memory barrier to
      prep_compound_page() to ensure page->first_page is set.
      
      This is the safest way to ensure we see the head page that we are
      expecting, PageTail(page) is already in the unlikely() path and the
      memory barriers are unfortunately required.
      
      Hugetlbfs is the exception, we don't enforce a store memory barrier
      during init since no race is possible.
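      
      The resulting helper looks roughly like this (a sketch following the
      description above):
      
        static inline struct page *compound_head(struct page *page)
        {
            if (unlikely(PageTail(page))) {
                struct page *head = page->first_page;
      
                /*
                 * head may be NULL or dangling if this page stopped being
                 * a tail page in the meantime; pair with the store barrier
                 * in prep_compound_page() and recheck before trusting it.
                 */
                smp_rmb();
                if (likely(PageTail(page)))
                    return head;
            }
            return page;
        }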
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Holger Kiehl <Holger.Kiehl@dwd.de>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Rafael Aquini <aquini@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • s390/compat: automatic zero, sign and pointer conversion of syscalls · ab4f8bba
      Committed by Heiko Carstens
      Instead of explicitly changing compat system call parameters from e.g.
      unsigned long to compat_ulong_t, let the COMPAT_SYSCALL_WRAP macros
      detect (unsigned) long parameters and zero- or sign-extend them
      automatically.
      The resulting binary is completely identical.
      
      In addition, add a sys_[system call name] prototype for each system call
      wrapper. This will cause compile errors if the prototype does not match
      the prototype in include/linux/syscalls.h.
      Therefore we should now always get the correct zero and sign extension
      of system call parameters. Pointers are handled like before.
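      
      A hypothetical wrapper declaration would then read something like the
      following (a hedged sketch; the exact macro argument format in the s390
      wrapper file may differ):
      
        /* emits a compat entry point that zero-/sign-extends the (unsigned)
         * long arguments and is checked against the sys_creat() prototype */
        COMPAT_SYSCALL_WRAP2(creat, const char __user *, pathname, umode_t, mode);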
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
    • compat: add COMPAT_SYSCALL_DEFINE0 macro · 217f4433
      Committed by Heiko Carstens
      For consistency reasons, add a COMPAT_SYSCALL_DEFINE0 macro.
      This macro should be used for compat system calls with zero parameters.
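      
      Usage is then analogous to the other COMPAT_SYSCALL_DEFINEx macros
      (a sketch with a made-up syscall name for illustration):
      
        COMPAT_SYSCALL_DEFINE0(example_nop)
        {
            return 0;
        }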
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>