1. 02 Oct 2014, 2 commits
  2. 30 Sep 2014, 11 commits
  3. 25 Sep 2014, 7 commits
    • powerpc/eeh: Fix kernel crash when passing through VF · 2a58222f
      Committed by Wei Yang
      When doing VFIO passthrough of a VF, the kernel will crash with the
      following message:
      
      [  442.656459] Unable to handle kernel paging request for data at address 0x00000060
      [  442.656593] Faulting instruction address: 0xc000000000038b88
      [  442.656706] Oops: Kernel access of bad area, sig: 11 [#1]
      [  442.656798] SMP NR_CPUS=1024 NUMA PowerNV
      [  442.656890] Modules linked in: vfio_pci mlx4_core nf_conntrack_netbios_ns nf_conntrack_broadcast ipt_MASQUERADE ip6t_REJECT xt_conntrack bnep bluetooth rfkill ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw tg3 nfsd be2net nfs_acl ses lockd ptp enclosure pps_core kvm_hv kvm_pr shpchp binfmt_misc kvm sunrpc uinput lpfc scsi_transport_fc ipr scsi_tgt [last unloaded: mlx4_core]
      [  442.658152] CPU: 40 PID: 14948 Comm: qemu-system-ppc Not tainted 3.10.42yw-pkvm+ #37
      [  442.658219] task: c000000f7e2a9a00 ti: c000000f6dc3c000 task.ti: c000000f6dc3c000
      [  442.658287] NIP: c000000000038b88 LR: c0000000004435a8 CTR: c000000000455bc0
      [  442.658352] REGS: c000000f6dc3f580 TRAP: 0300   Not tainted  (3.10.42yw-pkvm+)
      [  442.658419] MSR: 9000000000009032 <SF,HV,EE,ME,IR,DR,RI>  CR: 28004882  XER: 20000000
      [  442.658577] CFAR: c00000000000908c DAR: 0000000000000060 DSISR: 40000000 SOFTE: 1
      GPR00: c0000000004435a8 c000000f6dc3f800 c0000000012b1c10 c00000000da24000
      GPR04: 0000000000000003 0000000000001004 00000000000015b3 000000000000ffff
      GPR08: c00000000127f5d8 0000000000000000 000000000000ffff 0000000000000000
      GPR12: c000000000068078 c00000000fdd6800 000001003c320c80 000001003c3607f0
      GPR16: 0000000000000001 00000000105480c8 000000001055aaa8 000001003c31ab18
      GPR20: 000001003c10fb40 000001003c360ae8 000000001063bcf0 000000001063bdb0
      GPR24: 000001003c15ed70 0000000010548f40 c000001fe5514c88 c000001fe5514cb0
      GPR28: c00000000da24000 0000000000000000 c00000000da24000 0000000000000003
      [  442.659471] NIP [c000000000038b88] .pcibios_set_pcie_reset_state+0x28/0x130
      [  442.659530] LR [c0000000004435a8] .pci_set_pcie_reset_state+0x28/0x40
      [  442.659585] Call Trace:
      [  442.659610] [c000000f6dc3f800] [00000000000719e0] 0x719e0 (unreliable)
      [  442.659677] [c000000f6dc3f880] [c0000000004435a8] .pci_set_pcie_reset_state+0x28/0x40
      [  442.659757] [c000000f6dc3f900] [c000000000455bf8] .reset_fundamental+0x38/0x80
      [  442.659835] [c000000f6dc3f980] [c0000000004562a8] .pci_dev_specific_reset+0xa8/0xf0
      [  442.659913] [c000000f6dc3fa00] [c0000000004448c4] .__pci_dev_reset+0x44/0x430
      [  442.659980] [c000000f6dc3fab0] [c000000000444d5c] .pci_reset_function+0x7c/0xc0
      [  442.660059] [c000000f6dc3fb30] [d00000001c141ab8] .vfio_pci_open+0xe8/0x2b0 [vfio_pci]
      [  442.660139] [c000000f6dc3fbd0] [c000000000586c30] .vfio_group_fops_unl_ioctl+0x3a0/0x630
      [  442.660219] [c000000f6dc3fc90] [c000000000255fbc] .do_vfs_ioctl+0x4ec/0x7c0
      [  442.660286] [c000000f6dc3fd80] [c000000000256364] .SyS_ioctl+0xd4/0xf0
      [  442.660354] [c000000f6dc3fe30] [c000000000009e54] syscall_exit+0x0/0x98
      [  442.660420] Instruction dump:
      [  442.660454] 4bfffce9 4bfffee4 7c0802a6 fbc1fff0 fbe1fff8 f8010010 f821ff81 7c7e1b78
      [  442.660566] 7c9f2378 60000000 60000000 e93e02c8 <e8690060> 2fa30000 41de00c4 2b9f0002
      [  442.660679] ---[ end trace a64ac9546bcf0328 ]---
      [  442.660724]
      
      The reason is that the VF is not EEH enabled.
      
      This patch introduces a macro to convert an eeh_dev to an eeh_pe. By doing
      so, it avoids doing the conversion with a NULL pointer (a hedged sketch
      follows this entry).
      Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
      Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      CC: Michael Ellerman <mpe@ellerman.id.au>
      
      V3 -> V4:
         1. move the macro definition from include/linux/pci.h to
            arch/powerpc/include/asm/eeh.h
      
      V2 -> V3:
         1. rebased on 3.17-rc4
         2. introduce a macro
         3. use this macro in several other places
      
      V1 -> V2:
         1. code style and patch subject adjustment
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      2a58222f
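      A minimal sketch of the kind of macro described above, assuming an eeh_dev
      with a pe member; the name eeh_dev_to_pe() and its exact definition in
      arch/powerpc/include/asm/eeh.h are illustrative, not authoritative:
      
          /* Hedged sketch: return the PE only when the eeh_dev exists, so callers
           * such as pcibios_set_pcie_reset_state() never dereference NULL for a
           * device (e.g. a VF) that is not EEH enabled. */
          #define eeh_dev_to_pe(edev)     ((edev) ? (edev)->pe : NULL)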
    • powerpc: Emulate icbi, mcrf and conditional-trap instructions · cf87c3f6
      Committed by Paul Mackerras
      This extends the instruction emulation done by analyse_instr() and
      emulate_step() to handle a few more instructions that are found in
      the kernel.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      cf87c3f6
    • powerpc: Split out instruction analysis part of emulate_step() · be96f633
      Committed by Paul Mackerras
      This splits out the instruction analysis part of emulate_step() into
      a separate analyse_instr() function, which decodes the instruction,
      but doesn't execute any load or store instructions.  It does execute
      integer instructions and branches which can be executed purely by
      updating register values in the pt_regs struct.  For other instructions,
      it returns the instruction type and other details in a new
      instruction_op struct.  emulate_step() then uses that information
      to execute loads, stores, cache operations, mfmsr, mtmsr[d], and
      (on 64-bit) sc instructions.
      
      The reason for doing this is so that the KVM code can use it instead
      of having its own separate instruction emulation code.  Possibly the
      alignment interrupt handler could also use this.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      be96f633
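      A hedged sketch of the calling pattern this split enables; the wrapper name
      emulate_one(), the instruction_op field names and the INSTR_TYPE_MASK/LOAD/
      STORE constants are assumptions for illustration only:
      
          /* Hedged caller sketch; constant and field names are assumptions. */
          static int emulate_one(struct pt_regs *regs, unsigned int instr)
          {
                  struct instruction_op op;
      
                  /* Phase 1: decode.  Branches and register-only integer ops are
                   * completed directly on regs and reported as already done. */
                  if (analyse_instr(&op, regs, instr) > 0)
                          return 1;
      
                  /* Phase 2: the caller performs the memory access.  emulate_step()
                   * does it against kernel addresses; KVM can instead route op.ea
                   * through its own guest-memory helpers. */
                  switch (op.type & INSTR_TYPE_MASK) {
                  case LOAD:
                          /* load from op.ea into GPR op.reg */
                          break;
                  case STORE:
                          /* store GPR op.reg to op.ea */
                          break;
                  default:
                          break;
                  }
                  return 0;
          }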
    • powerpc/powernv: Don't call generic code on offline cpus · d6a4f709
      Committed by Paul Mackerras
      On PowerNV platforms, when a CPU is offline, we put it into nap mode.
      It's possible that the CPU wakes up from nap mode while it is still
      offline due to a stray IPI.  A misdirected device interrupt could also
      potentially cause it to wake up.  In that circumstance, we need to clear
      the interrupt so that the CPU can go back to nap mode.
      
      In the past the clearing of the interrupt was accomplished by briefly
      enabling interrupts and allowing the normal interrupt handling code
      (do_IRQ() etc.) to handle the interrupt.  This has the problem that
      this code calls irq_enter() and irq_exit(), which call functions such
      as account_system_vtime() which use RCU internally.  Use of RCU is not
      permitted on offline CPUs and will trigger errors if RCU checking is
      enabled.
      
      To avoid calling into any generic code which might use RCU, we adopt
      a different method of clearing interrupts on offline CPUs.  Since we
      are on the PowerNV platform, we know that the system interrupt
      controller is a XICS being driven directly (i.e. not via hcalls) by
      the kernel.  Hence this adds a new icp_native_flush_interrupt()
      function to the native-mode XICS driver and arranges to call that
      when an offline CPU is woken from nap.  This new function reads the
      interrupt from the XICS.  If it is an IPI, it clears the IPI; if it
      is a device interrupt, it prints a warning and disables the source.
      Then it does the end-of-interrupt processing for the interrupt.
      
      The other thing that briefly enabling interrupts did was to check and
      clear the irq_happened flag in this CPU's PACA.  Therefore, after
      flushing the interrupt from the XICS, we also clear all bits except
      the PACA_IRQ_HARD_DIS (interrupts are hard disabled) bit from the
      irq_happened flag.  The PACA_IRQ_HARD_DIS flag is set by power7_nap()
      and is left set to indicate that interrupts are hard disabled.  This
      means we then have to ignore that flag in power7_nap(), which is
      reasonable since it doesn't indicate that any interrupt event needs
      servicing.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      d6a4f709
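      A hedged sketch of the flush described above; the XICS helper names mirror
      the native-mode driver but should be treated as assumptions here:
      
          void icp_native_flush_interrupt(void)
          {
                  unsigned int xirr = icp_native_get_xirr();  /* read pending source */
                  unsigned int vec = xirr & 0x00ffffff;
      
                  if (vec == XICS_IRQ_SPURIOUS)
                          return;                             /* nothing to do */
      
                  if (vec == XICS_IPI) {
                          /* Clear the IPI so the CPU can go back to nap */
                          icp_native_set_qirr(smp_processor_id(), 0xff);
                  } else {
                          /* A device interrupt reached an offline CPU: warn and mask it */
                          pr_err("XICS: hw interrupt 0x%x to offline cpu, disabling\n", vec);
                          xics_mask_unknown_vec(vec);
                  }
      
                  /* End-of-interrupt processing */
                  icp_native_set_xirr(xirr);
          }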
    • powerpc: Move htab_remove_mapping function prototype into header file · f6026df1
      Committed by Anton Blanchard
      A recent patch added a function prototype for htab_remove_mapping in C
      code. Fix it by moving the prototype into a header file.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f6026df1
    • powerpc: Remove stale function prototypes · a38efcea
      Committed by Anton Blanchard
      There were a number of prototypes for functions that no longer
      exist. Remove them.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      a38efcea
    • powerpc/powernv: Add OPAL check token call · bffe6bda
      Committed by Michael Neuling
      Currently there is no way for the host kernel to generically check whether
      an OPAL call exists.
      
      This adds an OPAL call, opal_check_token(), which tells you whether the
      given token is present in OPAL.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      bffe6bda
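      A hedged usage sketch; OPAL_SOME_NEW_CALL is a placeholder token and the
      non-zero-means-present return convention is an assumption:
      
          /* Placeholder token; the real caller would pass a specific OPAL_* token. */
          if (!opal_check_token(OPAL_SOME_NEW_CALL))
                  pr_warn("firmware does not implement this OPAL call, skipping\n");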
  4. 09 Sep 2014, 2 commits
    • powerpc: Wire up sys_seccomp(), sys_getrandom() and sys_memfd_create() · 7d59deb5
      Committed by Pranith Kumar
      This patch wires up three new syscalls for powerpc. The three
      new syscalls are seccomp, getrandom and memfd_create.
      Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
      Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
      7d59deb5
    • powerpc/perf: Fix ABIv2 kernel backtraces · 85101af1
      Committed by Anton Blanchard
      ABIv2 kernels are failing to backtrace through the kernel. An example:
      
      39.30%  readseek2_proce  [kernel.kallsyms]    [k] find_get_entry
                  |
                  --- find_get_entry
                     __GI___libc_read
      
      The problem is in valid_next_sp() where we check that the new stack
      pointer is at least STACK_FRAME_OVERHEAD below the previous one.
      
      ABIv1 has a minimum stack frame size of 112 bytes, consisting of a 48 byte
      fixed header and a 64 byte parameter save area. ABIv2 changes that to 32
      bytes with no parameter save area.
      
      STACK_FRAME_OVERHEAD is in theory the minimum stack frame size,
      but we have over 240 uses of it, some of which assume that it includes
      space for the parameter area.
      
      We need to work through all our stack defines and rationalise them,
      but let's fix perf now by creating STACK_FRAME_MIN_SIZE and using it
      in valid_next_sp(). This fixes the issue:
      
      30.64%  readseek2_proce  [kernel.kallsyms]    [k] find_get_entry
                  |
                  --- find_get_entry
                     pagecache_get_page
                     generic_file_read_iter
                     new_sync_read
                     vfs_read
                     sys_read
                     syscall_exit
                     __GI___libc_read
      
      Cc: stable@vger.kernel.org # 3.16+
      Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Anton Blanchard <anton@samba.org>
      85101af1
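      A hedged sketch of the relaxed frame-size check; the real valid_next_sp()
      in the perf callchain code also validates that sp lies within the stack,
      which is omitted here:
      
          static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
          {
                  if (sp & 0xf)
                          return 0;       /* frames are 16-byte aligned */
                  /* Require only a minimal frame above the previous one, since an
                   * ABIv2 caller may have allocated as little as 32 bytes. */
                  return sp >= prev_sp + STACK_FRAME_MIN_SIZE;
          }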
  5. 13 Aug 2014, 7 commits
    • powerpc/mm: Use read barrier when creating real_pte · 85c1fafd
      Committed by Aneesh Kumar K.V
      On ppc64 we support 4K hash ptes with a 64K page size. That requires us
      to track the hash pte slot information on a per-4K basis. We do that by
      storing the slot details in the second half of the pte page. The pte bit
      _PAGE_COMBO is used to indicate whether the second half needs to be looked
      at while building the real_pte. We need to use a read memory barrier while
      doing that, so that the load of hidx is not reordered w.r.t. the
      _PAGE_COMBO check. On the store side we already do an lwsync in
      __hash_page_4K.
      
      CC: <stable@vger.kernel.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      85c1fafd
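      A hedged sketch of the ordering described above, loosely following the 64K
      hash-page headers; the layout of real_pte_t and the exact expression for
      the second-half load are assumptions:
      
          static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
          {
                  real_pte_t rpte = { .pte = pte, .hidx = 0 };
      
                  if (pte_val(pte) & _PAGE_COMBO) {
                          /* Order the hidx load against the _PAGE_COMBO check above;
                           * the store side is ordered by the lwsync in __hash_page_4K. */
                          smp_rmb();
                          rpte.hidx = pte_val(*(ptep + PTRS_PER_PTE));
                  }
                  return rpte;
          }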
    • powerpc/thp: Handle combo pages in invalidate · fc047955
      Committed by Aneesh Kumar K.V
      If we changed the base page size of the segment, either via sub_page_protect
      or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash
      table entries. We do a lazy hash page table flush for all mapped pages in
      the demoted segment. This happens when we handle a hash page fault for
      these pages.
      
      We use the _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a
      pte is backed by a 4K hash pte. If we find _PAGE_COMBO not set on the pte,
      that implies that we could possibly have older 64K hash pte entries in the
      hash page table and we need to invalidate those entries.
      
      Use _PAGE_COMBO to determine the page size with which we should
      invalidate the hash table entries on unmap.
      
      CC: <stable@vger.kernel.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      fc047955
    • powerpc/thp: Don't recompute vsid and ssize in loop on invalidate · fa1f8ae8
      Committed by Aneesh Kumar K.V
      The segment identifier and segment size will remain the same in the loop,
      so we can compute them outside it. We also change the hugepage_invalidate
      interface so that we can use it in a later patch.
      
      CC: <stable@vger.kernel.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      fa1f8ae8
    • powerpc: remove duplicate definition of TEXASR_FS · 56758e3c
      Committed by Nishanth Aravamudan
      It appears that commits 7f06f21d ("powerpc/tm: Add checking to
      treclaim/trechkpt") and e4e38121 ("KVM: PPC: Book3S HV: Add
      transactional memory support") both added definitions of TEXASR_FS.
      Remove one of them. At the same time, fix the alignment of the remaining
      definition (should be tab-separated like the rest of the #defines).
      Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      56758e3c
    • powerpc/powernv: Interface to register/unregister opal dump region · b09c2ec4
      Committed by Vasant Hegde
      The PowerNV platform is capable of capturing a host memory region when the
      system crashes (because of a host/firmware problem). There is a new OPAL
      API to register/unregister the memory region to be captured when the
      system crashes.
      
      This patch adds support for the new API. Also, during boot we register the
      kernel log buffer, and we unregister it before doing a kexec (a hedged
      sketch follows this entry).
      Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      b09c2ec4
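      A hedged sketch of registering the kernel log buffer as a dump region at
      boot; the region id, the wrapper signature (id, address, length) and the
      printk helpers are assumptions here:
      
          /* Ask firmware to capture the printk buffer if the system crashes */
          if (opal_register_dump_region(OPAL_DUMP_REGION_LOG_BUF,
                                        __pa(log_buf_addr_get()),
                                        log_buf_len_get()) != OPAL_SUCCESS)
                  pr_warn("OPAL: failed to register kernel log buffer dump region\n");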
    • powerpc: Add POWER8 features to CPU_FTRS_POSSIBLE/ALWAYS · 3609e09f
      Committed by Michael Ellerman
      We have been a bit slack about updating the CPU_FTRS_POSSIBLE and
      CPU_FTRS_ALWAYS masks. When we added POWER8, and also POWER8E we forgot
      to update the ALWAYS mask. And when we added POWER8_DD1 we forgot to
      update both the POSSIBLE and ALWAYS masks.
      
      Luckily this hasn't caused any actual bugs AFAICS. Failing to update the
      ALWAYS mask just forgoes a potential optimisation opportunity. Failing
      to update the POSSIBLE mask for POWER8_DD1 is also OK because it only
      removes a bit rather than adding any.
      
      Regardless they should all be in both masks so as to avoid any future
      bugs when the set of ALWAYS/POSSIBLE bits changes, or the masks
      themselves change.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Michael Neuling <mikey@neuling.org>
      Acked-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      3609e09f
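      A hedged sketch of the cputable.h pattern the commit above keeps
      consistent; only a few CPUs are shown and the exact macro contents are
      assumptions:
      
          /* POSSIBLE = union of every CPU the kernel might run on,
           * ALWAYS   = intersection, i.e. features we can rely on everywhere. */
          #define CPU_FTRS_POSSIBLE   (CPU_FTRS_POWER7 | CPU_FTRS_POWER8 | \
                                       CPU_FTRS_POWER8E | CPU_FTRS_POWER8_DD1)
          #define CPU_FTRS_ALWAYS     (CPU_FTRS_POWER7 & CPU_FTRS_POWER8 & \
                                       CPU_FTRS_POWER8E & CPU_FTRS_POWER8_DD1)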
    • powerpc: Add smp_mb() to arch_spin_is_locked() · 51d7d520
      Committed by Michael Ellerman
      The kernel defines the function spin_is_locked(), which can be used to
      check if a spinlock is currently locked.
      
      Using spin_is_locked() on a lock you don't hold is obviously racy. That
      is, even though you may observe that the lock is unlocked, it may become
      locked at any time.
      
      There is (at least) one exception to that, which is if two locks are
      used as a pair, and the holder of each checks the status of the other
      before doing any update.
      
      Assuming *A and *B are two locks, and *COUNTER is a shared non-atomic
      value:
      
      The first CPU does:
      
      	spin_lock(*A)
      
      	if spin_is_locked(*B)
      		# nothing
      	else
      		smp_mb()
      		LOAD r = *COUNTER
      		r++
      		STORE *COUNTER = r
      
      	spin_unlock(*A)
      
      And the second CPU does:
      
      	spin_lock(*B)
      
      	if spin_is_locked(*A)
      		# nothing
      	else
      		smp_mb()
      		LOAD r = *COUNTER
      		r++
      		STORE *COUNTER = r
      
      	spin_unlock(*B)
      
      Although this is a strange locking construct, it should work.
      
      It seems to be understood, but not documented, that spin_is_locked() is
      not a memory barrier, so in the examples above and below the caller
      inserts its own memory barrier before acting on the result of
      spin_is_locked().
      
      For now we assume spin_is_locked() is implemented as below, and we break
      it out in our examples:
      
      	bool spin_is_locked(*LOCK) {
      		LOAD l = *LOCK
      		return l.locked
      	}
      
      Our intuition is that there should be no problem even if the two code
      sequences run simultaneously such as:
      
      	CPU 0			CPU 1
      	==================================================
      	spin_lock(*A)		spin_lock(*B)
      	LOAD b = *B		LOAD a = *A
      	if b.locked # true	if a.locked # true
      	# nothing		# nothing
      	spin_unlock(*A)		spin_unlock(*B)
      
      If one CPU gets the lock before the other then it will do the update and
      the other CPU will back off:
      
      	CPU 0			CPU 1
      	==================================================
      	spin_lock(*A)
      	LOAD b = *B
      				spin_lock(*B)
      	if b.locked # false	LOAD a = *A
      	else			if a.locked # true
      	smp_mb()		# nothing
      	LOAD r1 = *COUNTER	spin_unlock(*B)
      	r1++
      	STORE *COUNTER = r1
      	spin_unlock(*A)
      
      However in reality spin_lock() itself is not indivisible. On powerpc we
      implement it as a load-and-reserve and store-conditional.
      
      Ignoring the retry logic for the lost reservation case, it boils down to:
      	spin_lock(*LOCK) {
      		LOAD l = *LOCK
      		l.locked = true
      		STORE *LOCK = l
      		ACQUIRE_BARRIER
      	}
      
      The ACQUIRE_BARRIER is required to give spin_lock() ACQUIRE semantics as
      defined in memory-barriers.txt:
      
           This acts as a one-way permeable barrier.  It guarantees that all
           memory operations after the ACQUIRE operation will appear to happen
           after the ACQUIRE operation with respect to the other components of
           the system.
      
      On modern powerpc systems we use lwsync for ACQUIRE_BARRIER. lwsync is
      also known as "lightweight sync", or "sync 1".
      
      As described in Power ISA v2.07 section B.2.1.1, in this scenario the
      lwsync is not the barrier itself. It instead causes the LOAD of *LOCK to
      act as the barrier, preventing any loads or stores in the locked region
      from occurring prior to the load of *LOCK.
      
      Whether this behaviour is in accordance with the definition of ACQUIRE
      semantics in memory-barriers.txt is open to discussion, we may switch to
      a different barrier in future.
      
      What this means in practice is that the following can occur:
      
      	CPU 0			CPU 1
      	==================================================
      	LOAD a = *A 		LOAD b = *B
      	a.locked = true		b.locked = true
      	LOAD b = *B		LOAD a = *A
      	STORE *A = a		STORE *B = b
      	if b.locked # false	if a.locked # false
      	else			else
      	smp_mb()		smp_mb()
      	LOAD r1 = *COUNTER	LOAD r2 = *COUNTER
      	r1++			r2++
      	STORE *COUNTER = r1
      				STORE *COUNTER = r2	# Lost update
      	spin_unlock(*A)		spin_unlock(*B)
      
      That is, the load of *B can occur prior to the store that makes *A
      visibly locked. And similarly for CPU 1. The result is both CPUs hold
      their lock and believe the other lock is unlocked.
      
      The easiest fix for this is to add a full memory barrier to the start of
      spin_is_locked(), so adding to our previous definition would give us:
      
      	bool spin_is_locked(*LOCK) {
      		smp_mb()
      		LOAD l = *LOCK
      		return l.locked
      	}
      
      The new barrier orders the store to the lock we are locking vs the load
      of the other lock:
      
      	CPU 0			CPU 1
      	==================================================
      	LOAD a = *A 		LOAD b = *B
      	a.locked = true		b.locked = true
      	STORE *A = a		STORE *B = b
      	smp_mb()		smp_mb()
      	LOAD b = *B		LOAD a = *A
      	if b.locked # true	if a.locked # true
      	# nothing		# nothing
      	spin_unlock(*A)		spin_unlock(*B)
      
      Although the above example is theoretical, there is code similar to this
      example in sem_lock() in ipc/sem.c. This commit in addition to the next
      commit appears to be a fix for crashes we are seeing in that code where
      we believe this race happens in practice.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      51d7d520
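      A hedged sketch of the resulting powerpc implementation, assuming the
      existing value-based test arch_spin_value_unlocked() in asm/spinlock.h:
      
          static inline int arch_spin_is_locked(arch_spinlock_t *lock)
          {
                  /* Order our own prior stores (e.g. the store that makes our
                   * lock visibly locked) against the load of this lock below. */
                  smp_mb();
                  return !arch_spin_value_unlocked(*lock);
          }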
  6. 09 Aug 2014, 2 commits
    • arm64,ia64,ppc,s390,sh,tile,um,x86,mm: remove default gate area · a6c19dfe
      Committed by Andy Lutomirski
      The core mm code will provide a default gate area based on
      FIXADDR_USER_START and FIXADDR_USER_END if
      !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR).
      
      This default is only useful for ia64.  arm64, ppc, s390, sh, tile, 64-bit
      UML, and x86_32 have their own code just to disable it.  arm, 32-bit UML,
      and x86_64 have gate areas, but they have their own implementations.
      
      This gets rid of the default and moves the code into ia64.
      
      This should save some code on architectures without a gate area: it's now
      possible to inline the gate_area functions in the default case.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Acked-by: Nathan Lynch <nathan_lynch@mentor.com>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle]
      Acked-by: Richard Weinberger <richard@nod.at> [for um]
      Acked-by: Will Deacon <will.deacon@arm.com> [for arm64]
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Nathan Lynch <Nathan_Lynch@mentor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a6c19dfe
    • lib/scatterlist: make ARCH_HAS_SG_CHAIN an actual Kconfig · 308c09f1
      Committed by Laura Abbott
      Rather than have architectures #define ARCH_HAS_SG_CHAIN in an
      architecture-specific scatterlist.h, make it a proper Kconfig option and
      use that instead.  At the same time, remove the header files that are now
      mostly useless and just include asm-generic/scatterlist.h.
      
      [sfr@canb.auug.org.au: powerpc files now need asm/dma.h]
      Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>			[x86]
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>	[powerpc]
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      308c09f1
  7. 05 Aug 2014, 9 commits