1. 19 Oct 2017 (1 commit)
  2. 06 Sep 2017 (1 commit)
  3. 30 Aug 2017 (2 commits)
  4. 29 Aug 2017 (8 commits)
  5. 07 Aug 2017 (1 commit)
  6. 20 Jul 2017 (1 commit)
  7. 07 Jul 2017 (1 commit)
    • mm/hugetlb: add size parameter to huge_pte_offset() · 7868a208
      Punit Agrawal authored
      A poisoned or migrated hugepage is stored as a swap entry in the page
      tables.  On architectures that support hugepages consisting of
      contiguous page table entries (such as on arm64) this leads to ambiguity
      in determining the page table entry to return in huge_pte_offset() when
      a poisoned entry is encountered.
      
      Let's remove the ambiguity by adding a size parameter to convey
      additional information about the requested address.  Also fixup the
      definition/usage of huge_pte_offset() throughout the tree.
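
      For reference, a minimal sketch of the resulting interface; the
      caller line is illustrative and assumes a struct hstate *h in scope:

          pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
                                 unsigned long sz);

          pte = huge_pte_offset(mm, address, huge_page_size(h));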
      
      Link: http://lkml.kernel.org/r/20170522133604.11392-4-punit.agrawal@arm.com
      Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: James Hogan <james.hogan@imgtec.com> (odd fixer:METAG ARCHITECTURE)
      Cc: Ralf Baechle <ralf@linux-mips.org> (supporter:MIPS)
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Rich Felker <dalias@libc.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7868a208
  8. 30 Jun 2017 (1 commit)
    • MIPS: Perform post-DMA cache flushes on systems with MAARs · cad482c1
      Paul Burton authored
      Recent CPUs from Imagination Technologies such as the I6400 or P6600 are
      able to speculatively fetch data from memory into caches. This means
      that if used in a system with non-coherent DMA they require that caches
      be invalidated after a device performs DMA, and before the CPU reads the
      DMA'd data, in order to ensure that stale values weren't speculatively
      prefetched.
      
      Such CPUs also introduced Memory Accessibility Attribute Registers
      (MAARs) in order to control the regions in which they are allowed to
      speculate. Thus we can use the presence of MAARs as a good indication
      that the CPU requires the above cache maintenance. Use the presence of
      MAARs to determine the result of cpu_needs_post_dma_flush() in the
      default case, in order to handle these recent CPUs correctly.
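
      A hedged sketch of the shape this takes (condensed; the exact set of
      explicit cases may differ):

          static inline bool cpu_needs_post_dma_flush(struct device *dev)
          {
              switch (boot_cpu_type()) {
              case CPU_R10000:
              case CPU_R12000:
              case CPU_BMIPS5000:
                  return true;
              default:
                  /*
                   * The presence of MAARs implies the CPU may prefetch
                   * speculatively, so post-DMA cache maintenance is needed.
                   */
                  return cpu_has_maar;
              }
          }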
      
      Note that the return type of cpu_needs_post_dma_flush() is changed to
      bool, such that it's clearer what's happening when cpu_has_maar is cast
      to bool for the return value. If this patch were backported to a
      pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int
      we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is
      currently 1ull<<30, so when truncated to an int it gives a non-zero
      value anyway; but even so, the implicit conversion from long long int
      to bool makes what will happen clearer than the implicit conversion
      from long long int to int would. The bool return type also
      fits this usage better semantically, so seems like an all-round win.
      
      Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting
      the return type change.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Cc: Ed Blake <ed.blake@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/16363/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      cad482c1
  9. 29 Jun 2017 (2 commits)
    • MIPS: Use current_cpu_type() in m4kc_tlbp_war() · 5f930860
      Paul Burton authored
      Use current_cpu_type() to check for 4Kc processors instead of checking
      the PRID directly. This will allow for the 4Kc case to be optimised out
      of kernels that can't run on 4Kc processors, thanks to __get_cpu_type()
      and its unreachable() call.
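
      The resulting check is tiny; roughly:

          static int m4kc_tlbp_war(void)
          {
              /* Folds to constant false when CPU_4KC can't be probed. */
              return current_cpu_type() == CPU_4KC;
          }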
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/16205/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      5f930860
    • MIPS: Handle tlbex-tlbp race condition · f39878cc
      Paul Burton authored
      In systems where there are multiple actors updating the TLB, the
      potential exists for a race condition wherein a CPU hits a TLB exception
      but by the time it reaches a TLBP instruction the affected TLB entry may
      have been replaced. This can happen if, for example, a CPU shares the
      TLB between hardware threads (VPs) within a core and one of them
      replaces the entry that another has just taken a TLB exception for.
      
      We already handle this race when the other actor is the Hardware Table
      Walker (HTW), but didn't take into account the potential for multiple
      threads racing. Extend the code for aborting TLB exception handling to
      the affected multi-threaded systems, those being the I6400 & I6500
      CPUs which share TLB entries between VPs.
      
      In the case of using RiXi without dedicated exceptions we have never
      handled this race, even for HTW. This patch adds WARN()s for those
      cases; they ought never to be hit, because all CPUs with either HTW or
      shared FTLB RAMs also include dedicated RiXi exceptions, but the
      WARN()s will make it obvious if that ever ceases to hold.
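
      A hedged, condensed sketch of the added check, using the kernel's
      uasm micro-assembler (register and label choices are illustrative):

          /* After tlbp, Index is negative (P bit set) if no entry matched. */
          if (cpu_has_mips_r2_exec_hazard)
              uasm_i_ehb(p);                  /* hazard barrier before mfc0 */
          uasm_i_mfc0(p, wr.r3, C0_INDEX);
          uasm_il_bltz(p, r, wr.r3, label_leave); /* entry gone: just eret */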
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/16203/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      f39878cc
  10. 28 Jun 2017 (4 commits)
    • MIPS: Probe the I6500 CPU · 859aeb1b
      Paul Burton authored
      Introduce the I6500 PRID & probe it just the same way as I6400. The MIPS
      I6500 is the latest in Imagination Technologies' I-Class range of CPUs,
      with a focus on scalability & heterogeneity. It introduces the notion of
      multiple clusters to the MIPS Coherent Processing System, allowing for a
      far higher total number of cores & threads in a system when compared
      with its predecessors. Clusters don't need to be identical, and may
      contain differing numbers of cores & IOCUs, or cores with differing
      properties.
      
      This patch adds only the basic support needed to boot Linux on an
      I6500 CPU, without any of its new functionality; support for that
      will be introduced in further patches.
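
      The probe itself is a one-case addition; a rough sketch (the exact
      fields set may differ):

          case PRID_IMP_I6500:
              c->cputype = CPU_I6500;
              __cpu_name[cpu] = "MIPS I6500";
              break;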
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/16190/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      859aeb1b
    • MIPS: Perform post-DMA cache flushes on systems with MAARs · 498e9ade
      Paul Burton authored
      Recent CPUs from Imagination Technologies such as the I6400 or P6600 are
      able to speculatively fetch data from memory into caches. This means
      that if used in a system with non-coherent DMA they require that caches
      be invalidated after a device performs DMA, and before the CPU reads the
      DMA'd data, in order to ensure that stale values weren't speculatively
      prefetched.
      
      Such CPUs also introduced Memory Accessibility Attribute Registers
      (MAARs) in order to control the regions in which they are allowed to
      speculate. Thus we can use the presence of MAARs as a good indication
      that the CPU requires the above cache maintenance. Use the presence of
      MAARs to determine the result of cpu_needs_post_dma_flush() in the
      default case, in order to handle these recent CPUs correctly.
      
      Note that the return type of cpu_needs_post_dma_flush() is changed to
      bool, such that it's clearer what's happening when cpu_has_maar is cast
      to bool for the return value. If this patch were backported to a
      pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int
      we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is
      currently 1ull<<30, so when truncated to an int it gives a non-zero
      value anyway; but even so, the implicit conversion from long long int
      to bool makes what will happen clearer than the implicit conversion
      from long long int to int would. The bool return type also
      fits this usage better semantically, so seems like an all-round win.
      
      Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting
      the return type change.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Cc: Ed Blake <ed.blake@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/16363/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      498e9ade
    • MIPS: Add some instructions to uasm. · dc190129
      David Daney authored
      Follow on patches for eBPF JIT require these additional instructions:
      
         insn_bgtz, insn_blez, insn_break, insn_ddivu, insn_dmultu,
         insn_dsbh, insn_dshd, insn_dsllv, insn_dsra32, insn_dsrav,
         insn_dsrlv, insn_lbu, insn_movn, insn_movz, insn_multu, insn_nor,
         insn_sb, insn_sh, insn_slti, insn_dinsu, insn_lwu
      
      ... so, add them.
      
      Sort the insn_* enumeration values alphabetically.
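
      As an illustration (not part of the patch), emitting a few of the new
      instructions into a u32 *p buffer could look like:

          uasm_i_multu(&p, 4, 5);   /* multu $a0, $a1: unsigned multiply */
          uasm_i_lbu(&p, 2, 0, 4);  /* lbu $v0, 0($a0): zero-extending load */
          uasm_i_sb(&p, 2, 0, 6);   /* sb $v0, 0($a2): store byte */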
      Signed-off-by: David Daney <david.daney@cavium.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/16367/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      dc190129
    • MIPS: Optimize uasm insn lookup. · ce807d5f
      David Daney authored
      Instead of doing a linear search through the insn_table for each
      instruction, use the opcode as a direct index into the table. This
      gives constant-time lookup performance as the number of supported
      opcodes increases. Make the tables const, as they are only ever read.
      For uasm-mips.c, sort the table alphabetically and remove duplicate
      entries; uasm-micromips.c was already sorted and duplicate-free.
      There is a small saving in object size as struct insn loses a field:
      
      $ size arch/mips/mm/uasm-mips.o arch/mips/mm/uasm-mips.o.save
         text	   data	    bss	    dec	    hex	filename
        10040	      0	      0	  10040	   2738	arch/mips/mm/uasm-mips.o
         9240	   1120	      0	  10360	   2878	arch/mips/mm/uasm-mips.o.save
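
      The technique is plain C designated initializers, with the enum value
      doubling as the array index; a self-contained sketch (the match and
      field values here are illustrative):

          enum opcode { insn_addiu, insn_lw, insn_sw, insn_invalid };

          struct insn {
              unsigned int match;   /* opcode encoding bits */
              unsigned int fields;  /* which operand fields are used */
          };

          /* Indexed directly by enum opcode: lookup is O(1), and const
           * moves the table into .rodata (hence data -> text in `size`). */
          static const struct insn insn_table[insn_invalid] = {
              [insn_addiu] = { 0x24000000, 0x7 },
              [insn_lw]    = { 0x8c000000, 0x7 },
              [insn_sw]    = { 0xac000000, 0x7 },
          };

          static const struct insn *lookup(enum opcode op)
          {
              return op < insn_invalid ? &insn_table[op] : 0;
          }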
      Signed-off-by: David Daney <david.daney@cavium.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Patchwork: https://patchwork.linux-mips.org/patch/16365/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      ce807d5f
  11. 19 Jun 2017 (1 commit)
    • mm: larger stack guard gap, between vmas · 1be7107f
      Hugh Dickins authored
      The stack guard page is a useful feature that reduces the risk of the
      stack smashing into a different mapping. We have been using a single
      page gap, which is sufficient to prevent the stack being placed
      adjacent to a different mapping. But this seems insufficient in light
      of real stack usage in userspace. E.g. glibc uses alloca() for as much
      as 64kB in many commonly used functions. Others use constructs like
      gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with
      MAX_ARG_STRLEN.

      This is especially dangerous for suid binaries with the default
      unlimited stack size limit, because such applications can be tricked
      into consuming a large portion of the stack, and a single glibc call
      could jump over the guard page. These attacks are not theoretical,
      unfortunately.
      
      Make those attacks less probable by increasing the stack guard gap
      to 1MB (on systems with 4k pages; but make it depend on the page size
      because systems with larger base pages might cap stack allocations in
      the PAGE_SIZE units) which should cover larger alloca() and VLA stack
      allocations. It is obviously not a full fix because the problem is
      somewhat inherent, but it should reduce the attack space a lot.
      
      One could argue that the gap size should be configurable from userspace,
      but that can be done later when somebody finds that the new 1MB is wrong
      for some special case applications.  For now, add a kernel command line
      option (stack_guard_gap) to specify the stack gap size (in page units).
      
      Implementation-wise, first delete all the old code for the stack guard page:
      because although we could get away with accounting one extra page in a
      stack vma, accounting a larger gap can break userspace - case in point,
      a program run with "ulimit -S -v 20000" failed when the 1MB gap was
      counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
      and strict non-overcommit mode.
      
      Instead of keeping gap inside the stack vma, maintain the stack guard
      gap as a gap between vmas: using vm_start_gap() in place of vm_start
      (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
      places which need to respect the gap - mainly arch_get_unmapped_area(),
      and the vma tree's subtree_gap support for that.
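
      The growsdown half of the new helper, roughly as described (sketch;
      the clamp guards against underflow past address zero):

          extern unsigned long stack_guard_gap;

          static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
          {
              unsigned long vm_start = vma->vm_start;

              if (vma->vm_flags & VM_GROWSDOWN) {
                  vm_start -= stack_guard_gap;
                  if (vm_start > vma->vm_start)  /* wrapped below 0 */
                      vm_start = 0;
              }
              return vm_start;
          }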
      Original-patch-by: Oleg Nesterov <oleg@redhat.com>
      Original-patch-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Helge Deller <deller@gmx.de> # parisc
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1be7107f
  12. 08 Jun 2017 (1 commit)
  13. 13 Apr 2017 (1 commit)
  14. 12 Apr 2017 (1 commit)
  15. 10 Apr 2017 (2 commits)
  16. 28 Mar 2017 (2 commits)
    • KVM: MIPS/Emulate: Adapt T&E CACHE emulation for Octeon · 4fa9de5a
      James Hogan authored
      Cache management is implemented separately for Cavium Octeon CPUs, so
      r4k_blast_[id]cache aren't available. Instead for Octeon perform a local
      icache flush using local_flush_icache_range(), and for other platforms
      which don't use c-r4k.c use __flush_cache_all() / flush_icache_all().
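
      A hedged sketch of the split described above (the address range shown
      is illustrative):

          #ifdef CONFIG_CPU_CAVIUM_OCTEON
              /* Octeon lacks the r4k blast helpers; flush locally. */
              local_flush_icache_range(vaddr, vaddr + PAGE_SIZE);
          #else
              /* Platforms that don't use c-r4k.c: flush everything. */
              __flush_cache_all();
              flush_icache_all();
          #endif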
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Daney <david.daney@cavium.com>
      Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      4fa9de5a
    • MIPS: Separate MAAR V bit into VL and VH for XPA · f359a111
      James Hogan authored
      The MAAR V bit has been renamed VL, since another bit called VH is
      added at the top of the register when it is extended to 64 bits on a
      32-bit processor with XPA. Rename the V definition, fix the various
      users, and add definitions for the VH bit. Also add a definition for
      the MAARI Index field.
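
      A hedged sketch of the renamed and added definitions (exact casts and
      shifts may differ):

          #define MIPS_MAAR_VH     (_ULCAST_(1) << 63)  /* valid (high), XPA */
          #define MIPS_MAAR_VL     (_ULCAST_(1) << 0)   /* valid (low); was MIPS_MAAR_V */
          #define MIPS_MAARI_INDEX (_ULCAST_(0x3f) << 0) /* MAARI Index field */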
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      f359a111
  17. 22 Mar 2017 (2 commits)
    • MIPS: c-r4k: Fix Loongson-3's vcache/scache waysize calculation · 0be032c1
      Huacai Chen authored
      If scache.waysize is 0, r4k___flush_cache_all() will do nothing and
      thus cause bugs. And although vcache.waysize isn't used yet, we fix
      its calculation as well.
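
      The fix is the obvious division; a sketch:

          /* Derive waysize from total size and way count, otherwise
           * r4k___flush_cache_all() iterates over an empty range. */
          c->vcache.waysize = vcache_size / c->vcache.ways;
          c->scache.waysize = scache_size / c->scache.ways;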
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: John Crispin <john@phrozen.org>
      Cc: Steven J. Hill <Steven.Hill@caviumnetworks.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/15756/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      0be032c1
    • MIPS: Flush wrong invalid FTLB entry for huge page · 0115f6cb
      Huacai Chen authored
      On VTLB+FTLB platforms (such as Loongson-3A R2), the FTLB's page size
      is usually configured to be the same as PAGE_SIZE. In such a case, a
      huge page entry is not suitable to be written into the FTLB.

      Unfortunately, when a huge page is created, its page table entries
      aren't created immediately. The TLB refill handler will then fetch an
      invalid page table entry which has no "HUGE" bit, and this entry may
      be written to the FTLB. Since it is invalid, the TLB load/store
      handler will then use tlbwi to write a valid entry in the same place.
      However, the valid entry is a huge page entry, which isn't suitable
      for the FTLB.
      
      Our solution is to modify build_huge_handler_tail: flush the invalid
      old entry (whether it is in the FTLB or the VTLB; doing so
      unconditionally reduces branches) and use tlbwr to write the valid
      new entry.
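
      A hedged, condensed sketch of the modified tail (per the description;
      the real code also keys off cpu_has_ftlb and a new flush parameter):

          if (cpu_has_ftlb && flush) {
              /* Invalidate the stale entry in place: EHINV + tlbwi. */
              UASM_i_MFC0(p, ptr, C0_ENTRYHI);
              uasm_i_ori(p, ptr, ptr, MIPS_ENTRYHI_EHINV);
              UASM_i_MTC0(p, ptr, C0_ENTRYHI);
              build_tlb_write_entry(p, l, r, tlb_indexed);

              /* Then write the valid huge page entry with tlbwr. */
              uasm_i_xori(p, ptr, ptr, MIPS_ENTRYHI_EHINV);
              UASM_i_MTC0(p, ptr, C0_ENTRYHI);
              build_huge_update_entries(p, pte, ptr);
              build_huge_tlb_write_entry(p, l, r, pte, tlb_random, 0);
              return;
          }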
      Signed-off-by: Rui Wang <wangr@lemote.com>
      Signed-off-by: Huacai Chen <chenhc@lemote.com>
      Cc: John Crispin <john@phrozen.org>
      Cc: Steven J. Hill <Steven.Hill@caviumnetworks.com>
      Cc: Fuxin Zhang <zhangfx@lemote.com>
      Cc: Zhangjin Wu <wuzhangjin@gmail.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/15754/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      0115f6cb
  18. 02 Mar 2017 (3 commits)
  19. 25 Feb 2017 (1 commit)
  20. 03 Feb 2017 (2 commits)
    • MIPS: Export some tlbex internals for KVM to use · 722b4544
      James Hogan authored
      Export the TLB exception code generating functions so that KVM can
      construct a fast TLB refill handler for guest context without
      reinventing the wheel quite so much.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      722b4544
    • MIPS: Export pgd/pmd symbols for KVM · ccf01516
      James Hogan authored
      Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to
      GPL kernel modules so that MIPS KVM can use the inline page table
      management functions and switch between page tables:
      
      - pmd_init() will be used directly by KVM to initialise newly allocated
        pmd tables with invalid lower level table pointers.
      
      - invalid_pmd_table is used by pud_present(), pud_none(), and
        pud_clear(), which KVM will use to test and clear pud entries.
      
      - tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch
        to the appropriate GVA page tables.
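
      The mechanics are plain EXPORT_SYMBOL_GPL() annotations next to the
      existing definitions, roughly:

          EXPORT_SYMBOL_GPL(invalid_pmd_table);
          EXPORT_SYMBOL_GPL(pmd_init);
          EXPORT_SYMBOL_GPL(tlbmiss_handler_setup_pgd);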
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      ccf01516
  21. 02 Feb 2017 (1 commit)
    • MIPS: Move pgd_alloc() out of header · 814f91bf
      James Hogan authored
      pgd_alloc() references init_mm, which is not exported to modules. In
      order for KVM to be able to use pgd_alloc() to allocate GVA page tables,
      move pgd_alloc() into a new pgtable.c file and export it to modules.
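
      The moved function, roughly as it lands in the new
      arch/mips/mm/pgtable.c (sketch):

          pgd_t *pgd_alloc(struct mm_struct *mm)
          {
              pgd_t *ret, *init;

              ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
              if (ret) {
                  init = pgd_offset(&init_mm, 0UL);
                  pgd_init((unsigned long)ret);
                  memcpy(ret + USER_PTRS_PER_PGD,
                         init + USER_PTRS_PER_PGD,
                         (PTRS_PER_PGD - USER_PTRS_PER_PGD) *
                         sizeof(pgd_t));
              }

              return ret;
          }
          EXPORT_SYMBOL_GPL(pgd_alloc);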
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      814f91bf
  22. 25 Jan 2017 (1 commit)
    • treewide: Constify most dma_map_ops structures · 5299709d
      Bart Van Assche authored
      Most dma_map_ops structures are never modified. Constify these
      structures so that they can be write-protected. This patch has been
      generated as follows:
      
      git grep -l 'struct dma_map_ops' |
        xargs -d\\n sed -i \
          -e 's/struct dma_map_ops/const struct dma_map_ops/g' \
          -e 's/const struct dma_map_ops {/struct dma_map_ops {/g' \
          -e 's/^const struct dma_map_ops;$/struct dma_map_ops;/' \
          -e 's/const const struct dma_map_ops /const struct dma_map_ops /g';
      sed -i -e 's/const \(struct dma_map_ops intel_dma_ops\)/\1/' \
        $(git grep -l 'struct dma_map_ops intel_dma_ops');
      sed -i -e 's/const \(struct dma_map_ops dma_iommu_ops\)/\1/' \
        $(git grep -l 'struct dma_map_ops' | grep ^arch/powerpc);
      sed -i -e '/^struct vmd_dev {$/,/^};$/ s/const \(struct dma_map_ops[[:blank:]]dma_ops;\)/\1/' \
             -e '/^static void vmd_setup_dma_ops/,/^}$/ s/const \(struct dma_map_ops \*dest\)/\1/' \
             -e 's/const \(struct dma_map_ops \*dest = \&vmd->dma_ops\)/\1/' \
          drivers/pci/host/*.c
      sed -i -e '/^void __init pci_iommu_alloc(void)$/,/^}$/ s/dma_ops->/intel_dma_ops./' arch/ia64/kernel/pci-dma.c
      sed -i -e 's/static const struct dma_map_ops sn_dma_ops/static struct dma_map_ops sn_dma_ops/' arch/ia64/sn/pci/pci_dma.c
      sed -i -e 's/(const struct dma_map_ops \*)//' drivers/misc/mic/bus/vop_bus.c
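
      The resulting pattern at definition sites is simply (names here are
      hypothetical):

          static const struct dma_map_ops example_dma_ops = {
              .alloc      = example_alloc,
              .free       = example_free,
              .map_page   = example_map_page,
              .unmap_page = example_unmap_page,
          };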
      Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: x86@kernel.org
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      5299709d