1. 21 March 2016 (4 commits)
    • kvm: arm64: Disable compiler instrumentation for hypervisor code · a6cdf1c0
      Catalin Marinas authored
      With the recent rewrite of the arm64 KVM hypervisor code in C, enabling
      certain options like KASAN would allow the compiler to generate memory
      accesses or function calls to addresses not mapped at EL2. This patch
      disables the compiler instrumentation on the arm64 hypervisor code for
      gcov-based profiling (GCOV_KERNEL), undefined behaviour sanity checker
      (UBSAN) and kernel address sanitizer (KASAN).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: <stable@vger.kernel.org> # 4.5+
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
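      The mechanism is the standard kbuild per-directory override. A minimal sketch,
      assuming the hypervisor objects have their own Makefile (the exact path, e.g.
      arch/arm64/kvm/hyp/Makefile, is an assumption here, not quoted from the patch):

        # Keep profiling and sanitizer instrumentation out of code that runs at EL2,
        # where the instrumentation's data structures are not mapped.
        GCOV_PROFILE   := n
        KASAN_SANITIZE := n
        UBSAN_SANITIZE := n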
    • arm64: KVM: Turn kvm_ksym_ref into a NOP on VHE · 2510ffe1
      Marc Zyngier authored
      When running with VHE, there is no need to translate kernel pointers
      to the EL2 memory space, since we're already there (and we have a much
      saner memory map to start with).
      
      Unfortunately, kvm_ksym_ref gets in the way: it still translates the
      pointer, so the first call into the "hypervisor" section ends up
      branching to a bogus address.

      The solution is to test whether VHE is in use and only perform the
      translation when it is not. With this in place, VHE is able to run
      again.
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
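      A minimal C sketch of the idea, assuming a hypothetical kern_to_hyp_addr()
      helper in place of the real translation (illustrative only, not the exact
      upstream macro):

        /*
         * Illustrative only: skip the kernel-to-EL2 address translation when the
         * kernel already runs at EL2 (VHE); kern_to_hyp_addr() is a hypothetical
         * stand-in for the real conversion.
         */
        #define kvm_ksym_ref(sym)                                       \
        ({                                                              \
                void *__addr = &(sym);                                  \
                if (!is_kernel_in_hyp_mode())   /* no VHE: translate */ \
                        __addr = kern_to_hyp_addr(__addr);              \
                __addr;                                                 \
        })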
    • KVM: arm/arm64: disable preemption when calling smp_call_function_many · 898f949f
      Eric Auger authored
      Preemption must be disabled when calling smp_call_function_many().
      
      Reported-by: bartosz.wawrzyniak@tieto.com
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
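      The usual way to satisfy that requirement is to bracket the cross-call with
      preempt_disable()/preempt_enable(). A minimal sketch (the wrapper function is
      illustrative, not the exact KVM code path):

        #include <linux/preempt.h>
        #include <linux/smp.h>

        /* smp_call_function_many() must not run with preemption enabled, so pin
         * ourselves to the current CPU for the duration of the cross-call. */
        static void kick_cpus(const struct cpumask *mask, smp_call_func_t func, void *info)
        {
                preempt_disable();
                smp_call_function_many(mask, func, info, true); /* wait for completion */
                preempt_enable();
        }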
    • x86/kallsyms: fix GOLD link failure with new relative kallsyms table format · 142b9e6c
      Ard Biesheuvel authored
      Commit 2213e9a6 ("kallsyms: add support for relative offsets in
      kallsyms address table") changed the default kallsyms symbol table
      format to use relative references rather than absolute addresses.
      
      This reduces the size of the kallsyms symbol table by 50% on 64-bit
      architectures, and further reduces the size of the relocation tables
      used by relocatable kernels.  Since the memory footprint of the static
      kernel image is always much smaller than 4 GB, these relative references
      are assumed to be representable in 32 bits, even when the native word
      size is 64 bits.
      
      On 64-bit architectures, this obviously only works if the distance
      between each relative reference and the chosen anchor point is
      representable in 32 bits, and so the table generation code in
      scripts/kallsyms.c scans the table for the lowest value that is covered
      by the kernel text, and selects it as the anchor point.
      
      However, when using the GOLD linker rather than the default BFD linker
      to build the x86_64 kernel, the symbol phys_offset_64, which is the
      result of arithmetic defined in the linker script, is emitted as a 'T'
      rather than an 'A' type symbol, which causes scripts/kallsyms.c to
      mistake it for a suitable anchor point, even though it is far away from
      the actual kernel image in the virtual address space.  This results in
      out-of-range warnings from scripts/kallsyms.c and a broken build.
      
      So let's align with the BFD linker, and emit the phys_offset_[32|64]
      symbols as absolute symbols explicitly.  Note that the out-of-range
      issue does not exist on 32-bit x86, but this patch changes both symbols
      for symmetry.
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
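      In GNU ld script syntax, wrapping an expression in ABSOLUTE() is how a symbol
      is forced to the absolute ('A') type. A hedged one-line sketch (the right-hand
      side is illustrative, not the exact vmlinux.lds.S expression):

        /* ABSOLUTE() keeps the symbol type 'A' under both the BFD and GOLD linkers. */
        phys_offset_64 = ABSOLUTE(startup_64 - LOAD_OFFSET);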
  2. 19 March 2016 (3 commits)
    • ldmvsw: Add ldmvsw.c driver code · 5d01fa0c
      Aaron Young authored
        Add ldmvsw.c driver
      
        Details:
      
        The ldmvsw driver very closely follows the sunvnet.c code and makes
        use of the sunvnet_common.c code for core functionality.
      
        A significant difference between the sunvnet and ldmvsw drivers is
        that sunvnet creates a network interface for each vnet-port *parent*
        node in the Machine Description (MD), while the ldmvsw driver creates
        a network interface for every vsw-port node in the MD.
        Therefore the netdev_priv() for sunvnet is a vnet structure, while
        the netdev_priv() for ldmvsw is a vnet_port structure.
      
        Vnet_port structures allocated by ldmvsw have the vsw bit set.
        When finding the net_device associated with a port, the common code keys
        off this bit to use either the net_device found in the vnet_port or the
        net_device in the vnet structure (see the VNET_PORT_TO_NET_DEVICE() macro in
        sunvnet_common.h). This scheme allows the common code to work with
        both drivers with minimal changes.
      
        Similar to Xen, network interfaces created by the ldmvsw driver always
        have a hardware (MAC) address of FE:FF:FF:FF:FF:FF, and each is
        assigned the devname "vif<cfg_handle>.<port_id>" - where <cfg_handle> and
        <port_id> are a unique handle/port pair assigned to the associated
        vsw-port node in the MD.
      Signed-off-by: Aaron Young <aaron.young@oracle.com>
      Signed-off-by: Rashmi Narasimhan <rashmi.narasimhan@oracle.com>
      Reviewed-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
      Reviewed-by: Alexandre Chartre <Alexandre.Chartre@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
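      A hedged sketch of the dispatch described above; the field names are
      assumptions, not the verified sunvnet_common.h layout:

        /* Illustrative only: pick the net_device from the port itself for ldmvsw
         * (vsw bit set), or from the parent vnet structure for sunvnet. */
        #define VNET_PORT_TO_NET_DEVICE(vport)                          \
                ((vport)->vsw ? (vport)->dev : (vport)->vp->dev)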
    • ARM: uniphier: rework SMP code to support new System Bus binding · 307d40c5
      Masahiro Yamada authored
      During the review process of the UniPhier System Bus driver
      (drivers/bus/uniphier.c), the current binding of the System Bus
      Controller turned out to be inadequate.  In order to use the driver,
      some nodes in the device trees must be tweaked.  This also has an
      impact on the SMP code because the SMP-related registers are
      located in the System Bus Controller block.  This commit reworks
      the smp_operations to support the new binding while still supporting
      the old one.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
    • ARM: uniphier: add missing of_node_put() · 9eca796e
      Masahiro Yamada authored
      This node pointer is acquired by of_find_compatible_node() in this
      function.  It must be put with of_node_put() before exiting the function.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
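      The underlying rule: of_find_compatible_node() returns a node with its
      reference count raised, and every exit path must drop it. A minimal sketch,
      with an illustrative compatible string:

        #include <linux/of.h>

        static int setup_from_dt(void)
        {
                struct device_node *np;

                np = of_find_compatible_node(NULL, NULL, "socionext,example-sysctrl");
                if (!np)
                        return -ENODEV;

                /* ... read registers / properties from the node ... */

                of_node_put(np);        /* drop the reference taken above */
                return 0;
        }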
  3. 18 March 2016 (23 commits)
  4. 17 March 2016 (3 commits)
  5. 16 March 2016 (7 commits)
    • x86/mm/pat: Fix boot crash when 1GB pages are not supported by the CPU · d367cef0
      Matt Fleming authored
      Scott reports that with the new separate EFI page tables he's seeing
      the following error on boot, caused by setting reserved bits in the
      page table structures (fault code is PF_RSVD | PF_PROT),
      
        swapper/0: Corrupted page table at address 17b102020
        PGD 17b0e5063 PUD 1400000e3
        Bad pagetable: 0009 [#1] SMP
      
      On first inspection the PUD is using a 1GB page size (_PAGE_PSE) and
      looks fine but that's only true if support for 1GB PUD pages
      ("pdpe1gb") is present in the CPU.
      
      Scott's Intel Celeron N2820 does not have that feature and so the
      _PAGE_PSE bit is reserved. Fix this issue by making the 1GB mapping
      code conditional on "cpu_has_gbpages".
      
      This issue didn't come up in the past because the required mapping for
      the faulting address (0x17b102020) will already have been set up by the
      kernel in early boot before we got to efi_map_regions(), but we no
      longer use the standard kernel page tables during EFI calls.
      Reported-by: Scott Ashcroft <scott.ashcroft@talk21.com>
      Tested-by: Scott Ashcroft <scott.ashcroft@talk21.com>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Acked-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raphael Hertzog <hertzog@debian.org>
      Cc: Roger Shimizu <rogershimizu@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/1457951581-27353-2-git-send-email-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
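      A hedged sketch of the guard this describes; only the cpu_has_gbpages check
      comes from the commit text, and the 2MB fallback helper below is hypothetical:

        /* Map a 1GB-aligned region with a single PUD entry only when the CPU
         * advertises pdpe1gb; otherwise _PAGE_PSE at PUD level is a reserved bit. */
        static void map_region(pud_t *pud, unsigned long phys, pgprot_t prot)
        {
                if (cpu_has_gbpages)
                        set_pud(pud, __pud(phys | _PAGE_PSE | pgprot_val(prot)));
                else
                        populate_pmd_range(pud, phys, prot);    /* hypothetical 2MB fallback */
        }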
    • powerpc: Fix unrecoverable SLB miss during restore_math() · 6e669f08
      Cyril Bur authored
      Commit 70fe3d98 "powerpc: Restore FPU/VEC/VSX if previously used" introduces a
      call to restore_math() late in the syscall return path, after MSR_RI has been
      cleared. The MSR_RI flag is used to indicate whether the kernel can take
      another exception or not. A cleared MSR_RI flag indicates that the kernel
      cannot.
      
      Unfortunately when a machine is under SLB pressure an SLB miss can occur
      in restore_math() which (with MSR_RI cleared) leads to an unrecoverable
      exception.
      
        Unrecoverable exception 4100 at c0000000000088d8
        cpu 0x0: Vector: 4100  at [c0000003fa473b20]
            pc: c0000000000088d8: .load_vr_state+0x70/0x110
            lr: c00000000000f710: .restore_math+0x130/0x188
            sp: c0000003fa473da0
           msr: 9000000002003030
          current = 0xc0000007f876f180
          paca    = 0xc00000000fff0000	 softe: 0	 irq_happened: 0x01
            pid   = 1944, comm = K08umountfs
        [link register   ] c00000000000f710 .restore_math+0x130/0x188
        [c0000003fa473da0] c0000003fa473e30 (unreliable)
        [c0000003fa473e30] c000000000007b6c system_call+0x84/0xfc
      
      The clearing of MSR_RI is actually an optimisation to avoid multiple MSR
      writes; what actually must be disabled is interrupts. See the comment in
      entry_64.S:
      
        /*
         * For performance reasons we clear RI the same time that we
         * clear EE. We only need to clear RI just before we restore r13
         * below, but batching it with EE saves us one expensive mtmsrd call.
         * We have to be careful to restore RI if we branch anywhere from
         * here (eg syscall_exit_work).
         */
      
      At the point of calling restore_math() r13 has not been restored; as such,
      the quick fix of turning MSR_RI back on for the call to restore_math()
      eliminates the occurrence of an unrecoverable exception.
      
      We'd like to do a better fix in future.
      
      Fixes: 70fe3d98 ("powerpc: Restore FPU/VEC/VSX if previously used")
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/8xx: Fix do_mtspr_cpu6() build on older compilers · 2e098dce
      Christophe Leroy authored
      GCC < 4.9 is unable to build this, saying:
      
        arch/powerpc/mm/8xx_mmu.c:139:2: error: memory input 1 is not directly addressable
      
      Change the one-element array into a simple variable to avoid this.
      
      Fixes: 1458dd95 ("powerpc/8xx: Handle CPU6 ERRATA directly in mtspr() macro")
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Scott Wood <oss@buserror.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
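      A hedged illustration of the pattern, not the actual 8xx code: older GCC can
      reject an array element used as an "m" (memory) input to inline asm, while a
      plain scalar is accepted.

        static inline void write_special_reg(unsigned long value)
        {
                /* was: unsigned long tmp[1] = { value }; ... : "m" (tmp[0]) */
                unsigned long tmp = value;

                asm volatile("" : : "m" (tmp)); /* the real code issues an mtspr here */
        }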
    • powerpc/rcpm: Fix build break when SMP=n · b081251e
      Michael Ellerman authored
      Add an include of asm/smp.h to fix a build break when SMP=n:
      
        arch/powerpc/sysdev/fsl_rcpm.c:32:2: error: implicit declaration of
        function 'get_hard_smp_processor_id'
      
      Fixes: d17799f9 ("powerpc/rcpm: add RCPM driver")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/book3e-64: Use hardcoded mttmr opcode · 7a25d912
      Scott Wood authored
      This preserves the ability to build using older binutils (reportedly <=
      2.22).
      
      Fixes: 6becef7e ("powerpc/mpc85xx: Add CPU hotplug support for E6500")
      Signed-off-by: Scott Wood <oss@buserror.net>
      Cc: chenhui.zhao@freescale.com
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • ARM: pxa/raumfeld: use PROPERTY_ENTRY_INTEGER to define props · 4d2508a5
      Arnd Bergmann authored
      gcc-6.0 notices that the recently introduced use of property_entry in
      this file cannot work right, because we initialize the wrong field:
      
      raumfeld.c:387:3: error: the address of 'raumfeld_rotary_encoder_steps' will always evaluate as 'true' [-Werror=address]
         DEV_PROP_U32, 1, &raumfeld_rotary_encoder_steps, },
         ^~~~~~~~~~~~
      raumfeld.c:389:3: error: the address of 'raumfeld_rotary_encoder_axis' will always evaluate as 'true' [-Werror=address]
         DEV_PROP_U32, 1, &raumfeld_rotary_encoder_axis, },
         ^~~~~~~~~~~~
      raumfeld.c:391:3: error: the address of 'raumfeld_rotary_encoder_relative_axis' will always evaluate as 'true' [-Werror=address]
         DEV_PROP_U32, 1, &raumfeld_rotary_encoder_relative_axis, },
         ^~~~~~~~~~~~
      
      The problem appears to stem from relying on an old definition of
      'struct property', but it has changed several times since the code
      could have last been correct.
      
      This changes the code to use the PROPERTY_ENTRY_INTEGER() macro instead,
      which works fine for the current definition and is a safer way of doing
      the initialization.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Fixes: a9e340dc ("Input: rotary_encoder - move away from platform data structure")
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
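      A hedged sketch of the safer initialization; the property names and values are
      illustrative, and the macro's argument order is assumed from the v4.5-era
      property.h rather than verified:

        static const struct property_entry rotary_properties[] = {
                PROPERTY_ENTRY_INTEGER("rotary-encoder,steps-per-period", u32, 24),
                PROPERTY_ENTRY_INTEGER("linux,axis",                      u32, 0),
                PROPERTY_ENTRY_INTEGER("rotary-encoder,relative-axis",    u32, 1),
                { }     /* sentinel */
        };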
    • x86: also use debug_pagealloc_enabled() for free_init_pages · a75e1f63
      Christian Borntraeger authored
      We want to couple all debugging features with debug_pagealloc_enabled()
      and not with the config option CONFIG_DEBUG_PAGEALLOC.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Suggested-by: David Rientjes <rientjes@google.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
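      A hedged before/after sketch of the change this describes (the body shown is
      illustrative, not the exact free_init_pages() code):

        /* Before: behaviour fixed at build time. */
        #ifdef CONFIG_DEBUG_PAGEALLOC
                set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
        #endif

        /* After: also honour the debug_pagealloc= runtime setting. */
        if (debug_pagealloc_enabled())
                set_memory_np(begin, (end - begin) >> PAGE_SHIFT);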