1. 13 Aug 2014 (6 commits)
    • powerpc/ppc476: Disable BTAC · 97b3be1e
      Committed by Alistair Popple
      This patch disables the branch target address CAM, which under specific
      circumstances may cause the processor to skip execution of 1-4
      instructions.  This fixes IBM Erratum #47.
      Signed-off-by: Alistair Popple <alistair@popple.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/powernv: Fix IOMMU group lost · 763fe0ad
      Committed by Gavin Shan
      When we use full hotplug to recover from EEH errors, PCI buses can
      be involved.  In that case, the child devices of the affected PCI
      buses aren't attached to their IOMMU group properly, a regression
      caused by commit 3f28c5af ("powerpc/powernv: Reduce multi-hit of
      iommu_add_device()").
      
      When the PCI devices of the newly created PCI buses are added to
      the system, the IOMMU group is expected to be bound in (C).
      (A) fails to bind the IOMMU group because bus->is_added is
      false.  (B) fails because the device doesn't have a bound IOMMU
      table yet.  bus->is_added is set to true at the end of (C) and
      pdev->is_added is set to true at (D).
      
         pcibios_add_pci_devices()
            pci_scan_bridge()
               pci_scan_child_bus()
                  pci_scan_slot()
                     pci_scan_single_device()
                        pci_scan_device()
                        pci_device_add()
                           pcibios_add_device()           A: Ignore
                           device_add()                   B: Ignore
                        pcibios_fixup_bus()
                           pcibios_setup_bus_devices()
                              pcibios_setup_device()      C: Hit
            pcibios_finish_adding_to_bus()
               pci_bus_add_devices()
                  pci_bus_add_device()                    D: Add device
      
      If the parent PCI bus isn't involved in hotplug, the IOMMU
      group is expected to be bound in (B). (A) should fail as the
      sysfs entries aren't populated.
      
      The patch fixes the issue by reverting commit 3f28c5af and removing
      the WARN_ON() in iommu_add_device(), so the function can be called
      even when the specified device already has an associated IOMMU group.
      
      Cc: <stable@vger.kernel.org>  # 3.16+
      Reported-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
      Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
      Acked-by: Wei Yang <weiyang@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Add smp_mb()s to arch_spin_unlock_wait() · 78e05b14
      Committed by Michael Ellerman
      As with the previous commit, which described why we need to add a
      barrier to arch_spin_is_locked(), we have a similar problem with
      spin_unlock_wait().
      
      We need a barrier on entry to ensure any spinlock we have previously
      taken is visibly locked prior to the load of lock->slock.
      
      It's also not clear if spin_unlock_wait() is intended to have ACQUIRE
      semantics. For now be conservative and add a barrier on exit to give it
      ACQUIRE semantics.
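
      As a sketch of the shape of the fix described above (not the exact
      patch), arch_spin_unlock_wait() becomes roughly:

      	static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
      	{
      		smp_mb();	/* entry: prior lock acquisitions are visible
      				 * before we load lock->slock */
      		while (arch_spin_is_locked(lock))
      			cpu_relax();
      		smp_mb();	/* exit: conservatively provide ACQUIRE semantics */
      	}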
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Add smp_mb() to arch_spin_is_locked() · 51d7d520
      Committed by Michael Ellerman
      The kernel defines the function spin_is_locked(), which can be used to
      check if a spinlock is currently locked.
      
      Using spin_is_locked() on a lock you don't hold is obviously racy. That
      is, even though you may observe that the lock is unlocked, it may become
      locked at any time.
      
      There is (at least) one exception to that, which is if two locks are
      used as a pair, and the holder of each checks the status of the other
      before doing any update.
      
      Assuming *A and *B are two locks, and *COUNTER is a shared non-atomic
      value:
      
      The first CPU does:
      
      	spin_lock(*A)
      
      	if spin_is_locked(*B)
      		# nothing
      	else
      		smp_mb()
      		LOAD r = *COUNTER
      		r++
      		STORE *COUNTER = r
      
      	spin_unlock(*A)
      
      And the second CPU does:
      
      	spin_lock(*B)
      
      	if spin_is_locked(*A)
      		# nothing
      	else
      		smp_mb()
      		LOAD r = *COUNTER
      		r++
      		STORE *COUNTER = r
      
      	spin_unlock(*B)
      
      Although this is a strange locking construct, it should work.
      
      It seems to be understood, but not documented, that spin_is_locked() is
      not a memory barrier, so in the examples above and below the caller
      inserts its own memory barrier before acting on the result of
      spin_is_locked().
      
      For now we assume spin_is_locked() is implemented as below, and we break
      it out in our examples:
      
      	bool spin_is_locked(*LOCK) {
      		LOAD l = *LOCK
      		return l.locked
      	}
      
      Our intuition is that there should be no problem even if the two code
      sequences run simultaneously such as:
      
      	CPU 0			CPU 1
      	==================================================
      	spin_lock(*A)		spin_lock(*B)
      	LOAD b = *B		LOAD a = *A
      	if b.locked # true	if a.locked # true
      	# nothing		# nothing
      	spin_unlock(*A)		spin_unlock(*B)
      
      If one CPU gets the lock before the other then it will do the update and
      the other CPU will back off:
      
      	CPU 0			CPU 1
      	==================================================
      	spin_lock(*A)
      	LOAD b = *B
      				spin_lock(*B)
      	if b.locked # false	LOAD a = *A
      	else			if a.locked # true
      	smp_mb()		# nothing
      	LOAD r1 = *COUNTER	spin_unlock(*B)
      	r1++
      	STORE *COUNTER = r1
      	spin_unlock(*A)
      
      However in reality spin_lock() itself is not indivisible. On powerpc we
      implement it as a load-and-reserve and store-conditional.
      
      Ignoring the retry logic for the lost reservation case, it boils down to:
      	spin_lock(*LOCK) {
      		LOAD l = *LOCK
      		l.locked = true
      		STORE *LOCK = l
      		ACQUIRE_BARRIER
      	}
      
      The ACQUIRE_BARRIER is required to give spin_lock() ACQUIRE semantics as
      defined in memory-barriers.txt:
      
           This acts as a one-way permeable barrier.  It guarantees that all
           memory operations after the ACQUIRE operation will appear to happen
           after the ACQUIRE operation with respect to the other components of
           the system.
      
      On modern powerpc systems we use lwsync for ACQUIRE_BARRIER. lwsync is
      also known as "lightweight sync", or "sync 1".
      
      As described in Power ISA v2.07 section B.2.1.1, in this scenario the
      lwsync is not the barrier itself. It instead causes the LOAD of *LOCK to
      act as the barrier, preventing any loads or stores in the locked region
      from occurring prior to the load of *LOCK.
      
      Whether this behaviour is in accordance with the definition of ACQUIRE
      semantics in memory-barriers.txt is open to discussion; we may switch to
      a different barrier in future.
      
      What this means in practice is that the following can occur:
      
      	CPU 0			CPU 1
      	==================================================
      	LOAD a = *A 		LOAD b = *B
      	a.locked = true		b.locked = true
      	LOAD b = *B		LOAD a = *A
      	STORE *A = a		STORE *B = b
      	if b.locked # false	if a.locked # false
      	else			else
      	smp_mb()		smp_mb()
      	LOAD r1 = *COUNTER	LOAD r2 = *COUNTER
      	r1++			r2++
      	STORE *COUNTER = r1
      				STORE *COUNTER = r2	# Lost update
      	spin_unlock(*A)		spin_unlock(*B)
      
      That is, the load of *B can occur prior to the store that makes *A
      visibly locked. And similarly for CPU 1. The result is both CPUs hold
      their lock and believe the other lock is unlocked.
      
      The easiest fix for this is to add a full memory barrier to the start of
      spin_is_locked(), so adding to our previous definition would give us:
      
      	bool spin_is_locked(*LOCK) {
      		smp_mb()
      		LOAD l = *LOCK
      		return l.locked
      	}
      
      The new barrier orders the store to the lock we are locking vs the load
      of the other lock:
      
      	CPU 0			CPU 1
      	==================================================
      	LOAD a = *A 		LOAD b = *B
      	a.locked = true		b.locked = true
      	STORE *A = a		STORE *B = b
      	smp_mb()		smp_mb()
      	LOAD b = *B		LOAD a = *A
      	if b.locked # true	if a.locked # true
      	# nothing		# nothing
      	spin_unlock(*A)		spin_unlock(*B)
      
      Although the above example is theoretical, there is code similar to it
      in sem_lock() in ipc/sem.c.  This commit, together with the next one,
      appears to fix crashes we are seeing in that code, where we believe
      this race happens in practice.
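
      Rendered as C, the fixed primitive above is roughly the following
      sketch (assuming the powerpc lock word is the slock field, as in
      arch_spinlock_t):

      	static inline int arch_spin_is_locked(arch_spinlock_t *lock)
      	{
      		smp_mb();	/* order our own lock's store vs. this load */
      		return lock->slock != 0;
      	}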
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix "attempt to move .org backwards" error · 11d54904
      Committed by Guenter Roeck
      Once again, we see
      
      arch/powerpc/kernel/exceptions-64s.S: Assembler messages:
      arch/powerpc/kernel/exceptions-64s.S:865: Error: attempt to move .org backwards
      arch/powerpc/kernel/exceptions-64s.S:866: Error: attempt to move .org backwards
      arch/powerpc/kernel/exceptions-64s.S:890: Error: attempt to move .org backwards
      
      when compiling ppc:allmodconfig.
      
      This time the problem is caused by commit 0869b6fd
      ("powerpc/book3s: Add basic infrastructure to handle HMI in Linux"),
      which adds the functions hmi_exception_early and hmi_exception_after_realmode
      to a critical (size-limited) code area, even though that does not appear
      to be necessary.
      
      Move those functions to a non-critical area of the file.
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc/nohash: Split __early_init_mmu() into boot and secondary · 5d61a217
      Committed by Scott Wood
      __early_init_mmu() does some things that are really only needed by the
      boot cpu.  On FSL booke, this includes calling
      memblock_enforce_memory_limit(), which is labelled __init.  Secondary
      cpu init code can't be __init, as that would break CPU hotplug.
      
      While it's probably a bug that memblock_enforce_memory_limit() isn't
      __init_memblock instead, there's no reason why we should be doing this
      stuff for secondary cpus in the first place.
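
      A sketch of the shape of the split (the shared helper name is
      illustrative; boot-only work stays __init, the secondary path does not):

      	static void early_init_mmu_common(void)
      	{
      		/* setup that every cpu needs */
      	}

      	void __init early_init_mmu(void)
      	{
      		early_init_mmu_common();
      		/* boot-cpu-only work, e.g. memblock_enforce_memory_limit() */
      	}

      	void early_init_mmu_secondary(void)	/* not __init: CPU hotplug */
      	{
      		early_init_mmu_common();
      	}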
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  2. 09 Aug 2014 (28 commits)
    • ARM: dts: exynos5420: remove disp_pd · ee34fb97
      Committed by Stephen Rothwell
      This was caused by commit 5a8da524 ("ARM: dts: exynos5420: add dsi
      node"), which conflicted with d51cad7d ("ARM: dts: remove display
      power domain for exynos5420").
      
      The DTS addition should never have been merged through the DRM tree in
      the first place, and it lacked an ack from the platform maintainer
      (who would have known that the disp_pd reference got removed).
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Olof Johansson <olof@lixom.net>
    • ARM: EXYNOS: Fix suspend/resume sequences · 01601b34
      Committed by Tomasz Figa
      Due to the recent consolidation of Exynos suspend and cpuidle code, some
      parts of the suspend and resume sequences are executed twice, once from
      exynos_pm_syscore_ops and then from exynos_cpu_pm_notifier(), which
      breaks suspend, at least on Exynos4-based boards.  In addition, a simple
      core power down from the cpuidle driver could, in the case of CPU 0,
      result in calling functions that are specific to suspend and deeper idle
      states.
      
      This patch fixes the issue by moving those operations outside the CPU PM
      notifier into suspend and AFTR code paths. This leads to a bit of code
      duplication, but allows additional code simplification, so in the end
      more code is removed than added.
      
      Fixes: 85f9f908 ("ARM: EXYNOS: Use the cpu_pm notifier for pm")
      Cc: Kukjin Kim <kgene.kim@samsung.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Olof Johansson <olof@lixom.net>
      Cc: arm@kernel.org
      Signed-off-by: Tomasz Figa <t.figa@samsung.com>
      [b.zolnierkie: ported patch over current changes]
      [b.zolnierkie: fixed exynos_aftr_finisher() return value]
      Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Signed-off-by: Olof Johansson <olof@lixom.net>
    • ARM: dts: Fix the sort ordering of EHCI and HSIC in rk3288.dtsi · c9c32c50
      Committed by Doug Anderson
      The EHCI and HSIC device tree nodes were added in the wrong place.
      Fix them.
      Signed-off-by: Doug Anderson <dianders@chromium.org>
      Signed-off-by: Kever Yang <kever.yang@rock-chips.com>
      Signed-off-by: Heiko Stuebner <heiko@sntech.de>
      Signed-off-by: Olof Johansson <olof@lixom.net>
    • ARM: wire up memfd_create syscall · e57e4193
      Committed by Russell King
      Add the memfd_create syscall to ARM.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: wire up getrandom syscall · eb645253
      Committed by Russell King
      Add the new getrandom syscall for ARM.
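
      A user-space usage sketch (raw syscall(2), since no libc wrapper
      existed at the time; assumes headers new enough to define
      SYS_getrandom):

      	#include <sys/syscall.h>
      	#include <unistd.h>

      	char buf[16];
      	/* flags = 0: block until the entropy pool is initialised */
      	long n = syscall(SYS_getrandom, buf, sizeof(buf), 0);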
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8114/1: LPAE: load upper bits of early TTBR0/TTBR1 · 7fb00c2f
      Committed by Konstantin Khlebnikov
      This patch fixes booting when the idmap pgd lies above 4 GB.  Commit
      4756dcbf mostly fixed this, but failed to load the upper bits.

      This also fixes adding TTBR1_OFFSET to TTBR1: if the lower part
      overflows, the carry must be added to the upper part.
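
      The carry handling amounts to doing the addition in 64 bits rather
      than dropping the overflow from the low word.  In C terms (the real
      fix is in assembly; variable names are illustrative):

      	u32 lo = ttbr_lo + TTBR1_OFFSET;	/* low word may wrap... */
      	u32 hi = ttbr_hi + (lo < ttbr_lo);	/* ...so propagate the carry */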
      Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • kexec: verify the signature of signed PE bzImage · 8e7d8381
      Committed by Vivek Goyal
      This is the final piece of the puzzle of verifying the kernel image
      signature during the kexec_file_load() syscall.

      This patch calls into the PE file routines to verify the signature of
      the bzImage.  If the signature is valid, kexec_file_load() succeeds;
      otherwise it fails.

      Two new config options have been introduced.  The first is
      CONFIG_KEXEC_VERIFY_SIG.  This option enforces that the kernel be
      validly signed; otherwise the kernel load will fail.  If this option is
      not set, no signature verification will be done.  The only exception
      will be when secureboot is enabled; in that case signature verification
      should be enforced automatically.  But that will happen when the
      secureboot patches are merged.

      The second config option is CONFIG_KEXEC_BZIMAGE_VERIFY_SIG.  This
      option enables signature verification support for bzImage.  If this
      option is not set and the previous one is, kernel image loading will
      fail because the kernel does not have support to verify the signature
      of a bzImage.
      
      I tested these patches with both "pesign" and "sbsign" signed bzImages.
      
      I used the signing_key.priv key and signing_key.x509 cert for signing,
      as generated during the kernel build process (if module signing is enabled).
      
      I used the following method to sign the bzImage.

      pesign
      ======
      - Convert the DER format cert to a PEM format cert:

      	openssl x509 -in signing_key.x509 -inform DER \
      		-out signing_key.x509.PEM -outform PEM

      - Generate a .p12 file from the existing cert and private key file:

      	openssl pkcs12 -export -out kernel-key.p12 \
      		-inkey signing_key.priv -in signing_key.x509.PEM

      - Import the .p12 file into the pesign db:

      	pk12util -i /tmp/kernel-key.p12 -d /etc/pki/pesign

      - Sign the bzImage:

      	pesign -i /boot/vmlinuz-3.16.0-rc3+ \
      		-o /boot/vmlinuz-3.16.0-rc3+.signed.pesign \
      		-c "Glacier signing key - Magrathea" -s

      sbsign
      ======
      	sbsign --key signing_key.priv --cert signing_key.x509.PEM \
      		--output /boot/vmlinuz-3.16.0-rc3+.signed.sbsign \
      		/boot/vmlinuz-3.16.0-rc3+
      
      Patch details:

      All the hard work is done in the previous patches.  Now the bzImage
      loader just has to call into that code and verify whether the bzImage
      signature is valid or not.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Matt Fleming <matt@console-pimps.org>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec: support kexec/kdump on EFI systems · 6a2c20e7
      Committed by Vivek Goyal
      This patch does two things.  It passes the EFI runtime mappings to the
      second kernel in bootparams efi_info.  The second kernel parses this
      info and creates the same mappings again, so the mappings in the first
      and second kernel will be identical.  This paves the way to enabling
      EFI in the kexec kernel.

      This patch also prepares and passes EFI setup data through bootparams.
      This contains a bunch of information about various tables and their
      addresses.

      This information gathering and passing has been written along the lines
      of what the current kexec-tools does to make kexec work with UEFI.
      
      [akpm@linux-foundation.org: s/get_efi/efi_get/g, per Matt]
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec: support for kexec on panic using new system call · dd5f7260
      Committed by Vivek Goyal
      This patch adds support for loading a kexec on panic (kdump) kernel
      using the new system call.

      It prepares ELF headers for the memory areas to be dumped and for the
      saved cpu registers.  It also prepares the memory map for the second
      kernel and limits its boot to the reserved areas only.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec-bzImage64: support for loading bzImage using 64bit entry · 27f48d3e
      Committed by Vivek Goyal
      This is loader-specific code which can load a bzImage and set it up for
      64-bit entry.  It does not take care of 32-bit entry or real-mode entry.

      32-bit mode entry can be implemented if somebody needs it.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec: load and relocate purgatory at kernel load time · 12db5562
      Committed by Vivek Goyal
      Load purgatory code in RAM and relocate it based on the location.
      Relocation code has been inspired by module relocation code and purgatory
      relocation code in kexec-tools.
      
      Also compute the checksums of loaded kexec segments and store them in
      purgatory.
      
      Arch independent code provides this functionality so that arch dependent
      bootloaders can make use of it.
      
      Helper functions are provided to get/set symbol values in purgatory,
      which are used by bootloaders later to set things like the stack and
      the entry point of the second kernel.
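
      A usage sketch of those helpers (the symbol name is illustrative; the
      helper this patch adds is kexec_purgatory_get_set_symbol()):

      	/* An arch loader pokes the chosen stack address into a
      	 * variable exported by purgatory. */
      	unsigned long stack = stack_load_addr;
      	ret = kexec_purgatory_get_set_symbol(image, "stack", &stack,
      					     sizeof(stack), false /* set */);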
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • purgatory: core purgatory functionality · 8fc5b4d4
      Committed by Vivek Goyal
      Create a stand-alone relocatable object, purgatory, which runs between
      the two kernels.  The name, concept and some code have been taken from
      kexec-tools.  The idea is that this code runs after a crash in a
      minimal environment, so keep it separate from the rest of the kernel;
      in the long term we should have to do practically no maintenance of
      this code.

      This code also has the logic to verify sha256 hashes of the various
      segments which have been loaded into memory.  So first we verify that
      the kernel we are jumping to is fine and has not been corrupted, and
      make progress only if the checksums verify.

      This code also takes care of copying some memory contents to the backup
      region.
      
      [sfr@canb.auug.org.au: run host built programs from objtree]
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • purgatory/sha256: provide implementation of sha256 in purgatory context · daeba064
      Committed by Vivek Goyal
      The next two patches provide code for purgatory.  This is code which
      does not link against the kernel and runs stand-alone.  This code runs
      between two kernels.  One of the primary purposes of this code is to
      verify the digest of the newly loaded kernel and make sure it matches
      the digest computed at kernel load time.

      We use sha256 for calculating the digest of the kexec segments.
      Purgatory can't use the standard crypto API as that API is not
      available in the purgatory context.

      Hence, I have copied code from crypto/sha256_generic.c and compiled it
      with the purgatory code so that it could be used.  I could not #include
      the sha256_generic.c file here as some of the function signatures
      required a little tweaking.  The original functions work with the
      crypto API, but these ones don't.

      So instead of doing #include on sha256_generic.c I just copied the
      relevant portions of code into arch/x86/purgatory/sha256.c.  Now we
      shouldn't have to touch this code at all.  Do let me know if there are
      better ways to handle it.

      This patch does not enable compiling of this code.  That happens in the
      next patch.  I wanted to highlight this change in a separate patch for
      easy review.
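
      A sketch of how the copied routines get used to check a loaded
      segment (types as in the copied sha256 code; the buffer names are
      illustrative):

      	struct sha256_state sctx;
      	u8 digest[SHA256_DIGEST_SIZE];

      	sha256_init(&sctx);
      	sha256_update(&sctx, segment_start, segment_len);
      	sha256_final(&sctx, digest);
      	if (memcmp(digest, expected_digest, sizeof(digest)))
      		return 1;	/* refuse to jump to a corrupted kernel */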
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec: implementation of new syscall kexec_file_load · cb105258
      Committed by Vivek Goyal
      The previous patch provided the interface definition, and this patch
      provides the implementation of the new syscall.

      Previously the segment list was prepared in user space.  Now user space
      just passes the kernel fd, initrd fd and command line, and the kernel
      creates a segment list internally.

      This patch contains the generic part of the code.  The actual segment
      preparation and loading is done by the arch- and image-specific loader,
      which comes in the next patch.
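
      A user-space usage sketch of the new syscall (raw syscall(2);
      __NR_kexec_file_load is wired up only on x86_64 at this point, and
      the paths are placeholders):

      	#include <fcntl.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>

      	int kernel_fd = open("/boot/vmlinuz", O_RDONLY);
      	int initrd_fd = open("/boot/initrd.img", O_RDONLY);
      	const char cmdline[] = "root=/dev/sda1";

      	/* cmdline_len counts the trailing NUL; flags == 0 */
      	long ret = syscall(__NR_kexec_file_load, kernel_fd, initrd_fd,
      			   sizeof(cmdline), cmdline, 0UL);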
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kexec: new syscall kexec_file_load() declaration · f0895685
      Committed by Vivek Goyal
      This is the new syscall kexec_file_load() declaration/interface.  I have
      reserved the syscall number only for x86_64 so far.  Other architectures
      (including i386) can reserve a syscall number when they enable support
      for this new syscall.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel: build bin2c based on config option CONFIG_BUILD_BIN2C · de5b56ba
      Committed by Vivek Goyal
      Currently bin2c builds only if CONFIG_IKCONFIG=y, but bin2c will now be
      used by kexec too.  So make its compilation dependent on
      CONFIG_BUILD_BIN2C, a config option that can be selected by CONFIG_KEXEC
      and CONFIG_IKCONFIG.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: Eric Biederman <ebiederm@xmission.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Matthew Garrett <mjg59@srcf.ucam.org>
      Cc: Greg Kroah-Hartman <greg@kroah.com>
      Cc: Dave Young <dyoung@redhat.com>
      Cc: WANG Chao <chaowang@redhat.com>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • shm: add memfd_create() syscall · 9183df25
      Committed by David Herrmann
      memfd_create() is similar to mmap(MAP_ANON), but returns a file-descriptor
      that you can pass to mmap().  It can support sealing and avoids any
      connection to user-visible mount-points.  Thus, it's not subject to quotas
      on mounted file-systems, but can be used like malloc()'ed memory, with
      a file-descriptor to it.

      memfd_create() returns the raw shmem file, so calls like ftruncate() can
      be used to modify the underlying inode.  Also calls like fstat() will
      return proper information and mark the file as a regular file.  If you
      want sealing, you can specify MFD_ALLOW_SEALING.  Otherwise, sealing is
      not supported (as on all other regular files).
      
      Compared to O_TMPFILE, it does not require a tmpfs mount-point and is not
      subject to a filesystem size limit.  It is still properly accounted to
      memcg limits, though, and to the same overcommit or no-overcommit
      accounting as all user memory.
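
      A user-space sketch of that call sequence (assuming a memfd_create()
      wrapper; at the time you would go through
      syscall(__NR_memfd_create, ...) instead):

      	#define _GNU_SOURCE
      	#include <sys/mman.h>
      	#include <unistd.h>

      	int fd = memfd_create("demo", MFD_ALLOW_SEALING);
      	if (fd >= 0 && ftruncate(fd, 4096) == 0) {
      		/* a regular-looking file, backed by shmem */
      		void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
      			       MAP_SHARED, fd, 0);
      	}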
      Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ryan Lortie <desrt@desrt.ca>
      Cc: Lennart Poettering <lennart@poettering.net>
      Cc: Daniel Mack <zonque@gmail.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/x86: replace strict_strto calls · 164109e3
      Committed by Daniel Walter
      Replace obsolete strict_strto calls with the appropriate kstrto calls.
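
      The conversions in this series all follow the same shape; a
      before/after sketch:

      	unsigned long val;

      	/* old, obsolete: */
      	if (strict_strtoul(buf, 10, &val))
      		return -EINVAL;

      	/* new: */
      	if (kstrtoul(buf, 10, &val))
      		return -EINVAL;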
      Signed-off-by: Daniel Walter <dwalter@google.com>
      Acked-by: Borislav Petkov <bp@suse.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/powerpc: replace obsolete strict_strto* calls · 1618bd53
      Committed by Daniel Walter
      Replace strict_strto calls with the more appropriate kstrto calls.
      Signed-off-by: Daniel Walter <dwalter@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/arm/mach-w90x900/cpu.c: replace obsolete strict_strto · 37028623
      Committed by Daniel Walter
      Replace the obsolete strict_strto with kstrto calls.
      Signed-off-by: Daniel Walter <dwalter@google.com>
      Cc: Wan ZongShun <mcuos.com@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/arm/mach-s3c24xx/mach-jive.c: replace strict_strto* with kstrto* · 5c2432cb
      Committed by Daniel Walter
      Replace the obsolete strict_strto call with kstrto.
      Signed-off-by: Daniel Walter <dwalter@google.com>
      Cc: Ben Dooks <ben-linux@fluff.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/arm/mach-pxa: replace strict_strto call with kstrto · 4fce45b4
      Committed by Daniel Walter
      Replace the obsolete call to strict_strto with kstrto.
      Signed-off-by: Daniel Walter <dwalter@google.com>
      Cc: Eric Miao <eric.y.miao@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/arm/mach-omap2: replace strict_strto* with kstrto* · 056610ac
      Committed by Daniel Walter
      Replace obsolete strict_strto calls with kstrto calls.
      Simplify copy_from_user/strict_strto by using kstrto_from_user.
      Signed-off-by: Daniel Walter <dwalter@google.com>
      Cc: Tony Lindgren <tony@atomide.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arm64,ia64,ppc,s390,sh,tile,um,x86,mm: remove default gate area · a6c19dfe
      Committed by Andy Lutomirski
      The core mm code will provide a default gate area based on
      FIXADDR_USER_START and FIXADDR_USER_END if
      !defined(__HAVE_ARCH_GATE_AREA) && defined(AT_SYSINFO_EHDR).
      
      This default is only useful for ia64.  arm64, ppc, s390, sh, tile, 64-bit
      UML, and x86_32 have their own code just to disable it.  arm, 32-bit UML,
      and x86_64 have gate areas, but they have their own implementations.
      
      This gets rid of the default and moves the code into ia64.
      
      This should save some code on architectures without a gate area: it's now
      possible to inline the gate_area functions in the default case.
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Acked-by: Nathan Lynch <nathan_lynch@mentor.com>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [in principle]
      Acked-by: Richard Weinberger <richard@nod.at> [for um]
      Acked-by: Will Deacon <will.deacon@arm.com> [for arm64]
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Nathan Lynch <Nathan_Lynch@mentor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/scatterlist: clean up useless architecture versions of scatterlist.h · 56106da7
      Committed by Laura Abbott
      There's no need to have an architecture version of scatterlist.h if the
      only thing the file does is include asm-generic/scatterlist.h.  Switch to
      the asm-generic versions directly.
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
      Acked-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib/scatterlist: make ARCH_HAS_SG_CHAIN an actual Kconfig · 308c09f1
      Committed by Laura Abbott
      Rather than have architectures #define ARCH_HAS_SG_CHAIN in an
      architecture-specific scatterlist.h, make it a proper Kconfig option and
      use that instead.  At the same time, remove the header files that are
      now mostly useless and just include asm-generic/scatterlist.h.
      
      [sfr@canb.auug.org.au: powerpc files now need asm/dma.h]
      Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>			[x86]
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>	[powerpc]
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • rtc: ia64: allow other architectures to use EFI RTC · da167ad7
      Committed by Mark Salter
      Currently, the rtc-efi driver is restricted to ia64 only.  Newer
      architectures with EFI support may want to also use that driver.  This
      patch moves the platform device setup from ia64 into drivers/rtc and
      allows any architecture with CONFIG_EFI=y to use the rtc-efi driver.
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Cc: Alessandro Zummo <a.zummo@towertech.it>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 08 Aug 2014 (3 commits)
  4. 07 Aug 2014 (3 commits)