- 11 September 2015, 6 commits
-
-
Committed by Christoph Hellwig
Almost everyone implements dma_set_mask the same way, although sometimes it is hidden in ->set_dma_mask methods. This patch consolidates those into a common implementation that either calls ->set_dma_mask if present or otherwise uses the default implementation. Some architectures used to only call ->set_dma_mask after the initial checks, and those instances have been fixed to do the full work. h8300 implemented dma_set_mask bogusly as a no-op and has been fixed. Unfortunately some architectures overload it with unrelated semantics such as changing the dma_ops, so we still need to allow for an architecture override for now.

[jcmvbkbc@gmail.com: fix xtensa]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
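A minimal sketch of the consolidated helper described above, in the style of include/linux/dma-mapping.h: call ->set_dma_mask when the architecture's dma_map_ops provides it, otherwise do the default check-and-store. The HAVE_ARCH_DMA_SET_MASK opt-out models the architecture override mentioned in the message and is an approximation, not necessarily the merged code.

    #ifndef HAVE_ARCH_DMA_SET_MASK
    static inline int dma_set_mask(struct device *dev, u64 mask)
    {
        struct dma_map_ops *ops = get_dma_ops(dev);

        /* some architectures still hide the real work in ->set_dma_mask */
        if (ops->set_dma_mask)
            return ops->set_dma_mask(dev, mask);

        /* default: reject masks the device or platform cannot use */
        if (!dev->dma_mask || !dma_supported(dev, mask))
            return -EIO;
        *dev->dma_mask = mask;
        return 0;
    }
    #endif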
-
Committed by Christoph Hellwig
Most architectures just call into ->dma_supported, but some also return 1 if the method is not present, or 0 if no dma ops are present (although that should never happen). Consolidate this broader version into common code. Also fix h8300, which incorrectly always returned 0, which would have been a problem if its dma_set_mask implementation weren't a similarly buggy no-op. As a few architectures have much more elaborate implementations, we still allow for arch overrides.

[jcmvbkbc@gmail.com: fix xtensa]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
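A sketch of the common dma_supported() this describes (approximate, with the same opt-out convention as above): return 0 when there are no dma_map_ops at all, 1 when the method is absent, and otherwise defer to the method.

    #ifndef HAVE_ARCH_DMA_SUPPORTED
    static inline int dma_supported(struct device *dev, u64 mask)
    {
        struct dma_map_ops *ops = get_dma_ops(dev);

        if (!ops)
            return 0;            /* no dma ops: nothing is supported */
        if (!ops->dma_supported)
            return 1;            /* no method: assume the mask is fine */
        return ops->dma_supported(dev, mask);
    }
    #endif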
-
Committed by Christoph Hellwig
Currently there are three valid implementations of dma_mapping_error:

(1) call ->mapping_error
(2) check for a hardcoded error code
(3) always return 0

This patch provides a common implementation that calls ->mapping_error if present, then checks for DMA_ERROR_CODE if defined, or otherwise returns 0.

[jcmvbkbc@gmail.com: fix xtensa]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
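The three cases collapse naturally into one helper; a sketch under the same assumptions (details approximate):

    static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
    {
        struct dma_map_ops *ops = get_dma_ops(dev);

        if (ops->mapping_error)
            return ops->mapping_error(dev, dma_addr);   /* case (1) */
    #ifdef DMA_ERROR_CODE
        return dma_addr == DMA_ERROR_CODE;              /* case (2) */
    #else
        return 0;                                       /* case (3) */
    #endif
    }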
-
Committed by Christoph Hellwig
Most architectures do not support non-coherent allocations and either define dma_{alloc,free}_noncoherent to their coherent versions or stub them out. Openrisc uses dma_{alloc,free}_attrs to implement them, and only MIPS implements them directly. This patch moves the Openrisc version to common code, and handles the DMA_ATTR_NON_CONSISTENT case in the MIPS dma_map_ops instance. Note that actual non-coherent allocations require a dma_cache_sync implementation, so if non-coherent allocations didn't work on an architecture before this patch they still won't work after it.

[jcmvbkbc@gmail.com: fix xtensa]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
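A sketch of the generic wrappers (the Openrisc approach promoted to common code): express the non-coherent calls in terms of dma_{alloc,free}_attrs with DMA_ATTR_NON_CONSISTENT set, and let the architecture's dma_map_ops decide what that attribute means. This uses the struct dma_attrs helpers of that era; treat the exact shape as approximate.

    static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
                                              dma_addr_t *dma_handle, gfp_t gfp)
    {
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
        return dma_alloc_attrs(dev, size, dma_handle, gfp, &attrs);
    }

    static inline void dma_free_noncoherent(struct device *dev, size_t size,
                                            void *cpu_addr, dma_addr_t dma_handle)
    {
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
        dma_free_attrs(dev, size, cpu_addr, dma_handle, &attrs);
    }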
-
Committed by Christoph Hellwig
Since 2009 we have had a nice asm-generic header implementing lots of DMA API functions for architectures using struct dma_map_ops, but unfortunately it's still missing a lot of APIs that all architectures still have to duplicate. This series consolidates the remaining functions, although we still need arch opt-outs for two of them, as a few architectures have very non-standard implementations.

This patch (of 5): The coherent DMA allocator works the same over all architectures supporting dma_map operations. This patch consolidates them and converges the minor differences:

- the debug_dma helpers are now called from all architectures, including those that were previously missing them
- dma_alloc_from_coherent and dma_release_from_coherent are now always called from the generic alloc/free routines instead of the ops; dma-mapping-common.h always includes dma-coherent.h to get the definitions for them, or the stubs if the architecture doesn't support this feature
- checks for ->alloc / ->free presence are removed; there is only one instance of dma_map_ops without them (mic_dma_ops) and that one is x86 only anyway

Besides that, only x86 needs special treatment to replace a default device if none is passed and to tweak the gfp_flags. An optional arch hook is provided for that.

[linux@roeck-us.net: fix build]
[jcmvbkbc@gmail.com: fix xtensa]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
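A sketch of the consolidated coherent allocation path described above (names follow the commit message; the exact shape is approximate): try the per-device coherent pool first, give the architecture a chance to fix up the device and gfp flags (the optional x86 hook), then call into the dma_map_ops and record the allocation for DMA debugging.

    static inline void *dma_alloc_attrs(struct device *dev, size_t size,
                                        dma_addr_t *dma_handle, gfp_t flag,
                                        struct dma_attrs *attrs)
    {
        struct dma_map_ops *ops = get_dma_ops(dev);
        void *cpu_addr;

        BUG_ON(!ops);

        /* per-device coherent memory, or the stub if unsupported */
        if (dma_alloc_from_coherent(dev, size, dma_handle, &cpu_addr))
            return cpu_addr;

        /* arch hook: e.g. x86 substitutes a default device, tweaks flags */
        if (!arch_dma_alloc_attrs(&dev, &flag))
            return NULL;
        if (!ops->alloc)
            return NULL;

        cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);
        debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
        return cpu_addr;
    }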
-
Committed by Dave Young
There are two kexec load syscalls: kexec_load and kexec_file_load. kexec_file_load has been split out into kernel/kexec_file.c. In this patch I split the kexec_load syscall code out into kernel/kexec.c, and add a new kconfig option KEXEC_CORE, so we can disable kexec_load and use kexec_file_load only, or vice versa.

The original requirement is from Ted Ts'o, who wanted the kexec kernel signature to be checked with CONFIG_KEXEC_VERIFY_SIG enabled, but kexec-tools can bypass that check by using the kexec_load syscall. Vivek Goyal proposed to create a common kconfig option so users can compile in only one syscall for loading the kexec kernel. KEXEC/KEXEC_FILE select KEXEC_CORE so that old config files still work.

Because there is generic code that needs CONFIG_KEXEC_CORE, I updated all the architecture Kconfigs with the new option KEXEC_CORE, and let KEXEC select KEXEC_CORE in the arch Kconfig. General kernel code related to the kexec_load syscall is also updated.

[akpm@linux-foundation.org: coding-style fixes]

Signed-off-by: Dave Young <dyoung@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Petr Tesarik <ptesarik@suse.cz>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Josh Boyer <jwboyer@fedoraproject.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
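An illustration of how shared code is expected to key off the new option: CONFIG_KEXEC_CORE is selected by both KEXEC and KEXEC_FILE, so code common to both syscalls tests it rather than CONFIG_KEXEC. The function below is a made-up example, not a hunk from this commit.

    #include <linux/kexec.h>

    static void example_shutdown_step(void)
    {
    #ifdef CONFIG_KEXEC_CORE
        /* built whenever either load syscall is configured in */
        if (kexec_in_progress)
            return;    /* a kexec reboot can skip this work */
    #endif
        /* ... normal shutdown work ... */
    }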
-
- 09 September 2015, 1 commit
-
-
Committed by Vlastimil Babka
alloc_pages_exact_node() was introduced in commit 6484eb3e ("page allocator: do not check NUMA node ID when the caller knows the node is valid") as an optimized variant of alloc_pages_node() that doesn't fall back to the current node for nid == NUMA_NO_NODE. Unfortunately the name of the function can easily suggest that the allocation is restricted to the given node and fails otherwise. In truth, the node is only preferred, unless __GFP_THISNODE is passed among the gfp flags.

The misleading name has led to mistakes in the past, see for example commits 5265047a ("mm, thp: really limit transparent hugepage allocation to local node") and b360edb4 ("mm, mempolicy: migrate_to_node should only migrate to node"). Another issue with the name is that there's a family of alloc_pages_exact*() functions where 'exact' means exact size (instead of page order), which leads to more confusion.

To prevent further mistakes, this patch effectively renames alloc_pages_exact_node() to __alloc_pages_node() to better convey that it's an optimized variant of alloc_pages_node() not intended for general usage. Both functions get described in comments.

It has also been considered to really provide a convenience function for allocations restricted to a node, but the prevailing opinion seems to be that __GFP_THISNODE already provides that functionality and we shouldn't duplicate the API needlessly. The number of users would be small anyway.

Existing callers of alloc_pages_exact_node() are simply converted to call __alloc_pages_node(), with the exception of sba_alloc_coherent(), which open-codes the check for NUMA_NO_NODE, so it is converted to use alloc_pages_node() instead. This means it no longer performs some VM_BUG_ON checks, and since the current check for nid in alloc_pages_node() uses a 'nid < 0' comparison (which includes NUMA_NO_NODE), it may hide wrong values which would previously be exposed. Both differences will be rectified by the next patch.

To sum up, this patch makes no functional changes, except temporarily hiding potentially buggy callers. Restricting the checks in alloc_pages_node() is left for the next patch, which can in turn expose more existing buggy callers.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Robin Holt <robinmholt@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Mel Gorman <mgorman@suse.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Gleb Natapov <gleb@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Cliff Whickman <cpw@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
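A sketch of the two helpers after the rename, matching the behaviour described above (approximate, not a verbatim excerpt): __alloc_pages_node() requires a valid, online node, while alloc_pages_node() keeps the "unknown node means current node" convenience via the 'nid < 0' check that the next patch tightens.

    static inline struct page *__alloc_pages_node(int nid, gfp_t gfp_mask,
                                                  unsigned int order)
    {
        VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);

        return __alloc_pages(gfp_mask, order, node_zonelist(nid, gfp_mask));
    }

    static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
                                                unsigned int order)
    {
        if (nid < 0)            /* includes NUMA_NO_NODE */
            nid = numa_node_id();

        return __alloc_pages_node(nid, gfp_mask, order);
    }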
-
- 04 September 2015, 1 commit
-
-
Committed by Michal Marek
We cannot detect clang before including the arch Makefile, because that can set the default cross compiler. We also cannot detect clang after including the arch Makefile, because powerpc wants to know about clang. Solve this by using a deferred variable. This costs us a few shell invocations, but only a constant number of them.

Reported-by: Behan Webster <behanw@converseincode.com>
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michal Marek <mmarek@suse.com>
-
- 28 August 2015, 5 commits
-
-
Committed by Gavin Shan
The config space of some PCI devices can't be accessed when their PEs are in the frozen state; otherwise, a fenced PHB might be seen. Those PEs are identified with the flag EEH_PE_CFG_RESTRICTED, meaning EEH_PE_CFG_BLOCKED is set automatically when the PE is put into the frozen state (EEH_PE_ISOLATED). eeh_slot_error_detail() restores PCI device BARs with eeh_pe_restore_bars(), which then calls eeh_ops->restore_config() to reinitialize the PCI device in (OPAL) firmware. eeh_ops->restore_config() produces PCI config accesses that cause a fenced PHB. The problem was reported on the adapter below:

0001:01:00.0 0200: 14e4:168e (rev 10)
0001:01:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

This fixes the issue by skipping eeh_pe_restore_bars() in eeh_slot_error_detail() when EEH_PE_CFG_BLOCKED is set for the PE.

Fixes: b6541db1 ("powerpc/eeh: Block PCI config access upon frozen PE")
Cc: stable@vger.kernel.org # v4.0+
Reported-by: Manvanthara B. Puttashankar <mputtash@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
This applies cleanups to pci_dn_reconfig_notifier(), with no functional changes:

* Rename variable "pci" to "pdn" to indicate its purpose clearly.
* The parent node can be released at any time, so it should be held with of_get_parent() before accessing it.
* In theory the device node doesn't have to have a parent node, so add more checking for this.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
Commit cca87d30 ("powerpc/pci: Refactor pci_dn") introduced a pdn list for SRIOV VFs. It means the pdn is put into the child list of its parent pdn when the pdn is created. When doing PCI hot unplugging on pSeries, the PCI device node as well as its pdn are released through the procfs entry "powerpc/ofdt". Someone else then grabs the memory chunk of the pdn and updates it accordingly. At the same time, the pdn is still tracked in the child list of the parent pdn. It leads to a corrupted child list in the parent pdn.

This fixes the above issue by removing the pdn from the child list of its parent pdn when the device node is detached from the system. Note the pdn is freed when the device node is released if the device node is a dynamic one. Otherwise, the device node as well as the pdn won't be released.

Fixes: cca87d30 ("powerpc/pci: Refactor pci_dn")
Cc: stable@vger.kernel.org # 4.1+
Reported-by: Santwana Samantray <santwana.samantray@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Dan Williams
While pmem is usable as a block device or via DAX mappings to userspace, there are several usage scenarios that cannot target pmem due to its lack of struct page coverage. In preparation for "hot plugging" pmem into the vmemmap, add ZONE_DEVICE as a new zone to tag these pages separately from the ones that are subject to standard page allocations. Importantly, "device memory" can be removed at will by userspace unbinding the driver of the device. Having a separate zone prevents allocation and otherwise marks these pages as distinct from typical uniform memory. Device memory has different lifetime and performance characteristics than RAM. However, since we have run out of ZONES_SHIFT bits, this functionality currently depends on sacrificing ZONE_DMA.

Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Jerome Glisse <j.glisse@gmail.com>
[hch: various simplifications in the arch interface]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Committed by Dan Williams
None of the implementations currently use it. The common bdev_direct_access() entry point handles all the size checks before calling ->direct_access().

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 27 August 2015, 2 commits
-
-
Committed by Vasant Hegde
Commit 84ad6e5c added LEDS support for the PowerNV platform. Let's update ppc64_defconfig to pick up the LEDS driver. The PowerNV LEDS driver looks for the "/ibm,opal/leds" node in the device tree and loads if this node exists, hence it is added as 'm'. Also note that the powernv LEDS driver needs NEW_LEDS and LEDS_CLASS as well, hence they are added to the config file.

mpe: Also add them to pseries_defconfig, which is currently also used for powernv systems.

Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Cc: Stewart Smith <stewart@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Alexey Kardashevskiy
Commit e91c2511 ("powerpc/iommu: Cleanup setting of DMA base/offset") expects the default DMA offset to be set from pnv_ioda_setup_bus_dma(), which is correct except for SRIOV, where the code flow is different: at the moment when pnv_ioda_setup_bus_dma() is called, the PCI devices for VFs are not created yet. This adds the missing set_dma_offset() to pnv_pci_ioda_dma_dev_setup() to cover the SRIOV case. Note that we still need set_dma_offset() in pnv_ioda_setup_bus_dma(), as at boot time pnv_pci_ioda_dma_dev_setup() is called when no PE has been created yet; this happens at the PHB fixup stage.

Fixes: e91c2511 ("powerpc/iommu: Cleanup setting of DMA base/offset")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 26 August 2015, 1 commit
-
-
Committed by Guilherme G. Piccoli
Since commit 1851617c ("PCI/MSI: Disable MSI at enumeration even if kernel doesn't support MSI"), the setup of dev->msi_cap/msix_cap and the disabling of MSI/MSI-X interrupts isn't done at PCI probe time, as the logic responsible for this was moved by the aforementioned commit from pci_device_add() to pci_setup_device(). The latter function is not reachable on the PowerPC pSeries platform during Open Firmware PCI probing. This shows up as drivers not being able to enable MSI, e.g.:

bnx2x 0000:01:00.0: no msix capability found

This patch calls pci_msi_setup_pci_dev() explicitly to disable MSI/MSI-X during PCI probe time on the pSeries platform.

Fixes: 1851617c ("PCI/MSI: Disable MSI at enumeration even if kernel doesn't support MSI")
[mpe: Flesh out change log and clarify comment]
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
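A sketch of the fix's shape (the helper name example_of_scan_fixup is hypothetical; the real change lives in the powerpc Open Firmware PCI scan code): the capability setup normally done from pci_setup_device() is invoked explicitly for devices enumerated from the device tree.

    static void example_of_scan_fixup(struct pci_dev *dev)
    {
        /* records dev->msi_cap / dev->msix_cap and disables any
         * MSI/MSI-X left enabled by firmware */
        pci_msi_setup_pci_dev(dev);
    }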
-
- 22 August 2015, 1 commit
-
-
Committed by Michael Ellerman
When I merged the OPAL support for the powernv LEDS driver I missed a hunk. This is slightly modified from the original patch, as the original added code to opal-api.h which is not in the skiboot version, which is discouraged. Instead those values are moved into the driver, which is the only place they are used.

Fixes: 8a8d9181 ("powerpc/powernv: Add OPAL interfaces for accessing and modifying system LED states")
Reviewed-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 21 August 2015, 1 commit
-
-
Committed by Ross Zwisler
Update the annotation for the kaddr pointer returned by direct_access() so that it is a __pmem pointer. This is consistent with the PMEM driver and with how this direct_access() pointer is used in the DAX code.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
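For reference, the method signature this converges on looks roughly like the excerpt below (reproduced from memory, so treat the details as approximate); the size argument is already gone per the earlier patch in this series.

    struct block_device_operations_excerpt {
        long (*direct_access)(struct block_device *bdev, sector_t sector,
                              void __pmem **kaddr, unsigned long *pfn);
    };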
-
- 20 August 2015, 5 commits
-
-
Committed by Samuel Mendoza-Jonas
On powernv, secondary CPUs are returned to OPAL and will then enter the target kernel in big-endian. However, if the HILE bit is set it will persist, causing the first exception in the target kernel to be delivered in little-endian regardless of the current endianness. If running on top of OPAL, make sure the HILE bit is reset once we've finished waiting for all of the secondaries to be returned to OPAL.

Signed-off-by: Samuel Mendoza-Jonas <sam.mj@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Samuel Mendoza-Jonas
If the target kernel does not include the FIXUP_ENDIAN check, coming from a different-endian kernel will cause the target kernel to panic. All ppc64 kernels can handle starting in big-endian mode, so return to big-endian before branching into the target kernel. This mainly affects pseries, as secondaries on powernv are returned to OPAL.

Signed-off-by: Samuel Mendoza-Jonas <sam.mj@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Vasant Hegde
This patch adds platform devices for LEDs. Also export the LED-related OPAL APIs so that the LED driver can use them.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch registers the following two new OPAL interface calls for the platform LED subsystem. With the help of these new OPAL calls, the kernel will be able to get or set the state of various individual LEDs on the system at any given location code, which is passed through the LED-specific device tree nodes.

(1) OPAL_LEDS_GET_INDICATOR  opal_leds_get_ind
(2) OPAL_LEDS_SET_INDICATOR  opal_leds_set_ind

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Acked-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Tested-by: Stewart Smith <stewart@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Wei Yang
On the powernv platform, the IOV BAR is shifted if necessary. However, the log message is not correct when disabling VFs. This patch fixes this by printing the correct message based on the offset value.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 19 August 2015, 2 commits
-
-
Committed by Gerhard Sittig
The PPC_MPC512x config automatically selected USB_EHCI_BIG_ENDIAN_* switches, which made Kconfig warn about "unmet direct dependencies":

scripts/kconfig/conf --silentoldconfig Kconfig
warning: (PPC_MPC512x && 440EPX) selects USB_EHCI_BIG_ENDIAN_DESC which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD)
warning: (PPC_MPC512x && PPC_PS3 && PPC_CELLEB && 440EPX) selects USB_EHCI_BIG_ENDIAN_MMIO which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD)
warning: (PPC_MPC512x && 440EPX) selects USB_EHCI_BIG_ENDIAN_DESC which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD)
warning: (PPC_MPC512x && PPC_PS3 && PPC_CELLEB && 440EPX) selects USB_EHCI_BIG_ENDIAN_MMIO which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD)

Make the selected entries additionally depend on USB_EHCI_HCD, which silences the warning.

Signed-off-by: Gerhard Sittig <gsi@denx.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Hari Bathini
Pstore only supports one backend at a time. The preferred pstore backend is set by passing the pstore.backend=<name> argument to the kernel at boot time. Currently, while trying to register with pstore, nvram throws an error message even when "pstore.backend != nvram", which is unnecessary. This patch removes the error message in case "pstore.backend != nvram".

Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 18 August 2015, 15 commits
-
-
Committed by Andrzej Hajda
The patch was generated using the fixed coccinelle semantic patch scripts/coccinelle/api/memdup.cocci [1].

[1]: http://permalink.gmane.org/gmane.linux.kernel/2014320

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
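The transformation the semantic patch performs, in miniature (an illustrative example, not the actual hunk from this commit): an open-coded kmalloc()+memcpy() pair becomes a single kmemdup() call.

    #include <linux/slab.h>
    #include <linux/string.h>

    static void *copy_blob(const void *src, size_t len)
    {
        /* before:
         *     dst = kmalloc(len, GFP_KERNEL);
         *     if (!dst)
         *         return NULL;
         *     memcpy(dst, src, len);
         *     return dst;
         */
        return kmemdup(src, len, GFP_KERNEL);   /* after */
    }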
-
Committed by Andrzej Hajda
The patch was generated using the fixed coccinelle semantic patch scripts/coccinelle/api/memdup.cocci [1].

[1]: http://permalink.gmane.org/gmane.linux.kernel/2014320

Signed-off-by: Andrzej Hajda <a.hajda@samsung.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
pcibios_set_pcie_reset_state() could be called to complete a reset request when passing through a PCI device. The flag EEH_PE_ISOLATED is set before saving the PCI config space. On some Broadcom adapters, EEH_PE_CFG_BLOCKED is automatically set when the flag EEH_PE_ISOLATED is marked. This caused bogus data to be saved from the PCI config space, which is then restored to the PCI adapter after the reset. Eventually, the hardware can't work with the corrupted data in its PCI config space.

The patch fixes the issue with eeh_pe_state_mark_no_cfg(), which doesn't set EEH_PE_CFG_BLOCKED when seeing EEH_PE_ISOLATED on the PE, in order to avoid the bogus data being saved and restored to the PCI config space.

Reported-by: Rajanikanth H. Adaveeshaiah <rajanikanth.ha@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
This adds include/uapi/asm/eeh.h to kbuild so that the header file will be exported automatically with the command below. The header file was added by commit ed3e81ff ("powerpc/eeh: Move PE state constants around").

make INSTALL_HDR_PATH=/tmp/headers SRCARCH=powerpc headers_install

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Vaibhav Jain
A working RTC kernel driver is needed so that hwclock can synchronize the system clock to the RTC during shutdown/boot. We already have a powernv platform RTC driver located at drivers/rtc/rtc-opal.c. However, it depends on CONFIG_RTC_CLASS, which is disabled by default, hence the driver isn't enabled and isn't compiled for the powernv kernel. We fix this by enabling RTC class support in the pseries defconfig, which enables this driver and compiles it into the pseries kernel. In case CONFIG_PPC_POWERNV is not enabled, we fall back to the 'Generic RTC support' driver, which emulates the legacy 'PC RTC driver'.

Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nikunj A Dadhania
In some situations, a NUMA guest that supports the ibm,dynamic-memory-reconfiguration node will end up having flat NUMA distances between nodes. This is because of two problems in the current code.

1) Different representations of associativity lists.

There is an assumption about the associativity list in initialize_distance_lookup_table(). The associativity list has two forms:

a) [cpu,memory]@x/ibm,associativity has the following format:
   <N> <N integers>

b) ibm,dynamic-reconfiguration-memory/ibm,associativity-lookup-arrays:
   <M> <N> <M associativity lists each having N integers>
   M = the number of associativity lists
   N = the number of entries per associativity list

Fix initialize_distance_lookup_table() so that it does not assume "case a", and update the caller to skip the length field before sending the associativity list.

2) Distance table not getting updated from the drconf path.

The node distance table will not get initialized in certain cases, as the ibm,dynamic-reconfiguration-memory path does not initialize the lookup table. Call initialize_distance_lookup_table() from the drconf path with the appropriate associativity list.

Reported-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Andrew Donnellan
Simplify the dma_get_required_mask call chain by moving it from pnv_phb to pci_controller_ops, similar to commit 763d2d8d ("powerpc/powernv: Move dma_set_mask from pnv_phb to pci_controller_ops").

Previous call chain:
0) call dma_get_required_mask() (kernel/dma.c)
1) call ppc_md.dma_get_required_mask, if it exists. On powernv, that points to pnv_dma_get_required_mask() (platforms/powernv/setup.c)
2) device is PCI, therefore call pnv_pci_dma_get_required_mask() (platforms/powernv/pci.c)
3) call phb->dma_get_required_mask if it exists
4) it only exists in the ioda case, where it points to pnv_pci_ioda_dma_get_required_mask() (platforms/powernv/pci-ioda.c)

New call chain:
0) call dma_get_required_mask() (kernel/dma.c)
1) device is PCI, therefore call pci_controller_ops.dma_get_required_mask if it exists
2) in the ioda case, that points to pnv_pci_ioda_dma_get_required_mask() (platforms/powernv/pci-ioda.c)

In the p5ioc2 case, the call chain remains the same - dma_get_required_mask() does not find either a ppc_md call or pci_controller_ops call, so it calls __dma_get_required_mask().

Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
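A simplified sketch of the new dispatch (field and helper names follow the commit message and may not match the sources exactly): for PCI devices the PHB's pci_controller_ops hook is tried first, and everything else falls back to the generic calculation.

    u64 dma_get_required_mask(struct device *dev)
    {
        if (dev_is_pci(dev)) {
            struct pci_dev *pdev = to_pci_dev(dev);
            struct pci_controller *phb = pci_bus_to_host(pdev->bus);

            if (phb->controller_ops.dma_get_required_mask)
                return phb->controller_ops.dma_get_required_mask(pdev);
        }

        return __dma_get_required_mask(dev);
    }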
-
Committed by Michael Ellerman
The relation between CONFIG_PPC_HAS_HASH_64K and CONFIG_PPC_64K_PAGES is painfully complicated. But if we rearrange it enough we can see that PPC_HAS_HASH_64K essentially depends on PPC_STD_MMU_64 && PPC_64K_PAGES. We can then notice that PPC_HAS_HASH_64K is used in files that are only built for PPC_STD_MMU_64, meaning it's equivalent to PPC_64K_PAGES. So replace all uses and drop it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
-
Committed by Michael Ellerman
For config options with only a single value, guarding the single value with 'if' is the same as adding a 'depends' statement, and it's more standard to just use 'depends'. And if the option has both an 'if' guard and a 'depends', we can collapse them into a single 'depends' by combining them with &&.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
-
Committed by Michael Ellerman
Now that support for 64K pages with a 4K kernel is removed, this code is unreachable. CONFIG_PPC_HAS_HASH_64K can only be true when CONFIG_PPC_64K_PAGES is also true. But when CONFIG_PPC_64K_PAGES is true we include pte-hash64.h, which includes pte-hash64-64k.h, which defines both pte_pagesize_index() and, crucially, __real_pte, which means this definition can never be used.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
-
Committed by Michael Ellerman
Back in the olden days we added support for using 64K pages to map the SPU (Synergistic Processing Unit) local store on Cell, when the main kernel was using 4K pages. This was useful at the time because distros were using 4K pages, but using 64K pages on the SPUs could reduce TLB pressure there. However these days the number of Cell users is approaching zero, and supporting this option adds unpleasant complexity to the memory management code. So drop the option, CONFIG_SPU_FS_64K_LS, and all related code.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
-
Committed by Michael Ellerman
The powerpc kernel can be built to have either a 4K PAGE_SIZE or a 64K PAGE_SIZE. However, when built with a 4K PAGE_SIZE there is an additional config option which can be enabled, PPC_HAS_HASH_64K, which means the kernel also knows how to hash a 64K page even though the base PAGE_SIZE is 4K. This is used in one obscure configuration, to support 64K pages for SPU local store on the Cell processor when the rest of the kernel is using 4K pages.

In this configuration, pte_pagesize_index() is defined to just pass through its arguments to get_slice_psize(). However pte_pagesize_index() is called for both user and kernel addresses, whereas get_slice_psize() only knows how to handle user addresses.

This has been broken forever, however until recently it happened to work. That was because in get_slice_psize() the large kernel address would cause the right shift of the slice mask to return zero. However in commit 7aa0727f ("powerpc/mm: Increase the slice range to 64TB"), the get_slice_psize() code was changed so that instead of a right shift we do an array lookup based on the address. When passed a kernel address this means we index way off the end of the slice array and return random junk. That is only fatal if we happen to hit something non-zero, but when we do return a non-zero value we confuse the MMU code and eventually cause a check stop.

This fix is ugly, but simple. When we're called for a kernel address we return 4K, which is always correct in this configuration; otherwise we use the slice mask.

Fixes: 7aa0727f ("powerpc/mm: Increase the slice range to 64TB")
Reported-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
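The shape of the fix, as a hedged illustration (the exact kernel-address test used in the real patch may differ): kernel addresses always hash as 4K in this configuration, while user addresses keep going through the slice mask.

    /* illustrative only: PAGE_OFFSET as the boundary is an assumption */
    #define pte_pagesize_index(mm, addr, pte)                              \
        (((addr) >= PAGE_OFFSET) ? MMU_PAGE_4K : get_slice_psize(mm, addr))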
-
Committed by Jaiprakash Singh
The IFC NAND chip select is wrongly mapped to 2 in the reg property of the NAND node. Due to this, the kernel is not able to probe the NAND flash. Set the chip select to 1 in the reg property.

Signed-off-by: Jaiprakash Singh <b44839@freescale.com>
Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Wang Dongsheng
Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Priyanka Jain
T1040D4RDB/T1042D4RDB are Freescale reference design boards which support the T1040/T1042 QorIQ Power Architecture processors respectively.

T1040D4RDB/T1042D4RDB board overview:
- SERDES connections, 8 lanes supporting: PCI, SGMII, SATA 2.0, QSGMII (only for T1040D4RDB)
- DDR controller: supports rates of up to 1600 MHz data-rate; supports one DDR4 UDIMM
- IFC/Local bus: NAND flash: 1GB 8-bit NAND flash; NOR: 128MB 16-bit NOR flash
- Ethernet: two on-board RGMII 10/100/1G Ethernet ports; PHY #0 remains powered up during deep-sleep
- CPLD
- Clocks: system and DDR clock (SYSCLK, "DDRCLK"); SERDES clocks
- Power supplies
- USB: supports two USB 2.0 ports with integrated PHYs; two type A ports with 5V@1.5A per port
- SDHC: SDHC/SDXC connector
- SPI: on-board 64MB SPI flash
- I2C: devices connected: EEPROM, thermal monitor, VID controller
- Other IO: two serial ports; ProfiBus port

Add support for the T1040D4RDB/T1042D4RDB boards:
- add device tree
- add entry in corenet_generic.c

Signed-off-by: Priyanka Jain <Priyanka.Jain@freescale.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-