- 20 Aug 2015, 4 commits
-
-
Committed by Wei Yang
On the powernv platform, the IOV BAR is shifted if necessary, but the log message printed when disabling VFs is not correct. This patch fixes this by printing the correct message based on the offset value. Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Andrew Donnellan
If we open a context but do not start it (either because we do not attempt to start it, or because it fails to start for some reason), we are left with a context in state OPENED. Previously, cxl_release_context() only allowed releasing contexts in state CLOSED, so attempting to release an OPENED context would fail. In particular, this bug causes available contexts to run out after some EEH failures, where drivers attempt to release contexts that have failed to start. Allow releasing contexts in any state with a value lower than STARTED, i.e. OPENED or CLOSED (we can't release a STARTED context as it's currently using the hardware, and we assume that contexts in any new states which may be added in future with a value higher than STARTED are also unsafe to release). Cc: stable@vger.kernel.org Fixes: 6f7f0b3d ("cxl: Add AFU virtual PHB and kernel API") Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
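A minimal sketch of the relaxed check described above; the state names come from the commit message, while the surrounding function body is illustrative rather than the upstream implementation:

	int cxl_release_context(struct cxl_context *ctx)
	{
		/* Anything at or beyond STARTED is (or may be) using the hardware. */
		if (ctx->status >= STARTED)
			return -EBUSY;

		/* OPENED or CLOSED contexts are safe to tear down. */
		cxl_context_free(ctx);
		return 0;
	}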
-
Committed by Javier Martinez Canillas
The I2C core always reports the MODALIAS uevent as "i2c:<client name>" regardless of whether the driver was matched using the I2C id_table or the of_match_table. So technically there's no need for a driver to export the OF table, since currently it's not used. In fact, the I2C device ID table is mandatory for I2C drivers, since an i2c_device_id is passed to the driver's probe function even if the I2C core used the OF table to match the driver. And since the I2C core uses different tables, OF-only drivers need to have duplicated data that has to be kept in sync, and the device node's compatible manufacturer prefix is stripped when reporting the MODALIAS. To avoid the above, the I2C core behavior may be changed in the future to not require an I2C device table for OF-only drivers and to report the OF module alias. So it's better to also export the OF table, to prevent breaking module autoloading if that happens. Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
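A hedged sketch of what "exporting the OF table" looks like for a typical I2C driver; the driver name "foo_rtc" and compatible string "acme,foo-rtc" are placeholders for illustration and are not from the patch:

	#include <linux/i2c.h>
	#include <linux/module.h>
	#include <linux/of.h>

	/* Hypothetical driver: "acme,foo-rtc" and "foo_rtc" are placeholder names. */
	static int foo_rtc_probe(struct i2c_client *client,
				 const struct i2c_device_id *id)
	{
		dev_info(&client->dev, "foo_rtc bound\n");
		return 0;
	}

	static const struct i2c_device_id foo_rtc_i2c_id[] = {
		{ "foo_rtc", 0 },
		{ }
	};
	MODULE_DEVICE_TABLE(i2c, foo_rtc_i2c_id);

	/* Exporting the OF table generates of:N*T*Cacme,foo-rtc module aliases, so
	 * autoloading keeps working if the I2C core starts reporting OF aliases. */
	static const struct of_device_id foo_rtc_of_match[] = {
		{ .compatible = "acme,foo-rtc" },
		{ }
	};
	MODULE_DEVICE_TABLE(of, foo_rtc_of_match);

	static struct i2c_driver foo_rtc_driver = {
		.driver = {
			.name		= "foo_rtc",
			.of_match_table	= foo_rtc_of_match,
		},
		.probe		= foo_rtc_probe,
		.id_table	= foo_rtc_i2c_id,
	};
	module_i2c_driver(foo_rtc_driver);

	MODULE_LICENSE("GPL");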
-
Committed by Javier Martinez Canillas
The I2C core always reports the MODALIAS uevent as "i2c:<client name>" regardless of whether the driver was matched using the I2C id_table or the of_match_table. So the driver needs to export the I2C table, and this needs to be built into the module, or udev won't have the necessary information to autoload the correct module when the device is added. Signed-off-by: Javier Martinez Canillas <javier@osg.samsung.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 19 Aug 2015, 2 commits
-
-
Committed by Gerhard Sittig
The PPC_MPC512x config automatically selected the USB_EHCI_BIG_ENDIAN_* switches, which made Kconfig warn about "unmet direct dependencies": scripts/kconfig/conf --silentoldconfig Kconfig warning: (PPC_MPC512x && 440EPX) selects USB_EHCI_BIG_ENDIAN_DESC which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD) warning: (PPC_MPC512x && PPC_PS3 && PPC_CELLEB && 440EPX) selects USB_EHCI_BIG_ENDIAN_MMIO which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD) warning: (PPC_MPC512x && 440EPX) selects USB_EHCI_BIG_ENDIAN_DESC which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD) warning: (PPC_MPC512x && PPC_PS3 && PPC_CELLEB && 440EPX) selects USB_EHCI_BIG_ENDIAN_MMIO which has unmet direct dependencies (USB_SUPPORT && USB && USB_EHCI_HCD) Make the selected entries additionally depend on USB_EHCI_HCD, which silences the warning. Signed-off-by: Gerhard Sittig <gsi@denx.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Hari Bathini
Pstore only supports one backend at a time. The preferred pstore backend is set by passing the pstore.backend=<name> argument to the kernel at boot time. Currently, while trying to register with pstore, nvram throws an error message even when "pstore.backend != nvram", which is unnecessary. This patch removes the error message in the "pstore.backend != nvram" case. Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
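One way to express the idea, as a hedged sketch rather than the actual patch: pstore_register() refuses a backend that doesn't match the pstore.backend= parameter (returning -EPERM in mainline kernels of that era), so the nvram registration helper can stay quiet in that case and only complain about real failures.

	static int nvram_pstore_init(void)
	{
		int rc;

		/* ... backend/buffer setup elided ... */

		rc = pstore_register(&nvram_pstore_info);
		/*
		 * -EPERM just means another backend was preferred via
		 * pstore.backend=, which is not worth an error message.
		 * (Sketch: the exact condition in the upstream fix may differ.)
		 */
		if (rc && (rc != -EPERM))
			pr_err("nvram: pstore_register() failed, rc = %d\n", rc);

		return rc;
	}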
-
- 18 Aug 2015, 13 commits
-
-
Committed by Ian Munsie
Userspace programs using cxl currently have to use two strategies for dealing with MMIO errors simultaneously. They have to check every read for a return of all Fs, in case the adapter has gone away and the kernel has not yet noticed, and they have to deal with SIGBUS in case the kernel has already noticed, invalidated the mapping and marked the context as failed. In order to simplify things, this patch adds an alternative approach where the kernel will return a page filled with Fs instead of delivering a SIGBUS. This allows userspace to deal with only one of these two error paths, and is intended for use in libraries that use cxl transparently and may not be able to safely install a signal handler. This approach will only work if certain constraints are met. Namely, if the application is both reading and writing to an address in the problem state area it cannot assume that a non-FF read is OK, as it may just be reading out a value it has previously written. Further, since only one page is used per context, a write to a given offset would be visible when reading the same offset from a different page in the mapping (this only applies within a single context, not between contexts). An application could deal with this by e.g. making sure it also reads from a read-only offset after any reads to a read/write offset. Due to these constraints, this functionality must be explicitly requested by userspace when starting the context, by passing in the CXL_START_WORK_ERR_FF flag. Signed-off-by: Ian Munsie <imunsie@au1.ibm.com> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
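A hedged userspace sketch of opting into this behaviour through the cxl character device; the device path is an example and the struct/ioctl usage is recalled from the cxl uapi rather than quoted from it, so treat the details as assumptions:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <misc/cxl.h>		/* uapi header with CXL_IOCTL_* and flags */

	int start_ff_context(void)
	{
		/* Example device node; real AFU paths vary per card/slice. */
		int fd = open("/dev/cxl/afu0.0d", O_RDWR);
		if (fd < 0)
			return -1;

		struct cxl_ioctl_start_work work;
		memset(&work, 0, sizeof(work));
		/* Ask the kernel to back a failed context with an all-Fs page
		 * instead of delivering SIGBUS. */
		work.flags = CXL_START_WORK_ERR_FF;

		if (ioctl(fd, CXL_IOCTL_START_WORK, &work) < 0) {
			perror("CXL_IOCTL_START_WORK");
			return -1;
		}
		return fd;
	}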
-
Committed by Andrzej Hajda
The patch was generated using the fixed coccinelle semantic patch scripts/coccinelle/api/memdup.cocci [1]. [1]: http://permalink.gmane.org/gmane.linux.kernel/2014320 Signed-off-by: Andrzej Hajda <a.hajda@samsung.com> Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
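The memdup.cocci rule replaces an open-coded allocate-and-copy with kmemdup(). A generic before/after sketch (the variable names are illustrative, not taken from the converted powerpc file):

	/* Before: open-coded duplicate of a buffer. */
	new_prop = kmalloc(len, GFP_KERNEL);
	if (!new_prop)
		return -ENOMEM;
	memcpy(new_prop, old_prop, len);

	/* After: kmemdup() does the allocation and the copy in one step. */
	new_prop = kmemdup(old_prop, len, GFP_KERNEL);
	if (!new_prop)
		return -ENOMEM;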
-
Committed by Andrzej Hajda
The patch was generated using the fixed coccinelle semantic patch scripts/coccinelle/api/memdup.cocci [1]. [1]: http://permalink.gmane.org/gmane.linux.kernel/2014320 Signed-off-by: Andrzej Hajda <a.hajda@samsung.com> Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
pcibios_set_pcie_reset_state() can be called to complete a reset request when passing through a PCI device; the flag EEH_PE_ISOLATED is set before saving the PCI config space. On some Broadcom adapters, EEH_PE_CFG_BLOCKED is automatically set when the flag EEH_PE_ISOLATED is marked. This caused bogus data to be saved from the PCI config space, which was then restored to the PCI adapter after the reset. Eventually, the hardware can't work with corrupted data in its PCI config space. The patch fixes the issue with eeh_pe_state_mark_no_cfg(), which doesn't set EEH_PE_CFG_BLOCKED when seeing EEH_PE_ISOLATED on the PE, in order to avoid bogus data being saved and restored to the PCI config space. Reported-by: Rajanikanth H. Adaveeshaiah <rajanikanth.ha@in.ibm.com> Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gavin Shan
This adds include/uapi/asm/eeh.h to kbuild so that the header file will be exported automatically with the command below. The header file was added by commit ed3e81ff ("powerpc/eeh: Move PE state constants around"). make INSTALL_HDR_PATH=/tmp/headers \ SRCARCH=powerpc headers_install Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Vaibhav Jain
A working rtc kernel driver is needed so that hwclock can synchronize the system clock to the rtc during shutdown/boot. We already have a powernv platform rtc driver located at drivers/rtc/rtc-opal.c. However, it depends on CONFIG_RTC_CLASS, which is disabled by default, hence the driver isn't enabled and isn't compiled for the powernv kernel. We fix this by enabling rtc class support in the pseries defconfig, which enables this driver and compiles it into the pseries kernel. In case CONFIG_PPC_POWERNV is not enabled, we fall back to the 'Generic RTC support' driver, which emulates the legacy 'PC RTC driver'. Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Nikunj A Dadhania
In some situations, a NUMA guest that supports the ibm,dynamic-memory-reconfiguration node will end up having flat NUMA distances between nodes. This is because of two problems in the current code. 1) Different representations of associativity lists. There is an assumption about the associativity list in initialize_distance_lookup_table(). Associativity lists have two forms: a) [cpu,memory]@x/ibm,associativity has the format: <N> <N integers> b) ibm,dynamic-reconfiguration-memory/ibm,associativity-lookup-arrays has the format: <M> <N> <M associativity lists each having N integers>, where M is the number of associativity lists and N is the number of entries per associativity list. Fix initialize_distance_lookup_table() so that it does not assume case (a), and update the caller to skip the length field before passing the associativity list. 2) Distance table not getting updated from the drconf path. The node distance table will not get initialized in certain cases, as the ibm,dynamic-reconfiguration-memory path does not initialize the lookup table. Call initialize_distance_lookup_table() from the drconf path with the appropriate associativity list. Reported-by: Bharata B Rao <bharata@linux.vnet.ibm.com> Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com> Acked-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Andrew Donnellan
Simplify the dma_get_required_mask call chain by moving it from pnv_phb to pci_controller_ops, similar to commit 763d2d8d ("powerpc/powernv: Move dma_set_mask from pnv_phb to pci_controller_ops"). Previous call chain: 0) call dma_get_required_mask() (kernel/dma.c) 1) call ppc_md.dma_get_required_mask, if it exists. On powernv, that points to pnv_dma_get_required_mask() (platforms/powernv/setup.c) 2) device is PCI, therefore call pnv_pci_dma_get_required_mask() (platforms/powernv/pci.c) 3) call phb->dma_get_required_mask if it exists 4) it only exists in the ioda case, where it points to pnv_pci_ioda_dma_get_required_mask() (platforms/powernv/pci-ioda.c) New call chain: 0) call dma_get_required_mask() (kernel/dma.c) 1) device is PCI, therefore call pci_controller_ops.dma_get_required_mask if it exists 2) in the ioda case, that points to pnv_pci_ioda_dma_get_required_mask() (platforms/powernv/pci-ioda.c) In the p5ioc2 case, the call chain remains the same - dma_get_required_mask() does not find either a ppc_md call or a pci_controller_ops call, so it calls __dma_get_required_mask(). Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Reviewed-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
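A sketch of what the new dispatch in dma_get_required_mask() looks like, reconstructed from the call chain above rather than copied from the patch (treat the exact fallback helper and field layout as assumptions):

	u64 dma_get_required_mask(struct device *dev)
	{
		/* A platform-wide ppc_md hook still wins if one is provided. */
		if (ppc_md.dma_get_required_mask)
			return ppc_md.dma_get_required_mask(dev);

		/* PCI devices ask their host bridge's controller ops. */
		if (dev_is_pci(dev)) {
			struct pci_dev *pdev = to_pci_dev(dev);
			struct pci_controller *phb = pci_bus_to_host(pdev->bus);

			if (phb->controller_ops.dma_get_required_mask)
				return phb->controller_ops.dma_get_required_mask(pdev);
		}

		/* p5ioc2 and everything else: generic calculation. */
		return __dma_get_required_mask(dev);
	}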
-
Committed by Michael Ellerman
The relation between CONFIG_PPC_HAS_HASH_64K and CONFIG_PPC_64K_PAGES is painfully complicated. But if we rearrange it enough we can see that PPC_HAS_HASH_64K essentially depends on PPC_STD_MMU_64 && PPC_64K_PAGES. We can then notice that PPC_HAS_HASH_64K is used in files that are only built for PPC_STD_MMU_64, meaning it's equivalent to PPC_64K_PAGES. So replace all uses and drop it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
-
Committed by Michael Ellerman
For config options with only a single value, guarding the single value with 'if' is the same as adding a 'depends' statement, and it's more standard to just use 'depends'. And if the option has both an 'if' guard and a 'depends', we can collapse them into a single 'depends' by combining them with &&. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
-
Committed by Michael Ellerman
Now that support for 64K pages with a 4K kernel is removed, this code is unreachable. CONFIG_PPC_HAS_HASH_64K can only be true when CONFIG_PPC_64K_PAGES is also true. But when CONFIG_PPC_64K_PAGES is true we include pte-hash64.h, which includes pte-hash64-64k.h, which defines both pte_pagesize_index() and, crucially, __real_pte, which means this definition can never be used. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
-
Committed by Michael Ellerman
Back in the olden days we added support for using 64K pages to map the SPU (Synergistic Processing Unit) local store on Cell, when the main kernel was using 4K pages. This was useful at the time because distros were using 4K pages, but using 64K pages on the SPUs could reduce TLB pressure there. However these days the number of Cell users is approaching zero, and supporting this option adds unpleasant complexity to the memory management code. So drop the option, CONFIG_SPU_FS_64K_LS, and all related code. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Jeremy Kerr <jk@ozlabs.org>
-
Committed by Michael Ellerman
The powerpc kernel can be built to have either a 4K PAGE_SIZE or a 64K PAGE_SIZE. However, when built with a 4K PAGE_SIZE there is an additional config option which can be enabled, PPC_HAS_HASH_64K, which means the kernel also knows how to hash a 64K page even though the base PAGE_SIZE is 4K. This is used in one obscure configuration, to support 64K pages for SPU local store on the Cell processor when the rest of the kernel is using 4K pages. In this configuration, pte_pagesize_index() is defined to just pass through its arguments to get_slice_psize(). However pte_pagesize_index() is called for both user and kernel addresses, whereas get_slice_psize() only knows how to handle user addresses. This has been broken forever, however until recently it happened to work. That was because in get_slice_psize() the large kernel address would cause the right shift of the slice mask to return zero. However, in commit 7aa0727f ("powerpc/mm: Increase the slice range to 64TB"), the get_slice_psize() code was changed so that instead of a right shift we do an array lookup based on the address. When passed a kernel address this means we index way off the end of the slice array and return random junk. That is only fatal if we happen to hit something non-zero, but when we do return a non-zero value we confuse the MMU code and eventually cause a checkstop. This fix is ugly, but simple: when we're called for a kernel address we return 4K, which is always correct in this configuration, otherwise we use the slice mask. Fixes: 7aa0727f ("powerpc/mm: Increase the slice range to 64TB") Reported-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
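A hedged sketch of the shape of the fix, reconstructed from the description above (the macro form and the KERNELBASE test are assumptions about the patch, not a quote of it):

	/*
	 * Sketch: kernel addresses are always hashed as 4K in this configuration,
	 * so don't feed them to get_slice_psize(), which only understands the
	 * user address space slices.
	 */
	#define pte_pagesize_index(mm, addr, pte)				\
		(((addr) >= KERNELBASE) ? MMU_PAGE_4K : get_slice_psize(mm, addr))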
-
- 17 Aug 2015, 3 commits
-
-
Committed by Michael Ellerman
We forgot to install the tempfile, so when the selftests are installed and then run, the subpage_prot_file test fails. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Vaibhav Jain
This patch plugs the leak of irq_bitmap, allocated as part of the initialization of the cxl_context struct during the call to afu_allocate_irqs. The bitmap is now released during the call to afu_release_irqs. Reported-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com> Signed-off-by: Vaibhav Jain <vaibhav@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
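A minimal sketch of the kind of cleanup described; the field and function names come from the commit message, while the exact body of the upstream fix may differ:

	void afu_release_irqs(struct cxl_context *ctx, void *cookie)
	{
		/* ... release of the individual hwirq ranges elided ... */

		/* Free the bitmap allocated by afu_allocate_irqs(), and clear
		 * the pointer so a second release is harmless. */
		kfree(ctx->irq_bitmap);
		ctx->irq_bitmap = NULL;
		ctx->irq_count = 0;
	}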
-
Committed by Daniel Axtens
CONFIG_CXL_EEH is for CXL's EEH-related code. Other drivers can depend on or #ifdef on this symbol to configure PERST behaviour, allowing CXL to participate in the EEH process. Reviewed-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 14 Aug 2015, 12 commits
-
-
Committed by Daniel Axtens
EEH (Enhanced Error Handling) allows a driver to recover from the temporary failure of an attached PCI card. Enable basic CXL support for EEH. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Provide a kernel API and a sysfs entry which allow a user to specify that when a card is PERSTed, its image will stay the same, allowing it to participate in EEH. cxl_reset is used to reflash the card. In that case, we cannot safely assert that the image will not change, so disallow cxl_reset if the flag is set. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
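A hedged sketch of the cxl_reset guard this implies; the perst_same_image field and the sysfs attribute name used in the message string are assumptions for illustration and may not match the driver's actual names:

	int cxl_reset(struct cxl *adapter)
	{
		struct pci_dev *dev = to_pci_dev(adapter->dev.parent);

		/* If userspace has promised the image survives PERST (so the card
		 * can take part in EEH), reflashing via reset is no longer safe. */
		if (adapter->perst_same_image) {
			dev_warn(&dev->dev,
				 "cxl: refusing reset while the PERST image is expected to stay the same\n");
			return -EINVAL;
		}

		/* ... issue the PERST / reflash sequence ... */
		return 0;
	}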
-
Committed by Daniel Axtens
If the driver doesn't participate in EEH, the AFUs will be removed by cxl_remove, which will be invoked by EEH. If the driver does participate in EEH, the vPHB needs to stick around so that it can participate. In both cases, we shouldn't remove the AFU/vPHB. Reviewed-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
As with an adapter, some aspects of initialisation are done only once in the lifetime of an AFU: for example, allocating memory, or setting up sysfs/debugfs files. However, we may want to be able to do some parts of the initialisation multiple times: for example, in error recovery we want to be able to tear down and then re-map IO memory and IRQs. Therefore, refactor AFU init/teardown as follows. - Create two new functions: 'cxl_configure_afu', and its pair 'cxl_deconfigure_afu'. As with the adapter functions, these (de)configure resources that do not need to last the entire lifetime of the AFU. - Allocating and releasing memory remain the task of 'cxl_alloc_afu' and 'cxl_release_afu'. - Once-only functions that do not involve allocating/releasing memory stay in the overarching 'cxl_init_afu'/'cxl_remove_afu' pair. However, the task of picking an AFU mode and activating it has been broken out. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Some aspects of initialisation are done only once in the lifetime of an adapter: for example, allocating memory for the adapter, allocating the adapter number, or setting up sysfs/debugfs files. However, we may want to be able to do some parts of the initialisation multiple times: for example, in error recovery we want to be able to tear down and then re-map IO memory and IRQs. Therefore, refactor CXL init/teardown as follows. - Keep the overarching functions 'cxl_init_adapter' and its pair, 'cxl_remove_adapter'. - Move all 'once only' allocation/freeing steps to the existing 'cxl_alloc_adapter' function, and its pair 'cxl_release_adapter' (this involves moving allocation of the adapter number out of cxl_init_adapter). - Create two new functions: 'cxl_configure_adapter', and its pair 'cxl_deconfigure_adapter'. These two functions 'wire up' the hardware --- they (de)configure resources that do not need to last the entire lifetime of the adapter. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
- MMIO pointer unmapping is guarded by a null pointer check. However, iounmap doesn't null the pointer, it just invalidates it. Therefore, explicitly null the pointer after unmapping. - afu_desc_mmio also needs to be unmapped. - PCI regions are allocated in cxl_map_adapter_regs. Therefore they should be released in unmap, not elsewhere. Acked-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
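A small sketch of the first two points; afu_desc_mmio is named in the commit message, while p2n_mmio and the function name here are assumed examples rather than the upstream code:

	static void cxl_unmap_slice_regs(struct cxl_afu *afu)
	{
		if (afu->p2n_mmio) {			/* assumed field name */
			iounmap(afu->p2n_mmio);
			afu->p2n_mmio = NULL;		/* iounmap() doesn't do this */
		}
		if (afu->afu_desc_mmio) {
			iounmap(afu->afu_desc_mmio);	/* previously never unmapped */
			afu->afu_desc_mmio = NULL;
		}
	}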
-
Committed by Daniel Axtens
Check whether an IRQ is mapped before releasing it. This will simplify future EEH code by allowing unconditional unmapping of IRQs. Acked-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
Previously the SPA was allocated and freed upon entering and leaving AFU-directed mode. This causes some issues for error recovery - contexts hold a pointer inside the SPA, and they may persist after the AFU has been detached. We would ideally like to allocate the SPA when the AFU is allocated, and release it only when the AFU is released. However, we don't know how big the SPA needs to be until we read the AFU descriptor. Therefore, restructure the code: - Allocate the SPA only once, on the first attach. - Release the SPA only when the entire AFU is being released (not detached). Guard the release with a NULL check, so we don't free it if it was never allocated (e.g. dedicated mode). Acked-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
If the PCI channel has gone down, don't attempt to poke the hardware. We need to guard every time cxl_whatever_(read|write) is called, because a call to those functions will dereference an offset into an mmio register, and the mmio mappings get invalidated in the EEH teardown. Check in the read/write functions in the header. We give them the same semantics as usual PCI operations: - a write to a channel that is down is ignored. - a read from a channel that is down returns all Fs. Also, we try to access the MMIO space of a vPHB device as part of the PCI disable path. Because that's a read that bypasses most of our usual checks, we handle it explicitly. As far as user-visible warnings go: - Check the link state in file ops, return -EIO if down. - Be reasonably quiet if there's an error in a teardown path, or when we already know the hardware is going down. - Throw a big WARN if someone tries to start a CXL operation while the card is down. This gives a useful stacktrace for debugging whatever is doing that. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
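A hedged sketch of what such a guarded accessor pair can look like; the helper name cxl_adapter_link_ok() and the register-space types are assumptions drawn from this series' description, not quoted source:

	/* Writes to a downed channel are ignored, like ordinary PCI MMIO. */
	static inline void cxl_p1_write(struct cxl *adapter, cxl_p1_reg_t reg, u64 val)
	{
		if (likely(cxl_adapter_link_ok(adapter)))
			out_be64(_cxl_p1_addr(adapter, reg), val);
	}

	/* Reads from a downed channel return all Fs. */
	static inline u64 cxl_p1_read(struct cxl *adapter, cxl_p1_reg_t reg)
	{
		if (likely(cxl_adapter_link_ok(adapter)))
			return in_be64(_cxl_p1_addr(adapter, reg));
		return ~0ULL;
	}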
-
Committed by Daniel Axtens
We're about to make these more complex, so make them functions first. Signed-off-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Daniel Axtens
In the complete hotplug case, EEH PEs are supposed to be released and set to NULL. Normally, this is done by eeh_remove_device(), which is called from pcibios_release_device(). However, if something is holding a kref to the device, it will not be released, and the PE will remain. eeh_add_device_late() has a check for this which will explicitly destroy the PE in this case. This check in eeh_add_device_late() occurs after a call to eeh_ops->probe(). On PowerNV, probe is a pointer to pnv_eeh_probe(), which will exit without probing if there is an existing PE. This means that on PowerNV, devices with outstanding krefs will not be rediscovered by EEH correctly after a complete hotplug. This is affecting CXL (CAPI) devices in the field. Put the probe after the kref check so that the PE is destroyed and affected devices are correctly rediscovered by EEH. Fixes: d91dafc0 ("powerpc/eeh: Delay probing EEH device during hotplug") Cc: stable@vger.kernel.org Cc: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Daniel Axtens <dja@axtens.net> Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Gautham R. Shenoy
Section 3.7 of Version 1.2 of the Power8 Processor User's Manual prescribes that updates to HID0 be preceded by a SYNC instruction and followed by an ISYNC instruction (Page 91). Create an inline function named update_power8_hid0() which follows this recipe, and invoke it from the static split-core path. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Reviewed-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Tested-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
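The prescribed sequence is simple enough to sketch directly; this follows the recipe given in the commit message (sync, mtspr to HID0, isync) but is written from that description rather than copied from the patch:

	static inline void update_power8_hid0(unsigned long hid0)
	{
		/*
		 * Power8 UM v1.2, section 3.7: an mtspr to HID0 must be
		 * preceded by a sync and followed by an isync.
		 */
		asm volatile("sync; mtspr %0,%1; isync"
			     : : "i"(SPRN_HID0), "r"(hid0));
	}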
-
- 12 Aug 2015, 6 commits
-
-
Committed by Anshuman Khandual
Replace hard-coded values with the existing DRCONF flags while processing detected LMBs from the device tree. Does not change any functionality. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
The value of 'valid' is always zero when 'esid' is zero, and if 'esid' is non-zero then the value of 'valid' is irrelevant because we are using logical or in the if expression. In fact, 'valid' can be dropped completely from dump_segments() by simply doing the check with SLB_ESID_V directly in the if. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Rewrite change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
The code to fetch the SLB size from the device tree wants to first look for "slb-size" and then, if that's not found, "ibm,slb-size". We can simplify the code by looking for the properties and then, if we find one of them, setting mmu_slb_size. We also change the function name from check_cpu_slb_size() to init_mmu_slb_size(), as the function doesn't check anything, it only initialises mmu_slb_size. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Rewrite change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
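A sketch of the simplified flattened-device-tree lookup described above; it is reconstructed from the change log, so treat the exact helper usage as an assumption rather than the upstream diff:

	static void __init init_mmu_slb_size(unsigned long node)
	{
		const __be32 *slb_size_ptr;

		/* Prefer "slb-size", fall back to the older "ibm,slb-size". */
		slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL);
		if (!slb_size_ptr)
			slb_size_ptr = of_get_flat_dt_prop(node, "ibm,slb-size", NULL);

		if (slb_size_ptr)
			mmu_slb_size = be32_to_cpup(slb_size_ptr);
	}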
-
Committed by Anshuman Khandual
This patch adds some documentation to patch_slb_encoding() explaining how it works. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Update change log and mention the signedness of the immediate] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
The SLB code uses 'slot' and 'entry' interchangeably; change it to always use 'entry'. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> [mpe: Rewrite change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch removes a redundant extern declaration of the variable 'slb_compare_rr_to_size'. It does not change any functionality. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-