- 26 Mar 2008, 15 commits
-
-
Committed by Josh Boyer
The AMCC 440EP Yosemite board is very similar to the original AMCC Bamboo board. This adds a YOSEMITE option to Kconfig and reuses the existing Bamboo board support in the kernel.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
Committed by Stefan Roese
Canyonlands is the AMCC 460EX eval board, featuring nearly all of the 460EX interfaces:
- 1 * PCI (max 66MHz), 2 * PCIe (one 4-lane, one 1-lane)
- 2 * GBit Ethernet with TCP/IP acceleration
- USB 2.0 Host/Device OTG and Host interface
- SATA port
Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
Committed by Nathan Lynch
scanlog_init() could use some love.
* properly return -ENODEV if this system doesn't support scan-log-dump
* don't printk if scan-log-dump is not present; only older systems have it
* convert from create_proc_entry() to the preferred proc_create()
* allocate a zeroed data buffer
* fix a potential memory leak of ent->data on a failed create_proc_entry()
* simplify control flow
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
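A minimal sketch of what such a conversion typically looks like, assuming the 2.6.25-era proc_create() and RTAS APIs; the buffer, path, and fops names are illustrative, not the actual patch:

    #include <linux/proc_fs.h>
    #include <linux/slab.h>
    #include <asm/rtas.h>

    static void *scanlog_buffer;            /* illustrative name */

    static int __init scanlog_init(void)
    {
            struct proc_dir_entry *ent;

            /* bail out quietly on systems without scan-log-dump */
            if (rtas_token("ibm,scan-log-dump") == RTAS_UNKNOWN_SERVICE)
                    return -ENODEV;

            scanlog_buffer = kzalloc(PAGE_SIZE, GFP_KERNEL); /* zeroed */
            if (!scanlog_buffer)
                    return -ENOMEM;

            /* proc_create() attaches the fops atomically, unlike the
             * old create_proc_entry() two-step */
            ent = proc_create("powerpc/rtas/scan-log-dump", S_IRUSR,
                              NULL, &scanlog_fops); /* fops assumed */
            if (!ent) {
                    kfree(scanlog_buffer);  /* no leak on failure */
                    return -ENOMEM;
            }
            return 0;
    }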
-
Committed by Manish Ahuja
This adds /sys/kernel/phyp_dump_active so that kdump init scripts can look for it and take appropriate action if the file is found. The file is only created when phyp_dump has been registered.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
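A sketch of one common way such a flag file is exposed, assuming a plain kobj_attribute hung off kernel_kobj (which backs /sys/kernel); treat the names as assumptions rather than the actual patch:

    #include <linux/kobject.h>
    #include <linux/sysfs.h>

    static ssize_t phyp_dump_active_show(struct kobject *kobj,
                                         struct kobj_attribute *attr,
                                         char *buf)
    {
            return sprintf(buf, "1\n"); /* presence already means active */
    }

    static struct kobj_attribute pda_attr = __ATTR_RO(phyp_dump_active);

    static int __init phyp_dump_sysfs_init(void)
    {
            /* only called once the dump has been registered, so the
             * file's mere existence is the signal kdump scripts test */
            return sysfs_create_file(kernel_kobj, &pda_attr.attr);
    }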
-
Committed by Manish Ahuja
This adds a kernel command line option "phyp_dump", which takes a 0/1 value for disabling/enabling phyp_dump at boot time. Kdump can pass phyp_dump=0 on the command line to disable phyp_dump during boot when enabling itself. This ensures only one dumping mechanism is active at any given time.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
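The usual shape of such a boot option, as a hedged sketch using the standard early_param() hook; the variable name is illustrative:

    #include <linux/init.h>

    static int phyp_dump_on_boot = 1;    /* assumed default: enabled */

    static int __init early_phyp_dump(char *p)
    {
            /* phyp_dump=0 disables; anything else leaves it enabled */
            if (p && *p == '0')
                    phyp_dump_on_boot = 0;
            return 0;
    }
    early_param("phyp_dump", early_phyp_dump);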
-
Committed by Manish Ahuja
This tracks the amount of memory freed. For now it does a rudimentary calculation of the ranges freed; the idea is to keep things simple at the external shell script level and send in large chunks.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Manish Ahuja
This adds routines to:
a. invalidate the dump
b. calculate the region that is reserved and needs to be freed
This is exported through the sysfs interface. Unregister has been removed for now as it wasn't being used.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Manish Ahuja
Provide some basic debugging support.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Manish Ahuja
Set up the actual dump header and register it with the hypervisor.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Manish Ahuja
Check to see if there actually is data from a previously crashed kernel waiting. If so, allow user-space tools to grab the data (by reading /proc/kcore). When user-space finishes dumping a section, it must release that memory by writing to sysfs. For example:
    echo "0x40000000 0x10000000" > /sys/kernel/release_region
will release 256MB starting at 1GB. The released memory becomes free for general use.
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
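A rough sketch of how the store side of such an attribute could parse that "<start> <size>" pair; the freeing helper at the end is hypothetical:

    #include <linux/kernel.h>
    #include <linux/kobject.h>

    static ssize_t release_region_store(struct kobject *kobj,
                                        struct kobj_attribute *attr,
                                        const char *buf, size_t count)
    {
            unsigned long long start, size;
            char *endp;

            start = simple_strtoull(buf, &endp, 0);
            if (*endp != ' ')
                    return -EINVAL;
            size = simple_strtoull(endp + 1, NULL, 0);

            /* "0x40000000 0x10000000" -> free 256MB at the 1GB mark */
            release_dump_range(start, size); /* hypothetical helper */
            return count;
    }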
-
Committed by Manish Ahuja
Initial patch for reserving memory in early boot, and freeing it later. If the previous boot had ended with a crash, the reserved memory would contain a copy of the crashed kernel's data.
Signed-off-by: Manish Ahuja <mahuja@us.ibm.com>
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
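A minimal sketch of the early-boot side, assuming the lmb allocator powerpc used at the time (the precursor of memblock); the base, size, and header location are illustrative assumptions:

    #include <asm/lmb.h>   /* header location varied by kernel version */

    /* assumed: the region the hypervisor preserved across the crash */
    #define PHYP_DUMP_BASE  0x0UL
    #define PHYP_DUMP_SIZE  (256UL << 20)

    void __init phyp_dump_reserve_mem(void)
    {
            /* keep the crashed kernel's image away from the allocator
             * until user space has copied it out and released it */
            lmb_reserve(PHYP_DUMP_BASE, PHYP_DUMP_SIZE);
    }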
-
Committed by S.Çağlar Onur
The functions time_before, time_before_eq, time_after, and time_after_eq are more robust for comparing jiffies against other values. This converts the code to use the time_after() macro, defined in linux/jiffies.h, which deals with wrapping correctly.
Signed-off-by: S.Çağlar Onur <caglar@pardus.org.tr>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
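A self-contained illustration of why the macro is the robust choice (a property of the jiffies API, not specific to this patch): a direct comparison misbehaves when the counter wraps, while time_after() compares via signed subtraction:

    #include <linux/jiffies.h>

    static int waited_too_long(unsigned long timeout)
    {
            /* WRONG across a wrap:  return jiffies > timeout;       */
            /* right: signed-subtraction comparison handles the wrap */
            return time_after(jiffies, timeout);
    }

    /* typical use: unsigned long timeout = jiffies + HZ; (one second) */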
-
Committed by Tony Breeds
The hypervisor can look at the value in the wait_state_cycles field of the VPA for an estimate of how busy dedicated processors are. Currently, as the kernel never touches this field, we appear to be 100% busy. This records the duration the kernel spends in powersave and passes that to the hypervisor, providing a reasonable indication of utilisation.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
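A sketch of the idea, assuming the era's lppaca accessors; the idle-entry call is a placeholder for whatever the power-save path actually does:

    #include <asm/lppaca.h>
    #include <asm/time.h>

    static void dedicated_idle_sleep(void)
    {
            u64 start = mftb();     /* timebase before powersave */

            enter_powersave();      /* placeholder for doze/nap */

            /* credit the idle time so the HV no longer sees 100% busy */
            get_lppaca()->wait_state_cycles += mftb() - start;
    }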
-
Committed by Nathan Lynch
This function has been a no-op for about 18 months; it's there in the history should anyone need to resurrect it.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Nathan Lynch
Prevailing practice for define_machine() in powerpc is to use the platform name when the platform has only one define_machine() statement, but maple uses "maple_md". This caused me some head-scratching when writing some new code that uses machine_is(maple). Use "maple" instead of "maple_md". There should not be any behavioral change -- fixup_maple_ide() calls machine_is(maple), but the body of that function is ifdef'd out.
Signed-off-by: Nathan Lynch <ntl@pobox.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
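For readers unfamiliar with the convention, a sketch of how the identifier lines up; the callbacks shown are illustrative:

    /* define_machine(maple) creates "mach_maple", which is exactly
     * the symbol that machine_is(maple) compares against */
    define_machine(maple) {
            .name        = "Maple",
            .probe       = maple_probe,        /* illustrative */
            .setup_arch  = maple_setup_arch,   /* illustrative */
    };

    /* elsewhere this now matches, with no "maple_md" mismatch: */
    if (machine_is(maple))
            apply_maple_quirk();               /* hypothetical */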
-
- 20 Mar 2008, 2 commits
-
-
Committed by Michael Ellerman
The PCI bridge representing the PCIe root complex on Axon contains device BARs for a memory range and ROM that define inbound accesses. This confuses the kernel resource management code -- the resources need to be hidden when Axon is a host bridge.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Michael Ellerman
The cell IOMMU code that parses the dma-ranges properties, used for the fixed mapping, was broken in two ways for some devices. First, it didn't cope with empty dma-ranges properties. An empty property implies no translation, so it can be safely skipped. The code also wrongly assumed it would be looking at PCI devices, and hard-coded the number of address and size cells.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
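A sketch of both fixes using the standard OF helpers; the range walker at the end is hypothetical:

    #include <asm/prom.h>

    static int parse_dma_ranges(struct device_node *np)
    {
            const u32 *ranges;
            int len;

            ranges = of_get_property(np, "dma-ranges", &len);
            if (!ranges)
                    return -ENODEV;
            if (len == 0)
                    return 0;  /* empty: no translation, safe to skip */

            /* take cell counts from the bus node rather than assuming
             * PCI's fixed layout */
            return walk_dma_ranges(ranges, len, of_n_addr_cells(np),
                                   of_n_size_cells(np)); /* hypothetical */
    }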
-
- 13 Mar 2008, 1 commit
-
-
Committed by Tony Breeds
When building arch/powerpc/platforms/powermac/pic.c with !CONFIG_ADB_PMU we get the following warning:
    arch/powerpc/platforms/powermac/pic.c: In function 'pmacpic_find_viaint':
    arch/powerpc/platforms/powermac/pic.c:623: warning: label 'not_found' defined but not used
This fixes it.
Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
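Schematically, the fix is to make the label's existence track the code that jumps to it; this is a sketch of the pattern, not the actual pic.c:

    static int find_viaint(void)
    {
            int viaint = -1;
    #ifdef CONFIG_ADB_PMU
            struct device_node *np = of_find_node_by_name(NULL, "via-pmu");

            if (!np)
                    goto not_found;
            viaint = irq_of_parse_and_map(np, 0);
    not_found:  /* inside the #ifdef: no "defined but not used" */
            of_node_put(np);    /* of_node_put(NULL) is safe */
    #endif
            return viaint;
    }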
-
- 11 Mar 2008, 2 commits
-
-
Committed by Jeremy Kerr
At present, we can hit the BUG_ON in __spu_update_sched_info by reading the regs file of a context between two calls to spu_run. The spu_release_saved() called by spufs_regs_read() results in the (now non-runnable) context being placed back on the run queue, so the next call to spu_run ends up in the bug condition. This change uses the SPU_SCHED_SPU_RUN flag to only reschedule a context if it's still in spu_run().
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
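The shape of the check, sketched; spu_activate() is the real queueing entry point, but the surrounding context is simplified:

    /* in spu_release_saved(): only requeue contexts that are still
     * inside spu_run(); a plain regs read must not reschedule */
    if (test_bit(SPU_SCHED_SPU_RUN, &ctx->sched_flags))
            spu_activate(ctx, 0);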
-
Committed by Jeremy Kerr
commit 4ef11014 introduced a usage of SCHED_IDLE to detect when a context is within spu_run. Instead of SCHED_IDLE (which has another meaning), add a flag to sched_flags to tell if a context should be running.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
- 10 Mar 2008, 1 commit
-
-
Committed by Andy Fleming
Not all e300 cores support the performance monitors, and the ones that don't will be confused by the mf/mtpmr instructions. This makes the support optional, so the 8349 can turn it off while the 8379 can turn it on. Sadly, those aren't config options, so it will be left to the defconfigs and the users to make that determination.
Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 07 Mar 2008, 1 commit
-
-
Committed by Li Yang
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 06 Mar 2008, 3 commits
-
-
Committed by Olof Johansson
Used to allocate functions for crypto/checksum offload.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
-
Committed by Olof Johansson
Add functions to manage the channel synchronization flags to dma_lib.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
-
Committed by Olof Johansson
Also stop both RX and TX sections before changing the configuration of the DMA device during init.
Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Jeff Garzik <jgarzik@pobox.com>
-
- 03 Mar 2008, 10 commits
-
-
Committed by Michael Ellerman
The only tricky part is that we need to adjust the PTE insertion loop to cater for holes in the page table. The PTEs for each segment start on a 4K boundary, so with 16M pages we have 16 PTEs per segment and then a gap to the next 4K page boundary. It might be possible to allocate the PTEs for each segment separately, saving the memory currently filling the gaps. However, we'd need to check that's OK with the hardware, and that it actually saves memory.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
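Spelling out the arithmetic in the message (16 PTEs per segment at 16M pages implies 256MB segments; the 8-byte PTE size is an assumption), here is a sketch of an insertion loop that steps over the gaps:

    #define SEG_PTES      16    /* 256MB segment / 16MB page */
    #define PTES_PER_4K   (4096 / sizeof(u64))  /* 512-slot grid */

    static void fill_segment(u64 *ptab, int seg, u64 base_pte)
    {
            /* each segment's PTEs start on their own 4K page, so the
             * stride is 512 slots even though only 16 are used;
             * 16 PTEs * 8 bytes = 128 bytes, the rest is the gap */
            u64 *pte = ptab + seg * PTES_PER_4K;
            int i;

            for (i = 0; i < SEG_PTES; i++)
                    pte[i] = base_pte | ((u64)i * (16UL << 20));
    }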
-
Committed by Michael Ellerman
Make some preliminary changes to cell_iommu_alloc_ptab() to allow it to take the page size as a parameter rather than assuming IOMMU_PAGE_SIZE.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Michael Ellerman
We use n_pte_pages to calculate the stride through the page tables, but we also use it to set the NPPT value in the segment table entry. That is defined as the number of 4K pages per segment, so we should calculate it as such regardless of the IOMMU page size.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Michael Ellerman
Currently the cell IOMMU code allocates the entire IOMMU page table in one contiguous chunk. This is nice and tidy, but for machines with larger amounts of RAM the page table allocation can fail because it is simply too large. So split the segment table and page table setup routine, and arrange to have the dynamic and fixed page tables allocated separately.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Michael Ellerman
There's no need to allocate the pad page unless we're going to actually use it, so move the allocation to the point where we know we will.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Michael Ellerman
The cell IOMMU code no longer needs to save the pte_offset variable separately; it is incorporated into tbl->it_offset.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Michael Ellerman
The cell IOMMU tce build and free routines use pte_offset to convert the index passed from the generic IOMMU code into a page table offset. This takes into account the SPIDER_DMA_OFFSET, which sets the top bit of every DMA address. However, it doesn't cater for the IOMMU window starting at a non-zero address, as the base of the window is not incorporated into pte_offset at all. As it turns out, tbl->it_offset already contains the value we need: it takes into account both the base of the window and pte_offset. So use it instead!
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
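The gist of the change, sketched against the generic struct iommu_table fields; the surrounding code is simplified:

    #include <asm/iommu.h>

    static u64 *io_pte_entry(struct iommu_table *tbl, long index)
    {
            u64 *ptab = (u64 *)tbl->it_base;

            /* it_offset already folds in the window base and the old
             * pte_offset, so a single subtraction replaces both */
            return ptab + (index - tbl->it_offset);
    }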
-
Committed by Michael Ellerman
It's called the fixed mapping, not the static mapping.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
Committed by Jens Osterkamp
Ulrich Weigand found back in November that the hardware watchpoints on cell were not working:
http://ozlabs.org/pipermail/linuxppc-dev/2007-November/046135.html
This patch sets them up during initialization.
Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
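A minimal sketch of arming the registers at boot; SPRN_DABR/SPRN_DABRX and the DABRX_* enables are the reg.h definitions the companion patch below exposes, while the function itself is illustrative:

    #include <asm/reg.h>

    static void __init cbe_init_debug_regs(void)
    {
            /* allow data breakpoints to fire in both kernel and user
             * state; no address is armed yet */
            mtspr(SPRN_DABRX, DABRX_KERNEL | DABRX_USER);
            mtspr(SPRN_DABR, 0);
    }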
-
Committed by Jens Osterkamp
This moves the private DABRX definitions for celleb from beat.h to reg.h to make them usable for all.
Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 29 Feb 2008, 5 commits
-
-
Committed by Andre Detsch
The spu_runcntl_RW register is restored within the spu_restore function, so at the end of spu_bind_context the SPU context is not just loaded, but running. This change corrects the state switch so the time is accounted as USER.
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
Committed by Arnd Bergmann
There is a potential race between flushes of the entire SLB in the MFC and the point where new entries are being established. The problem is that we might put an ESID entry into the MFC SLB when the VSID entry has just been cleared by the global flush. This can be circumvented by holding the register_lock throughout both the flushing and the creation of SLB entries.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
Committed by Arnd Bergmann
When we replace an SLB entry in the MFC after using up all the available entries, there is a short window in which an incorrect entry is marked as valid. The problem is that the 'valid' bit is stored in the ESID, which is always written after the VSID. Overwriting the VSID first makes the original ESID entry point to the new VSID, which means that any concurrent DMA accessing the old ESID ends up being redirected to the new virtual address. A few cycles later, we write the new ESID and everything is fine again. That race can be closed by writing a zero entry to the ESID first, which makes sure that the VSID is not accessed until we write the new ESID. Note that we don't actually need to invalidate the SLB entry using the invalidation register, which would also flush any ERAT entries for that segment, because the segment translation does not become invalid but is only removed from the SLB cache.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
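The safe ordering described above, sketched with the usual powerpc MMIO accessors against the spu_priv2 register names; treat the details (barriers, exact fields) as illustrative rather than the exact patch:

    #include <asm/io.h>
    #include <asm/spu.h>

    static void mfc_slb_replace(struct spu_priv2 __iomem *priv2,
                                int slot, u64 esid, u64 vsid)
    {
            out_be64(&priv2->slb_index_W, slot);
            eieio();
            out_be64(&priv2->slb_esid_RW, 0);    /* 1: valid bit off  */
            eieio();
            out_be64(&priv2->slb_vsid_RW, vsid); /* 2: safe, not live */
            eieio();
            out_be64(&priv2->slb_esid_RW, esid); /* 3: publish entry  */
    }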
-
Committed by Arnd Bergmann
There is a small race between the context save procedure and the SPU interrupt handling, where we expect all interrupt processing to have finished after disabling the interrupts, while an interrupt is still being processed on another CPU. The obvious fix is to call synchronize_irq() after disabling the interrupts at the start of the context save procedure, to make sure we never access the SPU any more during an ongoing save, or even after that. Thanks to Benjamin Herrenschmidt for pointing this out.
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
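The core of the fix, sketched: struct spu carries its class-interrupt numbers in an irqs[] array, while the helper name and surrounding context here are mine:

    #include <linux/interrupt.h>
    #include <asm/spu.h>

    static void quiesce_spu_irqs(struct spu *spu)
    {
            int i;

            /* the class interrupts are already masked; now wait for
             * any handler still mid-flight on another CPU to finish */
            for (i = 0; i < ARRAY_SIZE(spu->irqs); i++)
                    synchronize_irq(spu->irqs[i]);
    }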
-
Committed by Jeremy Kerr
Currently, we get the following output from sputrace:
    [5.097935954] 1606: spufs_ps_nopfn__enter (thread = 1605, spu = -1)
    [5.097958164] 1606: spufs_ps_nopfn__insert (thread = 1605, spu = 15)
    [5.097973529] 1607: spufs_ps_nopfn__enter (thread = 1605, spu = -1)
    [5.097989174] 1607: spufs_ps_nopfn__insert (thread = 1605, spu = 14)
Which leads me to believe that 160[67] is the current thread ID, and 1605 is the context backing the psmap. However, the 'current' and 'owner' tids are reversed - the 'current' tid is on the right. This change puts the current thread ID in the left-hand column instead, and renames the right-hand one to 'ctxthread'.
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-