- 06 Apr, 2009 22 commits
-
-
Committed by Peter Zijlstra

Put in counts to tell which ips belong to what context.

      -----
     | |  hv
     | --
  nr | |  kernel
     | --
     | |  user
      -----

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Orig-LKML-Reference: <20090402091319.493101305@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Follow the example set by powerpc and try to play nice with oprofile and the NMI watchdog.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171024.459968444@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Provide the x86 perf_callchain() implementation. Code based on the ftrace/sysprof code from Soeren Sandmann Pedersen.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Cc: Soeren Sandmann Pedersen <sandmann@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <srostedt@redhat.com>
Orig-LKML-Reference: <20090330171024.341993293@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
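In outline, such a callchain is a frame-pointer walk. A minimal sketch, assuming an illustrative stack_frame layout (the real code also separates hv/kernel/user contexts and validates every pointer before dereferencing it):

	/* Sketch only: follow the saved %ebp/%rbp chain, recording
	 * each return address until the buffer is full. */
	struct stack_frame {
		struct stack_frame *next_frame;	/* saved frame pointer */
		unsigned long return_address;	/* saved return address */
	};

	static void callchain_walk(struct stack_frame *frame,
				   unsigned long *ips, int max_entries, int *nr)
	{
		while (frame && *nr < max_entries) {
			ips[(*nr)++] = frame->return_address;
			frame = frame->next_frame;
		}
	}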
-
Committed by Peter Zijlstra

Now that Paul cleaned up the error propagation paths, pass down the x86 error as well.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.792822360@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Paul Mackerras

Impact: better error reporting

At present, if hw_perf_counter_init encounters an error, all it can do is return NULL, which causes sys_perf_counter_open to return an EINVAL error to userspace. This isn't very informative for userspace; it means that userspace can't tell the difference between "sorry, oprofile is already using the PMU", "we don't support this CPU", and "this CPU doesn't support the requested generic hardware event".

This commit uses the PTR_ERR/ERR_PTR/IS_ERR set of macros to let hw_perf_counter_init return an error code on error rather than just NULL if it wishes. If it does so, that error code will be returned from sys_perf_counter_open to userspace. If it returns NULL, an EINVAL error will be returned to userspace, as before.

This also adapts the powerpc hw_perf_counter_init to make use of this to return ENXIO, EINVAL, EBUSY, or EOPNOTSUPP as appropriate. It would be good to add extra error numbers in future to allow userspace to distinguish the various errors that are currently reported as EINVAL, i.e. irq_period < 0, too many events in a group, conflict between exclude_* settings in a group, and PMU resource conflict in a group.

[ v2: fix a bug pointed out by Corey Ashford where error returns from hw_perf_counter_init were not handled correctly in the case of raw hardware events. ]

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.682428180@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
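The ERR_PTR/IS_ERR/PTR_ERR convention works as in this minimal sketch; the function and its arguments are illustrative, not the actual kernel code:

	#include <linux/err.h>

	/* Illustrative only: encode a specific errno in the returned pointer. */
	static struct example_ops *example_counter_init(int cpu_supported, int pmu_busy)
	{
		if (!cpu_supported)
			return ERR_PTR(-ENXIO);	/* "we don't support this CPU" */
		if (pmu_busy)
			return ERR_PTR(-EBUSY);	/* "oprofile already has the PMU" */
		return NULL;			/* NULL still maps to -EINVAL */
	}

	/* Caller side:
	 *	ops = example_counter_init(...);
	 *	if (IS_ERR(ops))
	 *		return PTR_ERR(ops);	-- exact error reaches userspace
	 */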
-
Committed by Paul Mackerras

Impact: cooperate with oprofile

At present, on PowerPC, if you have perf_counters compiled in, oprofile doesn't work. There is code to allow the PMU to be shared between competing subsystems, such as perf_counters and oprofile, but currently the perf_counter subsystem reserves the PMU for itself at boot time, and never releases it.

This makes perf_counter play nicely with oprofile. Now we keep a count of how many perf_counter instances are counting hardware events, reserve the PMU when that count becomes non-zero, and release the PMU when that count becomes zero. This means that it is possible to have perf_counters compiled in and still use oprofile, as long as there are no hardware perf_counters active. It also means that if oprofile is active, sys_perf_counter_open will fail if the hw_event specifies a hardware event.

To avoid races with other tasks creating and destroying perf_counters, we use a mutex. We use atomic_inc_not_zero and atomic_add_unless to avoid having to take the mutex unless there is a possibility of the count going between 0 and 1.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090330171023.627912475@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
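A minimal sketch of that counting scheme, with illustrative names for the reserve/release helpers; the fast paths skip the mutex exactly when the count cannot cross the 0/1 boundary:

	extern int reserve_pmu(void);	/* illustrative; fails if oprofile holds the PMU */
	extern void release_pmu(void);

	static atomic_t num_hw_counters = ATOMIC_INIT(0);
	static DEFINE_MUTEX(pmc_reserve_mutex);

	static int hw_counter_get(void)
	{
		int err = 0;

		if (atomic_inc_not_zero(&num_hw_counters))
			return 0;		/* count was already > 0 */

		mutex_lock(&pmc_reserve_mutex);
		if (atomic_read(&num_hw_counters) == 0)
			err = reserve_pmu();
		if (!err)
			atomic_inc(&num_hw_counters);
		mutex_unlock(&pmc_reserve_mutex);
		return err;
	}

	static void hw_counter_put(void)
	{
		if (atomic_add_unless(&num_hw_counters, -1, 1))
			return;			/* count stays > 0 */

		mutex_lock(&pmc_reserve_mutex);
		if (atomic_dec_and_test(&num_hw_counters))
			release_pmu();
		mutex_unlock(&pmc_reserve_mutex);
	}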
-
Committed by Peter Zijlstra

While going over the wakeup code I noticed delayed wakeups only work for hardware counters, but basically all software counters rely on them. This patch unifies and generalizes the delayed wakeup to fix this issue.

Since we're dealing with NMI context bits here, use a cmpxchg() based singly linked list implementation to track counters that have pending wakeups.

[ This should really be generic code for delayed wakeups, but since we cannot use cmpxchg()/xchg() in generic code, I've let it live in the perf_counter code. -- Eric Dumazet could use it to aggregate the network wakeups. ]

Furthermore, the x86 method of using TIF flags was flawed in that it's quite possible to end up setting the bit on the idle task, losing the wakeup. The powerpc method uses per-cpu storage and does appear to be sufficient.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090330171023.153932974@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
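The NMI-safe list boils down to a lock-free push using cmpxchg(), along these lines (names illustrative):

	struct pending_entry {
		struct pending_entry *next;
	};

	static struct pending_entry *pending_head;

	/* Safe from NMI context: no locks, just retry on contention. */
	static void pending_push(struct pending_entry *entry)
	{
		struct pending_entry *old;

		do {
			old = pending_head;
			entry->next = old;
		} while (cmpxchg(&pending_head, old, entry) != old);
	}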
-
Committed by Paul Mackerras

Impact: new functionality

Currently, if there are more counters enabled than can fit on the CPU, the kernel will multiplex the counters on to the hardware using round-robin scheduling. That isn't too bad for sampling counters, but for counting counters it means that the value read from a counter represents some unknown fraction of the true count of events that occurred while the counter was enabled.

This remedies the situation by keeping track of how long each counter is enabled for, and how long it is actually on the cpu and counting events. These times are recorded in nanoseconds using the task clock for per-task counters and the cpu clock for per-cpu counters.

These values can be supplied to userspace on a read from the counter. Userspace requests that they be supplied after the counter value by setting the PERF_FORMAT_TOTAL_TIME_ENABLED and/or PERF_FORMAT_TOTAL_TIME_RUNNING bits in the hw_event.read_format field when creating the counter. (There is no way to change the read format after the counter is created, though it would be possible to add some way to do that.)

Using this information it is possible for userspace to scale the count it reads from the counter to get an estimate of the true count:

	true_count_estimate = count * total_time_enabled / total_time_running

This also lets userspace detect the situation where the counter never got to go on the cpu: total_time_running == 0.

This functionality has been requested by the PAPI developers, and will be generally needed for interpreting the count values from counting counters correctly.

In the implementation, this keeps 5 time values (in nanoseconds) for each counter: total_time_enabled and total_time_running are used when the counter is in state OFF or ERROR and for reporting back to userspace. When the counter is in state INACTIVE or ACTIVE, it is the tstamp_enabled, tstamp_running and tstamp_stopped values that are relevant, and total_time_enabled and total_time_running are determined from them. (tstamp_stopped is only used in INACTIVE state.) The reason for doing it like this is that it means that only counters being enabled or disabled at sched-in and sched-out time need to be updated. There are no new loops that iterate over all counters to update total_time_enabled or total_time_running.

This also keeps separate child_total_time_running and child_total_time_enabled fields that get added in when reporting the totals to userspace. They are separate fields so that they can be atomic. We don't want to use atomics for total_time_running, total_time_enabled etc., because then we would have to use atomic sequences to update them, which are slower than regular arithmetic and memory accesses.

It is possible to measure total_time_running by adding a task_clock counter to each group of counters, and total_time_enabled can be measured approximately with a top-level task_clock counter (though inaccuracies will creep in if you need to disable and enable groups, since it is not possible in general to disable/enable the top-level task_clock counter simultaneously with another group). However, that adds extra overhead - I measured around a 15% increase in the context switch latency reported by lat_ctx (from lmbench) when a task_clock counter was added to each of 2 groups, and around a 25% increase when a task_clock counter was added to each of 4 groups. (In both cases a top-level task-clock counter was also added.)

In contrast, the code added in this commit gives better information with no overhead that I could measure (in fact in some cases I measured lower times with this code, but the differences were all less than one standard deviation).

[ v2: address review comments by Andrew Morton. ]

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrew Morton <akpm@linux-foundation.org>
Orig-LKML-Reference: <18890.6578.728637.139402@cargo.ozlabs.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
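As a hedged userspace sketch, the scaling described above looks like this; the struct layout assumes both PERF_FORMAT_* bits were set at counter creation, and error handling is trimmed:

	#include <stdint.h>
	#include <unistd.h>

	struct read_format {
		uint64_t value;		/* the raw count */
		uint64_t time_enabled;	/* PERF_FORMAT_TOTAL_TIME_ENABLED */
		uint64_t time_running;	/* PERF_FORMAT_TOTAL_TIME_RUNNING */
	};

	static uint64_t scaled_count(int fd)
	{
		struct read_format rf;

		if (read(fd, &rf, sizeof(rf)) != sizeof(rf))
			return 0;
		if (rf.time_running == 0)
			return 0;	/* counter never got on the cpu */

		/* true_count_estimate = count * total_time_enabled / total_time_running */
		return rf.value * rf.time_enabled / rf.time_running;
	}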
-
Committed by Peter Zijlstra

Impact: rework the perfcounter output ABI

Use sys_read() only for instant data and provide mmap() output for all async overflow data. The first mmap() determines the size of the output buffer. The mmap() size must be PAGE_SIZE * (1 + pages), where pages must be a power of 2 or 0. Further mmap()s of the same fd must have the same size. Once all maps are gone, you can again mmap() with a new size.

In case of 0 extra pages there is no data output and the first page only contains meta data. When there are data pages, a poll() event will be generated for each full page of data. Furthermore, the output is circular. This means that although 1 page is a valid configuration, it's useless, since we'll start overwriting it the instant we report a full page.

Future work will focus on the output format (currently maintained), where we'll likely want each entry denoted by a header which includes a type and length. Further future work will allow splice()ing the fd, also containing the async overflow data -- splice() would be mutually exclusive with mmap() of the data.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.470536358@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
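The sizing rule translates to a userspace sketch like the following (a minimal illustration, assuming a counter fd obtained elsewhere):

	#include <sys/mman.h>
	#include <unistd.h>

	/* Map 1 metadata page plus 2^n data pages, per the rule above. */
	static void *map_counter_output(int fd, unsigned long data_pages)
	{
		size_t len = (1 + data_pages) * sysconf(_SC_PAGESIZE);

		/* every mmap() of this fd must request the same size */
		return mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
	}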
-
Committed by Paul Mackerras

Impact: new feature giving performance improvement

This adds the ability for userspace to do an mmap on a hardware counter fd and get access to a read-only page that contains the information needed to translate a hardware counter value to the full 64-bit counter value that would be returned by a read on the fd. This is useful on architectures that allow user programs to read the hardware counters, such as PowerPC.

The mmap will only succeed if the counter is a hardware counter monitoring the current process.

On my quad 2.5GHz PowerPC 970MP machine, userspace can read a counter and translate it to the full 64-bit value in about 30ns using the mmapped page, compared to about 830ns for the read syscall on the counter, so this does give a significant performance improvement.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Orig-LKML-Reference: <20090323172417.297057964@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
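In outline, userspace combines the raw hardware counter with kernel-maintained translation data, retrying if the kernel updated the page mid-read. The struct layout and read_hw_counter() below are assumptions for illustration, not the exact ABI:

	#include <stdint.h>

	struct counter_page {			/* assumed layout, for illustration */
		uint32_t lock;			/* bumped by the kernel around updates */
		uint32_t index;			/* which hardware counter to read */
		int64_t  offset;		/* add to the hw value for the 64-bit count */
	};

	extern uint64_t read_hw_counter(uint32_t index);  /* e.g. an mfspr on PowerPC */

	static uint64_t read_counter(volatile struct counter_page *pg)
	{
		uint32_t seq;
		uint64_t count;

		do {
			seq = pg->lock;
			__sync_synchronize();
			count = pg->offset + read_hw_counter(pg->index);
			__sync_synchronize();
		} while (pg->lock != seq);	/* retry if the kernel raced us */

		return count;
	}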
-
Committed by Peter Zijlstra

Since the bitfields turned into a bit of a mess, remove them and rely on good old masks.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Orig-LKML-Reference: <20090323172417.059499915@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Paul Mackerras

Impact: build fix for powerpc

Commit db3a944aca35ae61 ("perf_counter: revamp syscall input ABI") expanded the hw_event.type field into a union of structs containing bitfields. In particular it introduced a type field and a raw_type field, with the intention that the 1-bit raw_type field should overlay the most-significant bit of the 8-bit type field, and in fact perf_counter_alloc() now assumes that (or at least, assumes that raw_type doesn't overlay any of the bits that are 1 in the values of PERF_TYPE_{HARDWARE,SOFTWARE,TRACEPOINT}).

Unfortunately this is not true on big-endian systems such as PowerPC, where bitfields are laid out from left to right, i.e. from most significant bit to least significant. This means that setting hw_event.type = PERF_TYPE_SOFTWARE will set hw_event.raw_type to 1.

This fixes it by making the layout depend on whether or not __BIG_ENDIAN_BITFIELD is defined. It's a bit ugly, but that's what we get for using bitfields in a user/kernel ABI.

Also, that commit didn't fix up some places in arch/powerpc/kernel/perf_counter.c where hw_event.raw and hw_event.event_id were used. This fixes them too.

Signed-off-by: Paul Mackerras <paulus@samba.org>
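The shape of the fix is an endian-conditional layout along these lines; the field widths are inferred from the description above, not copied from the ABI:

	struct example_hw_event {
		union {
	#ifdef __BIG_ENDIAN_BITFIELD
			struct {
				__u64	type		:  8,	/* MSB-first on big-endian */
					event_id	: 56;
			};
			struct {
				__u64	raw_type	:  1,	/* overlays type's top bit */
					raw_event_id	: 63;
			};
	#else	/* little-endian: LSB-first */
			struct {
				__u64	event_id	: 56,
					type		:  8;
			};
			struct {
				__u64	raw_event_id	: 63,
					raw_type	:  1;
			};
	#endif
		};
	};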
-
Committed by Paul Mackerras

Impact: cleanup

This updates the powerpc perf_counter_interrupt following on from the "perf_counter: unify irq output code" patch. Since we now use the generic perf_counter_output code, which sets the perf_counter_pending flag directly, we no longer need the need_wakeup variable. This removes need_wakeup and makes perf_counter_interrupt use get_perf_counter_pending() instead.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194234.024464535@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Impact: cleanup

Having 3 slightly different copies of the same code around does nobody any good. First step in revamping the output format.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.929962222@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Impact: modify ABI

The hardware/software classification in hw_event->type became a little strained due to the addition of tracepoint tracing. Instead, split up the field and provide a type field to explicitly specify the counter type, while using the event_id field to specify which event to use. Raw counters still work as before, only the raw config now goes into raw_event.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Orig-LKML-Reference: <20090319194233.836807573@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Paul Mackerras

Impact: build fix for powerpc

Commit bd753921015e7905 ("perf_counter: software counter event infrastructure") introduced a use of TIF_PERF_COUNTERS into the core perfcounter code. This breaks the build on powerpc, because we use a flag in a per-cpu area to signal wakeups on powerpc rather than a thread_info flag, because the thread_info flags have to be manipulated with atomic operations and are thus slower than per-cpu flags.

This fixes it by changing the core to use an abstracted set_perf_counter_pending() function, which is defined on x86 to set the TIF_PERF_COUNTERS flag and on powerpc to set the per-cpu flag (paca->perf_counter_pending). It changes the previous powerpc definition of set_perf_counter_pending to not take an argument and adds a clear_perf_counter_pending, so as to simplify the definition on x86.

On x86, set_perf_counter_pending() is defined as a macro. Defining it as a static inline in arch/x86/include/asm/perf_counters.h causes compile failures because <asm/perf_counters.h> gets included early in <linux/sched.h>, and the definitions of set_tsk_thread_flag etc. are therefore not available in <asm/perf_counters.h>. (On powerpc this problem is avoided by defining set_perf_counter_pending etc. in <asm/hw_irq.h>.)

Signed-off-by: Paul Mackerras <paulus@samba.org>
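Reconstructed from the description above, the two per-arch shapes of the abstraction look roughly like this (illustrative, not verbatim kernel code; the two halves are alternatives, one per architecture):

	/* x86: a macro, dodging the include-order problem described above. */
	#define set_perf_counter_pending() \
		set_tsk_thread_flag(current, TIF_PERF_COUNTERS)
	#define clear_perf_counter_pending() \
		clear_tsk_thread_flag(current, TIF_PERF_COUNTERS)

	/* powerpc: a per-cpu flag in the paca, defined in <asm/hw_irq.h>. */
	static inline void set_perf_counter_pending(void)
	{
		get_paca()->perf_counter_pending = 1;
	}

	static inline void clear_perf_counter_pending(void)
	{
		get_paca()->perf_counter_pending = 0;
	}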
-
Committed by Ingo Molnar

Impact: fix boot crash on Intel Perfmon Version 1 systems

Intel Perfmon v1 does not support the global MSRs, nor does it offer the generalized MSR ranges. So support v2 and later CPUs only. Also mark pmc_ops as read-mostly, to avoid false cacheline sharing.

Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

Provide separate sw counters for major and minor page faults.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra

We use the generic software counter infrastructure to provide page fault events.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
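The hook site this implies is essentially a one-liner in the fault handler, along these lines; the helper name follows the subsystem's style, but the exact signature here is an assumption:

	/* e.g. in do_page_fault(), once the fault has been classified: */
	perf_swcounter_event(PERF_COUNT_PAGE_FAULTS, 1, 0, regs);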
-
Committed by Peter Zijlstra

Fix a build warning on 32-bit machines by explicitly marking the constants as 64-bit.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
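The class of warning being fixed looks like this (illustrative constants, not the ones from the patch):

	/* On a 32-bit build, a plain integer constant is 32 bits wide, so
	 * shifting it past bit 31 is undefined and the compiler warns: */
	#define EVENT_MASK_BAD	(1 << 40)	/* broken on 32-bit */
	#define EVENT_MASK_GOOD	(1ULL << 40)	/* explicitly 64-bit, as the fix does */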
-
Committed by Peter Zijlstra

We need to ensure the enabled=0 write happens before we start disabling the actual counters, so that a pcm_amd_enable() will not enable one underneath us. I think the race is impossible anyway; we always balance the ops within any one context and perform enable() with IRQs disabled.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
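As a minimal sketch with illustrative names, the ordering amounts to forcing the flag store to be visible before the per-counter disables begin:

	struct example_cpu_state {
		int enabled;
		int nr_counters;
	};

	extern void disable_counter(struct example_cpu_state *st, int i);

	static void example_disable_all(struct example_cpu_state *st)
	{
		int i;

		st->enabled = 0;
		barrier();	/* order the store before the loop below, so a
				 * concurrent enable path that tests ->enabled
				 * cannot re-enable a counter underneath us */
		for (i = 0; i < st->nr_counters; i++)
			disable_counter(st, i);
	}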
-
Committed by Peter Zijlstra

No need to assume the irq_period is 32-bit.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 05 Apr, 2009 1 commit
-
-
Committed by Philipp Zabel

The PASIC3 driver now calculates its register spacing from the resource size.

Signed-off-by: Philipp Zabel <philipp.zabel@gmail.com>
Signed-off-by: Samuel Ortiz <sameo@openedhand.com>
-
- 04 Apr, 2009 10 commits
-
-
Committed by Takashi Yoshii

PCI still doesn't work on sh7785lcr in 29-bit 256M map mode. On SH7785, the PCI -> SHwy address translation is not base+offset but somewhat like base|offset (see HW Manual (rej09b0261) Fig. 13.11). So, you can't export CS2,3,4,5 by putting 256M at CS2 (that results in CS0,1,2,3 being exported, I guess).

There are two candidates:
 a) 128M@CS2 + 128M@CS4
 b) 512M@CS0

The attached patch is b). It maps 512M bytes at 0, independently of memory size. As a result, CS0 through CS6 (and perhaps some more) are accessible from PCI.

Tested on:
 - 7785lcr 29-bit 128M map
 - 7785lcr 29-bit 256M map
(NOT tested on 32-bit.)

Signed-off-by: Takashi YOSHII <yoshii.takashi@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Michael Trimarchi

There were a number of issues with the DSP context save/restore code, mostly left-over relics from when it was introduced on SH3-DSP with little follow-up testing, resulting in things like task_pt_dspregs() referencing incorrect state on the stack.

This follows the MIPS convention of tracking the DSP state in the thread_struct and handling the state save/restore in switch_to() and finish_arch_switch() respectively. The regset interface is also updated, which allows us to finally be rid of task_pt_dspregs() and the special-cased task_pt_regs().

Signed-off-by: Michael Trimarchi <michael@evidence.eu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Paul Mundt

This accidentally regressed when the multi-IRQ changes went in, switching SH7091 from 4 to 6 channels. Add SH7091 back in to the 4-channel dependency list.

Reported-by: Adrian McMenamin <adrian@mcmen.demon.co.uk>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
Committed by Suresh Siddha

All logical processors with APIC ID values of 255 and greater will have their APIC reported through the Processor X2APIC structure (type-9 entry type), and all logical processors with APIC ID less than 255 will have their APIC reported through the legacy Processor Local APIC (type-0 entry type) only. This is the same case even for NMI structure reporting.

The Processor X2APIC Affinity structure provides the association between the X2APIC ID of a logical processor and the proximity domain to which the logical processor belongs.

For OSPM, Processor IDs outside the 0-254 range are to be declared as Device() objects in the ACPI namespace.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
Committed by Ingo Molnar

The MTRR code grew a new debug message which triggers commonly:

 [   40.142276] get_mtrr: cpu0 reg00 base=0000000000 size=0000080000 write-back
 [   40.142280] get_mtrr: cpu0 reg01 base=0000080000 size=0000040000 write-back
 [   40.142284] get_mtrr: cpu0 reg02 base=0000100000 size=0000040000 write-back
 [   40.142311] get_mtrr: cpu0 reg00 base=0000000000 size=0000080000 write-back
 [   40.142314] get_mtrr: cpu0 reg01 base=0000080000 size=0000040000 write-back
 [   40.142317] get_mtrr: cpu0 reg02 base=0000100000 size=0000040000 write-back

Remove this annoyance.

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Suresh Siddha

The pci mmap code has been doing memtype reservation for a while now. Recently we added memtype tracking in remap_pfn_range, and pci code indirectly calls remap_pfn_range. So, we don't need separate tracking in pci code anymore. Which means a patch that removes ~50 lines of code :-).

Also, recently we found out that the pci tracking is not working as we expect it to work in some cases. Specifically, userlevel X mmap of pci, with some recent versions of X, is having a problem with vm_page_prot getting reset. The pci tracking uses vm_page_prot to pass on the protection type from parent to child during fork:

 a) Parent does a pci mmap.
 b) We look at PAT and get either a UC_MINUS or WC mapping for the parent.
 c) We store that mapping type in the vma vm_page_prot for future use.
 d) This thread does a fork.
 e) The fork results in mmap_ops ->open for the child process.
 f) We get the vm_page_prot from the vma and reserve that type for the child process.

But, between c) and e) above, the vma vm_page_prot is getting reset to zero. This results in the PAT reserve failing at the time of fork, as in here: http://marc.info/?l=linux-kernel&m=123858163103240&w=2

This cleanup makes the above problem go away, as we do not depend on vm_page_prot in our PAT code anymore.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Joseph Cihula

The __restore_processor_state() fn restores %gs on resume from S3. As such, it cannot be protected by the stack-protector guard, since %gs will not be correct on function entry. There are only a few other fns in this file, and it should not negatively impact kernel security that they will also have the stack-protector guard removed (so it's not worth moving them to another file).

Without this change, S3 resume on a kernel built with CONFIG_CC_STACKPROTECTOR_ALL=y will fail.

Signed-off-by: Joseph Cihula <joseph.cihula@intel.com>
Tested-by: Chris Wright <chrisw@sous-sol.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Tejun Heo <tj@kernel.org>
LKML-Reference: <49D13385.5060900@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Akinobu Mita

Commit 7ca43e75 ("mm: use debug_kmap_atomic") introduced some debug_kmap_atomic() calls in the wrong places.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andrew Morton

i386 allnoconfig:

 arch/x86/mm/iomap_32.c: In function 'is_io_mapping_possible':
 arch/x86/mm/iomap_32.c:27: warning: comparison is always false due to limited range of data type

Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Zhang Rui

Update the ACPI Development Discussion List.

Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
-
- 03 Apr, 2009 7 commits
-
-
Committed by H. Peter Anvin

Impact: code size reduction (possibly critical)

The x86 boot and decompression code has no use for the branch profiling constructs, so disable them. Left enabled, they would bloat the setup code by as much as 14K, eating up a fairly large chunk of the 32K area we are guaranteed to have.

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Joerg Roedel

Impact: unification of pci-dma macros and pci_32.h removal

This patch unifies the definitions of the pci_unmap_addr*, pci_unmap_len* and DECLARE_PCI_UNMAP* macros. This makes sense because the pci_unmap functions are no longer no-ops when the kernel runs with CONFIG_DMA_API_DEBUG. Without an iommu or DMA_API_DEBUG, it is a no-op on 32 bit because the dma mapping path returns a physical address, and therefore the dma-api implementation has no internal state which needs to be destroyed with an unmap call.

This unification also simplifies the port of x86_64 iommu drivers to 32-bit x86 and lets us get rid of pci_32.h.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Acked-by: Stephen Hemminger <shemminger@vyatta.com>
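For reference, the pci_unmap macros follow this pattern in their non-no-op form (a sketch of the general shape, not the exact unified header):

	/* Declare storage in a driver struct for a dma address/length,
	 * and access it; when unmap is a no-op these can expand to nothing. */
	#define DECLARE_PCI_UNMAP_ADDR(ADDR_NAME)	dma_addr_t ADDR_NAME;
	#define DECLARE_PCI_UNMAP_LEN(LEN_NAME)		__u32 LEN_NAME;
	#define pci_unmap_addr(PTR, ADDR_NAME)		((PTR)->ADDR_NAME)
	#define pci_unmap_addr_set(PTR, ADDR_NAME, VAL)	(((PTR)->ADDR_NAME) = (VAL))
	#define pci_unmap_len(PTR, LEN_NAME)		((PTR)->LEN_NAME)
	#define pci_unmap_len_set(PTR, LEN_NAME, VAL)	(((PTR)->LEN_NAME) = (VAL))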
-
Committed by Chris Zankel

Remove the include statement for asm/io.h.

Signed-off-by: Chris Zankel <chris@zankel.net>
-
Committed by Chris Zankel

We only add the platform or variant directory to core-y if it contains a Makefile. Consequently, we can remove the Makefiles for the dc232b and fsf processor variants.

Signed-off-by: Chris Zankel <chris@zankel.net>
-
Committed by Daniel Glöckner

Move it from .text to .init.text to get rid of it after boot and prevent illegal section references.

Signed-off-by: Daniel Glöckner <dg@emlix.com>
Signed-off-by: Chris Zankel <chris@zankel.net>
-
Committed by Johannes Weiner

Switch to GENERIC_TIME by using the ccount register as a clock source.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Chris Zankel <chris@zankel.net>
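In outline, the change registers the ccount register as a clocksource, roughly like this sketch (the rating and mask values here are assumptions):

	#include <linux/clocksource.h>

	static cycle_t ccount_read(void)
	{
		return (cycle_t)get_ccount();	/* the xtensa cycle counter */
	}

	static struct clocksource ccount_clocksource = {
		.name	= "ccount",
		.rating	= 200,			/* assumed rating */
		.read	= ccount_read,
		.mask	= CLOCKSOURCE_MASK(32),	/* ccount is 32 bits wide */
	};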
-
Committed by Johannes Weiner

platform_get/set_rtc_time() is not implemented by any of the supported xtensa platforms. Remove the facility completely.

The initial seconds for xtime come from read_persistent_clock(), which returns just 0 in the generic implementation. Platforms that sport a persistent clock can implement this function.

This is needed to implement the ccount as a clock source.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Chris Zankel <chris@zankel.net>
-