- 09 Feb 2015, 2 commits
-
-
By Sylwester Nawrocki
Now that the CDCLK I2S output clock can be handled through the clock API, the Odroid X2/U3 can be switched to the simple-audio-card DT binding. Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Mark Brown <broonie@kernel.org>
-
By Sylwester Nawrocki
Clock related properties are added to the Exynos4 I2S device nodes so they can be referred to as clock providers. The missing i2s_opclk1 clock is added to the I2S0 node, and clock properties are added to the MAX98090 codec node to allow it to control/read the frequency of the MCLK clock directly. Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Mark Brown <broonie@kernel.org>
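For context, a minimal sketch of what the codec driver can do on the C side once its DT node carries clock properties; the clock name "mclk" and the function name are assumptions, not the exact driver code:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/* Sketch only: assumes the codec node gained clocks/clock-names
 * entries exposing MCLK as "mclk"; the real driver may differ. */
static int max98090_mclk_probe_sketch(struct device *dev)
{
	struct clk *mclk;

	mclk = devm_clk_get(dev, "mclk");
	if (IS_ERR(mclk))
		return PTR_ERR(mclk);

	/* The driver can now read the MCLK frequency directly. */
	dev_info(dev, "MCLK rate: %lu Hz\n", clk_get_rate(mclk));
	return 0;
}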
-
- 20 Dec 2014, 15 commits
-
-
By Jesper Nilsson
There are no users of this symbol left. Reported-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Jesper Nilsson
Move the pinmux alloc/dealloc code into functions that don't take the spinlock, so we can use them from code that already holds the spinlock. CRISv32 has no working SMP, so spinlocks become no-ops and the deadlock was never seen. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
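A minimal sketch of the described split, with illustrative names (the real CRISv32 functions differ in signature):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(pinmux_lock);

/* Does the real work; the caller must already hold pinmux_lock. */
static int pinmux_alloc_nolock(int port, int first_pin, int last_pin)
{
	/* ... mark the pin range as allocated ... */
	return 0;
}

/* Public entry point: takes the lock, then calls the lockless helper.
 * Code that already holds pinmux_lock calls pinmux_alloc_nolock()
 * directly, avoiding the (previously silent) recursive lock. */
int pinmux_alloc(int port, int first_pin, int last_pin)
{
	unsigned long flags;
	int ret;

	spin_lock_irqsave(&pinmux_lock, flags);
	ret = pinmux_alloc_nolock(port, first_pin, last_pin);
	spin_unlock_irqrestore(&pinmux_lock, flags);
	return ret;
}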
-
By Jesper Nilsson
Fixes a compile error with allmodconfig. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Jesper Nilsson
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Jesper Nilsson
Also print the kernel version on oops. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Jesper Nilsson
Drop i2c_init from this header; it was declared non-static here but static in the C file. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Jesper Nilsson
Make it possible to load the driver as a module, and try to handle locking better. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Jesper Nilsson
- Add free_initrd_mem, as found by Guenter Roeck <linux@roeck-us.net> - Add free_init_pages - Export the empty_zero_page symbol Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
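A hedged guess at the shape of one of these additions; free_reserved_area() is the generic helper most arches use for this, though the actual CRIS patch may differ:

#include <linux/mm.h>

/* Assumption: a minimal free_initrd_mem for an arch that lacked one. */
void free_initrd_mem(unsigned long start, unsigned long end)
{
	free_reserved_area((void *)start, (void *)end, -1, "initrd");
}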
-
By Jesper Nilsson
Don't enter watchdog handling if we're already in watchdog handling. Also some minor formatting tweaks. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
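One common way to implement such a guard, as a sketch with an invented flag name:

/* Sketch: simple reentrancy guard around the watchdog handler.
 * 'watchdog_active' is an invented name for illustration. */
static int watchdog_active;

void handle_watchdog_bite_sketch(void)
{
	if (watchdog_active)
		return;		/* already handling a bite; don't recurse */
	watchdog_active = 1;

	/* ... dump registers, then reset or panic ... */
}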
-
By Jesper Nilsson
strcmp was lost when all the other string functions were removed, but we still have an optimized version of it on CRISv32, so any driver built as a module would not have access to this symbol. In a similar manner, we had optimized versions of csum_partial_copy_from_user and __do_clear_user but no exported symbols for them, breaking a bunch of other drivers when built as modules. At the same time, move EXPORT_SYMBOL(__copy_user) and EXPORT_SYMBOL(__copy_user_zeroing) to the C files so they are located together with the function definitions. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
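The convention being applied, illustrated with a self-contained strcmp (the actual CRISv32 version is hand-optimized): EXPORT_SYMBOL sits next to the definition so modules can link against the symbol.

#include <linux/module.h>
#include <linux/string.h>

/* Illustrative implementation only; the CRISv32 one is optimized. */
int strcmp(const char *s1, const char *s2)
{
	while (*s1 && *s1 == *s2) {
		s1++;
		s2++;
	}
	return *(const unsigned char *)s1 - *(const unsigned char *)s2;
}
EXPORT_SYMBOL(strcmp);	/* exported right where it is defined */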
-
By Jesper Nilsson
Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Sam Ravnborg
Fix headers_install by adjusting the path to the arch files, and delete the unused Kbuild file. As a nice side effect, this drops the special handling of cris in the headers.sh script. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Cc: Mikael Starvik <starvik@axis.com> Cc: David Howells <dhowells@redhat.com> Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Jesper Nilsson
Fixes the following compile error: arch/cris/arch-v32/kernel/time.c: In function 'reset_watchdog': arch/cris/arch-v32/kernel/time.c:121:2: error: implicit declaration of function 'global_page_state' Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
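The usual fix for an implicit-declaration error is to include the header that declares the function; a hedged guess at the patch (the actual change may differ):

/* arch/cris/arch-v32/kernel/time.c (assumed fix):
 * global_page_state() is declared in linux/vmstat.h. */
#include <linux/vmstat.h>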
-
By Jesper Nilsson
The file was already deleted. Signed-off-by: Jesper Nilsson <jesper.nilsson@axis.com>
-
By Rafael J. Wysocki
Having switched over all of the users of CONFIG_PM_RUNTIME to use CONFIG_PM directly, turn the latter into a user-selectable option and drop the former entirely from the tree. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Kevin Hilman <khilman@linaro.org>
-
- 19 Dec 2014, 2 commits
-
-
By Alexander Graf
Commit 69111bac ("powerpc: Replace __get_cpu_var uses") broke the build for the e500 target by generating invalid C syntax during the automated conversion. Fix up the breakage and make the code compile again. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
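For context, a hedged sketch of the kind of conversion that commit performed; the per-cpu variable and helper here are invented, and a mechanical rewrite of code like this is what produced the invalid syntax on e500:

#include <linux/percpu.h>

struct dbg_state { int count; };		/* invented type */
static DEFINE_PER_CPU(struct dbg_state, dbg_state);

static void touch_dbg_state(void)
{
	struct dbg_state *p;

	/* Old style: p = &__get_cpu_var(dbg_state); */
	p = this_cpu_ptr(&dbg_state);	/* new this_cpu accessor */
	p->count++;
}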
-
By Andreas Larsson
Load balancing can be triggered in the critical sections protected by srmmu_context_spinlock in destroy_context() and switch_mm(), and can hang the CPU waiting for the rq lock of another CPU that in turn has called switch_mm() and is hanging on srmmu_context_spinlock, leading to deadlock. So, disable interrupts while taking srmmu_context_spinlock in destroy_context() and switch_mm() so we don't deadlock. See also commit 77b838fa ("[SPARC64]: destroy_context() needs to disable interrupts.") Signed-off-by: Andreas Larsson <andreas@gaisler.com> Signed-off-by: David S. Miller <davem@davemloft.net>
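The fix follows the standard pattern for a lock that scheduler-context code can also contend for: take it with interrupts disabled. A minimal sketch:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(srmmu_context_spinlock);

static void destroy_context_sketch(void)
{
	unsigned long flags;

	/* Was: spin_lock(&srmmu_context_spinlock); -- a scheduler
	 * interrupt inside the critical section could then trigger
	 * load balancing and deadlock against the rq lock. */
	spin_lock_irqsave(&srmmu_context_spinlock, flags);
	/* ... free the MMU context ... */
	spin_unlock_irqrestore(&srmmu_context_spinlock, flags);
}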
-
- 18 Dec 2014, 12 commits
-
-
By Andy Lutomirski
It turns out that there's a lurking ABI issue. GCC, when compiling this in a 32-bit program: struct user_desc desc = { .entry_number = idx, .base_addr = base, .limit = 0xfffff, .seg_32bit = 1, .contents = 0, /* Data, grow-up */ .read_exec_only = 0, .limit_in_pages = 1, .seg_not_present = 0, .useable = 0, }; will leave .lm uninitialized. This means that anything in the kernel that reads user_desc.lm for 32-bit tasks is unreliable. Revert the .lm check in set_thread_area(). The value never did anything in the first place. Fixes: 0e58af4e ("x86/tls: Disallow unusual TLS segments") Signed-off-by: Andy Lutomirski <luto@amacapital.net> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org # Only if 0e58af4e is backported Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/d7875b60e28c512f6a6fc0baf5714d58e7eaadbb.1418856405.git.luto@amacapital.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
By Greg Kurz
Starting with POWER8, the subcore logic relies on all threads of a core being booted so that they can participate in split mode switches. So on those machines we ignore the smt_enabled_at_boot setting (smt-enabled on the kernel command line). Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com> [mpe: Update comment and change log to be more precise] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
By Christian Borntraeger
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Commit 1365039d ("KVM: s390: Fix ipte locking") replaced ACCESS_ONCE with barriers. Let's use READ_ONCE instead. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
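The shape of the conversion, as a hedged sketch with a simplified stand-in for the s390 ipte control type (the real union is larger):

#include <linux/compiler.h>

union ipte_control_sketch {	/* simplified stand-in, 8 bytes */
	unsigned long val;
	struct {
		unsigned long k : 1;	/* lock bit */
	} kc;
};

static void wait_for_ipte_unlock(union ipte_control_sketch *ic)
{
	union ipte_control_sketch old;

	do {
		/* READ_ONCE copes with this aggregate; ACCESS_ONCE on a
		 * non-scalar could lose its volatile under gcc 4.6/4.7 SRA. */
		old = READ_ONCE(*ic);
	} while (old.kc.k);
}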
-
By Christian Borntraeger
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Change the spinlock code to replace ACCESS_ONCE with READ_ONCE. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Christian Borntraeger
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Change the spinlock code to replace ACCESS_ONCE with READ_ONCE. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Christian Borntraeger
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Change the gup code to replace ACCESS_ONCE with READ_ONCE. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Christian Borntraeger
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Change the gup code to replace ACCESS_ONCE with READ_ONCE. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Christian Borntraeger
ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Change the spinlock code to replace ACCESS_ONCE with READ_ONCE. Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
By Paolo Bonzini
They are no longer used by IA64; move them away. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Michael S. Tsirkin
At the moment, if p and x are both of the same bitwise type (e.g. __le32), get_user(x, p) produces a sparse warning. This is because *p is loaded into a long then cast back to typeof(*p). When typeof(*p) is a bitwise type (which is uncommon), such a cast needs __force, otherwise sparse produces a warning. For non-bitwise types __force should have no effect, and should not hide any legitimate errors. Note that we are casting to typeof(*p), not typeof(x). Even with the cast, if x and *p are of different types we should get the warning, so I think we are not losing the ability to detect any actual errors. virtio would like to use bitwise types with get_user(), so fix these spurious warnings by adding __force. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> [mpe: Fill in changelog with more details] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
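A hedged, simplified sketch of the cast in question; the real powerpc get_user() has more plumbing, and sketch_get_user is an invented name:

#include <linux/uaccess.h>

/* Simplified: load the user value into a long, then cast back to the
 * target type (ignores endianness; illustrative only). __force tells
 * sparse the cast to a bitwise type such as __le32 is intentional;
 * for ordinary types it is a no-op. */
#define sketch_get_user(x, p)						\
({									\
	unsigned long __gu_val = 0;					\
	int __gu_err = copy_from_user(&__gu_val, (p), sizeof(*(p)))	\
			? -EFAULT : 0;					\
	(x) = (__force __typeof__(*(p)))__gu_val;			\
	__gu_err;							\
})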
-
By Anton Blanchard
The in-kernel XICS emulation is faster than doing it all in QEMU, and it has had a lot of testing, so enable it by default. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Linus Torvalds
My commit 26178ec1 ("x86: mm: consolidate VM_FAULT_RETRY handling") had a really stupid typo: the FAULT_FLAG_USER bit is in the 'flags' variable, not the 'fault' variable. Duh. The one silver lining in this is that Dave finding this at least confirms that trinity actually triggers this special path easily, in a way normal use does not. Reported-by: Dave Jones <davej@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
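The fix is essentially one identifier; sketched as a tiny function so the two variables are visible:

/* Sketch of the typo: FAULT_FLAG_USER lives in 'flags' (the fault
 * flags passed down), while 'fault' holds the VM_FAULT_* result. */
static void mm_fault_error_sketch(unsigned int flags, unsigned int fault)
{
	if (!(flags & FAULT_FLAG_USER))	/* fixed; the bug tested 'fault' */
		return;

	/* ... user-mode fault handling (e.g. deliver a signal) ... */
}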
-
- 17 Dec 2014, 9 commits
-
-
By Sam Bobroff
Currently the H_CONFER hcall is implemented in kernel virtual mode, meaning that whenever a guest thread does an H_CONFER, all the threads in that virtual core have to exit the guest. This is bad for performance because it interrupts the other threads even if they are doing useful work. The H_CONFER hcall is called by a guest VCPU when it is spinning on a spinlock and it detects that the spinlock is held by a guest VCPU that is currently not running on a physical CPU. The idea is to give this VCPU's time slice to the holder VCPU so that it can make progress towards releasing the lock. To avoid having the other threads exit the guest unnecessarily, we add a real-mode implementation of H_CONFER that checks whether the other threads are doing anything. If all the other threads are idle (i.e. in H_CEDE) or trying to confer (i.e. in H_CONFER), it returns H_TOO_HARD which causes a guest exit and allows the H_CONFER to be handled in virtual mode. Otherwise it spins for a short time (up to 10 microseconds) to give other threads the chance to observe that this thread is trying to confer. The spin loop also terminates when any thread exits the guest or when all other threads are idle or trying to confer. If the timeout is reached, the H_CONFER returns H_SUCCESS. In this case the guest VCPU will recheck the spinlock word and most likely call H_CONFER again. This also improves the implementation of the H_CONFER virtual mode handler. If the VCPU is part of a virtual core (vcore) which is runnable, there will be a 'runner' VCPU which has taken responsibility for running the vcore. In this case we yield to the runner VCPU rather than the target VCPU. We also introduce a check on the target VCPU's yield count: if it differs from the yield count passed to H_CONFER, the target VCPU has run since H_CONFER was called and may have already released the lock. This check is required by PAPR. Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Paul Mackerras
There are two ways in which a guest instruction can be obtained from the guest in the guest exit code in book3s_hv_rmhandlers.S. If the exit was caused by a Hypervisor Emulation interrupt (i.e. an illegal instruction), the offending instruction is in the HEIR register (Hypervisor Emulation Instruction Register). If the exit was caused by a load or store to an emulated MMIO device, we load the instruction from the guest by turning data relocation on and loading the instruction with an lwz instruction. Unfortunately, in the case where the guest has the opposite endianness to the host, these two methods give results of different endianness, but both get put into vcpu->arch.last_inst. The HEIR value has been loaded using guest endianness, whereas the lwz will load the instruction using host endianness. The rest of the code that uses vcpu->arch.last_inst assumes it was loaded using host endianness. To fix this, we define a new vcpu field to store the HEIR value. Then, in kvmppc_handle_exit_hv(), we transfer the value from this new field to vcpu->arch.last_inst, doing a byte-swap if the guest and host endianness differ. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
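The endianness fix boils down to a conditional byte-swap when transferring HEIR into last_inst; a hedged sketch with an invented helper name:

#include <linux/swab.h>
#include <linux/types.h>

/* Sketch: HEIR is captured in guest endianness, while the lwz path
 * loads in host endianness. Normalize to host order before storing
 * into vcpu->arch.last_inst. */
static u32 normalize_last_inst(u32 heir, bool guest_endian_differs)
{
	return guest_endian_differs ? swab32(heir) : heir;
}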
-
By Paul Mackerras
This removes the code that was added to enable HV KVM to work on PPC970 processors. The PPC970 is an old CPU that doesn't support virtualizing guest memory. Removing PPC970 support also lets us remove the code for allocating and managing contiguous real-mode areas, the code for the !kvm->arch.using_mmu_notifiers case, the code for pinning pages of guest memory when first accessed and keeping track of which pages have been pinned, and the code for handling H_ENTER hypercalls in virtual mode. Book3S HV KVM is now supported only on POWER7 and POWER8 processors. The KVM_CAP_PPC_RMA capability now always returns 0. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Suresh E. Warrier
This patch adds trace points in the guest entry and exit code and also for exceptions handled by the host in kernel mode - hypercalls and page faults. The new events are added to /sys/kernel/debug/tracing/events under a new subsystem called kvm_hv. Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Suresh Warrier <warrier@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Paul Mackerras
Currently the calculations of stolen time for PPC Book3S HV guests use fields in both the vcpu struct and the kvmppc_vcore struct. The fields in the kvmppc_vcore struct are protected by the vcpu->arch.tbacct_lock of the vcpu that has taken responsibility for running the virtual core. This works correctly but confuses lockdep, because it sees that the code takes the tbacct_lock for a vcpu in kvmppc_remove_runnable() and then takes another vcpu's tbacct_lock in vcore_stolen_time(), and it thinks there is a possibility of deadlock, causing it to print reports like this:

=============================================
[ INFO: possible recursive locking detected ]
3.18.0-rc7-kvm-00016-g8db4bc6 #89 Not tainted
---------------------------------------------
qemu-system-ppc/6188 is trying to acquire lock:
 (&(&vcpu->arch.tbacct_lock)->rlock){......}, at: [<d00000000ecb1fe8>] .vcore_stolen_time+0x48/0xd0 [kvm_hv]

but task is already holding lock:
 (&(&vcpu->arch.tbacct_lock)->rlock){......}, at: [<d00000000ecb25a0>] .kvmppc_remove_runnable.part.3+0x30/0xd0 [kvm_hv]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&vcpu->arch.tbacct_lock)->rlock);
  lock(&(&vcpu->arch.tbacct_lock)->rlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by qemu-system-ppc/6188:
 #0: (&vcpu->mutex){+.+.+.}, at: [<d00000000eb93f98>] .vcpu_load+0x28/0xe0 [kvm]
 #1: (&(&vcore->lock)->rlock){+.+...}, at: [<d00000000ecb41b0>] .kvmppc_vcpu_run_hv+0x530/0x1530 [kvm_hv]
 #2: (&(&vcpu->arch.tbacct_lock)->rlock){......}, at: [<d00000000ecb25a0>] .kvmppc_remove_runnable.part.3+0x30/0xd0 [kvm_hv]

stack backtrace:
CPU: 40 PID: 6188 Comm: qemu-system-ppc Not tainted 3.18.0-rc7-kvm-00016-g8db4bc6 #89
Call Trace:
[c000000b2754f3f0] [c000000000b31b6c] .dump_stack+0x88/0xb4 (unreliable)
[c000000b2754f470] [c0000000000faeb8] .__lock_acquire+0x1878/0x2190
[c000000b2754f600] [c0000000000fbf0c] .lock_acquire+0xcc/0x1a0
[c000000b2754f6d0] [c000000000b2954c] ._raw_spin_lock_irq+0x4c/0x70
[c000000b2754f760] [d00000000ecb1fe8] .vcore_stolen_time+0x48/0xd0 [kvm_hv]
[c000000b2754f7f0] [d00000000ecb25b4] .kvmppc_remove_runnable.part.3+0x44/0xd0 [kvm_hv]
[c000000b2754f880] [d00000000ecb43ec] .kvmppc_vcpu_run_hv+0x76c/0x1530 [kvm_hv]
[c000000b2754f9f0] [d00000000eb9f46c] .kvmppc_vcpu_run+0x2c/0x40 [kvm]
[c000000b2754fa60] [d00000000eb9c9a4] .kvm_arch_vcpu_ioctl_run+0x54/0x160 [kvm]
[c000000b2754faf0] [d00000000eb94538] .kvm_vcpu_ioctl+0x498/0x760 [kvm]
[c000000b2754fcb0] [c000000000267eb4] .do_vfs_ioctl+0x444/0x770
[c000000b2754fd90] [c0000000002682a4] .SyS_ioctl+0xc4/0xe0
[c000000b2754fe30] [c0000000000092e4] syscall_exit+0x0/0x98

In order to make the locking easier to analyse, we change the code to use a spinlock in the kvmppc_vcore struct to protect the stolen_tb and preempt_tb fields. This lock needs to be an irq-safe lock since it is used in the kvmppc_core_vcpu_load_hv() and kvmppc_core_vcpu_put_hv() functions, which are called with the scheduler rq lock held, which is an irq-safe lock. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
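The resulting scheme, sketched with simplified names: one irq-safe lock in the vcore covers the stolen-time fields, so lockdep no longer sees one vcpu's tbacct_lock taken inside another's:

#include <linux/spinlock.h>
#include <linux/types.h>

/* Sketch: simplified vcore with its own lock for stolen-time fields. */
struct vcore_sketch {
	spinlock_t stoltb_lock;	/* irq-safe: also taken under the rq lock */
	u64 stolen_tb;
	u64 preempt_tb;
};

static void vcore_account_stolen(struct vcore_sketch *vc, u64 now)
{
	unsigned long flags;

	spin_lock_irqsave(&vc->stoltb_lock, flags);
	vc->stolen_tb += now - vc->preempt_tb;
	spin_unlock_irqrestore(&vc->stoltb_lock, flags);
}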
-
By Rickard Strandqvist
Remove the function inst_set_field(), which is not used anywhere. This was partially found by using a static code analysis program called cppcheck. Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se> Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Rickard Strandqvist
Remove the function get_fpr_index(), which is not used anywhere. This was partially found by using a static code analysis program called cppcheck. Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se> Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Rickard Strandqvist
Remove some functions that are not used anywhere: kvmppc_core_load_guest_debugstate() kvmppc_core_load_host_debugstate() This was partially found by using a static code analysis program called cppcheck. Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se> Signed-off-by: Alexander Graf <agraf@suse.de>
-
By Rickard Strandqvist
Remove the function sr_nx(), which is not used anywhere. This was partially found by using a static code analysis program called cppcheck. Signed-off-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se> Signed-off-by: Alexander Graf <agraf@suse.de>
-