- 19 Jul 2014, 11 commits
-
-
Committed by Steven Rostedt (Red Hat)

Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code.

Cc: Ralf Baechle <ralf@linux-mips.org>
Tested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/53B08317.7010501@gmx.de
Cc: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code. [ Please test this on your arch ]

Cc: Matt Fleming <matt@console-pimps.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/20140703.211820.1674895115102216877.davem@davemloft.net
Cc: David S. Miller <davem@davemloft.net>
OKed-to-go-through-tracing-tree-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code.

Cc: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Zhigang Lu <zlu@tilera.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code.

Link: http://lkml.kernel.org/r/53C54D32.6000000@zytor.com
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(), and this is another example of killing all of function tracing because something went wrong with function graph tracing. Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it enters its function graph code.

Cc: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
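For reference, this is the shape of the check each arch adds — a minimal sketch modeled on a typical prepare_ftrace_return(), not any one arch's actual code:

```c
#include <linux/ftrace.h>

void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
{
	/* The graph tracer shut itself down after an error: leave the
	 * return address alone instead of killing all of ftrace. */
	if (unlikely(ftrace_graph_is_dead()))
		return;

	/* ... hook *parent to return through return_to_handler ... */
}
```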
-
Committed by Steven Rostedt (Red Hat)

ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(), and this is another example of killing all of function tracing because something went wrong with function graph tracing. Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it enters its function graph code.

Cc: Anton Blanchard <anton@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(), and this is another example of killing all of function tracing because something went wrong with function graph tracing. Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it enters its function graph code.

Link: http://lkml.kernel.org/r/53B08317.7010501@gmx.de
Cc: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Helge Deller <deller@gmx.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(), and this is another example of killing all of function tracing because something went wrong with function graph tracing. Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it enters its function graph code.

Cc: Ralf Baechle <ralf@linux-mips.org>
Tested-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(), and this is another example of killing all of function tracing because something went wrong with function graph tracing. Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it enters its function graph code.

Link: http://lkml.kernel.org/r/53C8D874.9090601@monstr.eu
Tested-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 17 Jul 2014, 3 commits
-
-
Committed by Steven Rostedt (Red Hat)

ftrace_stop() is going away, as it disables parts of function tracing that affect users who should not be affected. But ftrace_graph_stop() is built on ftrace_stop(), and this is another example of killing all of function tracing because something went wrong with function graph tracing. Instead of disabling all users of function tracing on a function graph error, disable only function graph tracing. To do this, the arch code must call ftrace_graph_is_dead() before it enters its function graph code.

Link: http://lkml.kernel.org/r/53C54D18.3020602@zytor.com
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Committed by Steven Rostedt (Red Hat)

ftrace_stop() is used to stop function tracing during suspend and resume, which removes a lot of possible debugging opportunities with tracing. The reason was that some function in the resume path was causing a triple fault if it were traced. The issue I found was that doing something as simple as calling smp_processor_id() would reboot the box!

When function tracing was first created, I didn't have a good way to figure out which function was having issues, or it looked to be multiple ones. To fix it, we just took a big-hammer approach to the problem and added a flag in the mcount trampoline that could be checked to avoid calling the traced functions.

Lately I developed better ways to find problem functions, and I can bisect down to see which function is causing the issue. I removed the flag that stopped tracing, proceeded to find the problem function, and it ended up being restore_processor_state(). This function makes sense: when the CPU comes back online from a suspend, it calls this function to set up registers, among them the GS register, which stores things such as which CPU the processor is (if you call smp_processor_id() without this set up properly, it would fault). By making restore_processor_state() notrace, the system can suspend and resume without needing the big hammer that stopped tracing.

Link: http://lkml.kernel.org/r/3577662.BSnUZfboWb@vostro.rjw.lan
Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
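The fix itself boils down to one annotation. A minimal sketch — the real function lives in the x86 power code and restores far more state than shown here:

```c
#include <linux/compiler.h>	/* notrace */

/* Because this runs before the GS base is restored, anything the tracer
 * calls (e.g. smp_processor_id()) would fault, so the function itself is
 * excluded from tracing rather than stopping all of ftrace. */
void notrace restore_processor_state(void)
{
	/* ... restore segment descriptors, GS base, MSRs, etc. ... */
}
```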
-
Committed by Steven Rostedt (Red Hat)

The function graph trampoline is called from the function trampoline, and both do a save and restore of registers. The save of registers done by the function trampoline when only the function graph tracer is running is a waste of CPU cycles. As the function graph trampoline on x86 is dependent on the function trampoline, we can call it directly when a function is only being traced by the function graph tracer.

Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
- 01 Jul 2014, 2 commits
-
-
Committed by Steven Rostedt (Red Hat)

There are several locations in the kernel that open code the calculation of the next location in the trace_seq buffer. This is usually done with

  p->buffer + p->len

Instead of having this open coded, supply a helper function in the header to do it for them. This function is called trace_seq_buffer_ptr().

Link: http://lkml.kernel.org/p/20140626220129.452783019@goodmis.org
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
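The helper is essentially the open-coded expression moved into the header — a sketch matching the description above:

```c
/* Next free byte in the trace_seq buffer; replaces open-coded
 * "p->buffer + p->len" at the call sites. */
static inline unsigned char *trace_seq_buffer_ptr(struct trace_seq *s)
{
	return s->buffer + s->len;
}
```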
-
Committed by Steven Rostedt (Red Hat)

Function graph tracing is a bit different from the function tracers: it is processed after either ftrace_caller or ftrace_regs_caller, and we only have one place to modify the jump to ftrace_graph_caller; the jump needs to happen after the restore of registers. The function graph tracer is dependent on the function tracer: even if function graph tracing is running by itself, the save and restore of registers is still done for function tracing, regardless of whether function tracing is happening, before the function graph code is called. If no function tracing is happening, it is possible to just call the function graph tracer directly and avoid the wasted effort of saving and restoring regs for function tracing.

This requires adding new flags to the dyn_ftrace records:

  FTRACE_FL_TRAMP
  FTRACE_FL_TRAMP_EN

The first is set if the count for the record is one and the ftrace_ops associated with that record has its own trampoline. That way the mcount code can call that trampoline directly. In the future, trampolines can be added to arbitrary ftrace_ops: you can have two or more ftrace_ops registered to ftrace (like kprobes and perf), and if they are not tracing the same functions, then instead of doing a loop to check all registered ftrace_ops against their hashes, just call the ftrace_ops trampoline directly, which would call the registered ftrace_ops function directly.

Without this patch perf showed:

  0.05%  hackbench  [kernel.kallsyms]  [k] ftrace_caller
  0.05%  hackbench  [kernel.kallsyms]  [k] arch_local_irq_save
  0.05%  hackbench  [kernel.kallsyms]  [k] native_sched_clock
  0.04%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
  0.04%  hackbench  [kernel.kallsyms]  [k] preempt_trace
  0.04%  hackbench  [kernel.kallsyms]  [k] prepare_ftrace_return
  0.04%  hackbench  [kernel.kallsyms]  [k] __this_cpu_preempt_check
  0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller

Note that ftrace_caller took up more time than ftrace_graph_caller did. With this patch:

  0.05%  hackbench  [kernel.kallsyms]  [k] __buffer_unlock_commit
  0.04%  hackbench  [kernel.kallsyms]  [k] call_filter_check_discard
  0.04%  hackbench  [kernel.kallsyms]  [k] ftrace_graph_caller
  0.04%  hackbench  [kernel.kallsyms]  [k] sched_clock

ftrace_caller is nowhere to be found, and ftrace_graph_caller still takes up the same percentage.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
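A hedged sketch of the dispatch idea — ftrace_find_tramp_ops() is a hypothetical name for the lookup, while FTRACE_FL_TRAMP, ops->trampoline and FTRACE_ADDR are the pieces named above:

```c
static unsigned long ftrace_call_target(struct dyn_ftrace *rec)
{
	if (rec->flags & FTRACE_FL_TRAMP) {
		/* Exactly one ops traces this site and it owns a
		 * trampoline: patch the call site to jump there directly. */
		struct ftrace_ops *ops = ftrace_find_tramp_ops(rec);

		return ops->trampoline;
	}
	/* Otherwise fall back to the generic loop over all ftrace_ops. */
	return (unsigned long)FTRACE_ADDR;
}
```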
-
- 29 Jun 2014, 6 commits
-
-
Committed by Will Deacon

On the syscall tracing path, we call out to secure_computing() to allow seccomp to check the syscall number being attempted. As part of this, a SIGTRAP may be sent to the tracer, and the syscall could be rewritten by a subsequent SET_SYSCALL ptrace request. Unfortunately, this new syscall is ignored by the current code unless TIF_SYSCALL_TRACE is also set on the current thread. This patch slightly reworks the enter path of the syscall tracing code so that we always reload the syscall number from current_thread_info()->syscall after the potential ptrace traps.

Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
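A condensed sketch of the reworked enter path — simplified, with tracehook_report_syscall() and PTRACE_SYSCALL_ENTER standing in for the ARM-local reporting helpers:

```c
asmlinkage int syscall_trace_enter(struct pt_regs *regs, int scno)
{
	current_thread_info()->syscall = scno;

	/* seccomp runs first; its SIGTRAP lets a tracer rewrite the call */
	if (secure_computing(scno) == -1)
		return -1;

	if (test_thread_flag(TIF_SYSCALL_TRACE))
		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);

	/* Always reload: a SET_SYSCALL request may have changed it. */
	return current_thread_info()->syscall;
}
```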
-
Committed by Laura Abbott

Commit 1c2f87c2 (ARM: 8025/1: Get rid of meminfo) changed find_limits to use memblock_get_current_limit for calculating the max_low pfn. nommu targets never actually set a limit on memblock, though, which means memblock_get_current_limit will just return the default value. Set memblock_limit to the end of DDR to make sure bounds are calculated correctly.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Andrea Adami

The CFI mapping is now perfect, so we can expose the top block, read-only. There isn't much to read, though, just the sharpsl_params values.

Signed-off-by: Andrea Adami <andrea.adami@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Andrea Adami

Reverts commit d26b17ed ("ARM: sa1100: collie.c: fall back to jedec_probe flash detection"). Unfortunately, the detection was challenged on the defective unit used for tests: one of the NOR chips did not respond to the CFI query. Moreover, that bad device needed extra delays on erase-suspend/resume cycles. Tested personally on 3 different units and with feedback from two other users.

Signed-off-by: Andrea Adami <andrea.adami@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Nicolas Pitre

The sync_phys variable was replaced by a link-time computation in mcpm_head.S before the code was submitted upstream.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Thomas Petazzoni

When a PL310 cache is used on a system that provides hardware coherency, the outer cache sync operation is useless and can be skipped. Moreover, on some systems it is harmful, as it causes deadlocks between the Marvell coherency mechanism, the Marvell PCIe controller and the Cortex-A9.

To avoid this, this commit introduces a new Device Tree property 'arm,io-coherent' for the L2 cache controller node, valid only for the PL310 cache. It identifies the usage of the PL310 cache in an I/O coherent configuration. Internally, it makes the driver disable the outer cache sync operation.

Note that, technically speaking, a fully coherent system wouldn't require any of the other .outer_cache operations. However, in practice, when booting secondary CPUs, these are not yet coherent, and therefore a set of cache maintenance operations is necessary at that point. This explains why we keep the other .outer_cache operations and only ->sync is disabled. While in theory any write to a PL310 register could cause the deadlock, in practice disabling ->sync is sufficient to work around it, since the other cache maintenance operations are only used in very specific situations.

Contrary to previous versions of this patch, this new version does not simply NULL-ify the ->sync member, because the l2c_init_data structures are now 'const' and therefore cannot be modified, which is a good thing. Instead, this patch introduces a separate l2c_init_data instance, called of_l2c310_coherent_data.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
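A sketch of the selection logic, assuming the OF-init path picks between the existing of_l2c310_data and the new of_l2c310_coherent_data instance (the helper name here is illustrative):

```c
#include <linux/of.h>

static const struct l2c_init_data * __init
l2c310_select_data(const struct device_node *np)
{
	/* 'arm,io-coherent' => use the init data whose .outer_cache.sync
	 * callback is left unset, turning outer sync into a no-op. */
	if (of_property_read_bool(np, "arm,io-coherent"))
		return &of_l2c310_coherent_data;
	return &of_l2c310_data;
}
```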
-
- 26 Jun 2014, 18 commits
-
-
Committed by Ralf Baechle

Kconfig doesn't select CRC32, so it's possible to build a Lasat kernel without CONFIG_CRC32, resulting in a build error:

  LD      vmlinux
  arch/mips/built-in.o: In function `lasat_init_board_info':
  (.text+0x22c): undefined reference to `crc32_le'
  arch/mips/built-in.o: In function `lasat_write_eeprom_info':
  (.text+0x7fc): undefined reference to `crc32_le'
  make: *** [vmlinux] Error 1

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

Previously, the lower limit for the MIPS SC initialization loop was set incorrectly, allowing one extra iteration and leading to writes beyond the MSC ioremap'd space. More precisely, the value of 'imp' in the last iteration increased beyond the msc_irqmap_t boundaries, and as a result the 'n' variable was loaded with an incorrect value. This value was used later on to calculate the offset into MSC01_IC_SUP, which led to random crashes like the following one:

  CPU 0 Unable to handle kernel paging request at virtual address e75c0200,
  epc == 8058dba4, ra == 8058db90
  [...]
  Call Trace:
  [<8058dba4>] init_msc_irqs+0x104/0x154
  [<8058b5bc>] arch_init_irq+0xd8/0x154
  [<805897b0>] start_kernel+0x220/0x36c
  Kernel panic - not syncing: Attempted to kill the idle task!

This patch fixes the problem.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: stable@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7118/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
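An illustrative reconstruction of the off-by-one — field names follow msc01_ic.h, but treat this as a sketch, not the driver's exact code:

```c
struct msc_irqmap { int im_irq; int im_type; int im_lvl; };

static void init_msc_irqs_sketch(struct msc_irqmap *imp, int nirq)
{
	struct msc_irqmap *end = imp + nirq;

	/* The old bound allowed one extra pass, so 'imp' walked past the
	 * map and 'n' picked up garbage later used to index MSC01_IC_SUP. */
	for (; imp < end; imp++) {
		int n = imp->im_lvl;	/* now always within the map */

		(void)n;		/* ... program MSC01_IC_SUP + n ... */
	}
}
```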
-
Committed by Markos Chandras

When allocating stack space for BPF memwords, we need to use the appropriate 32- or 64-bit instruction to avoid losing the top 32 bits of the stack pointer.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7135/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
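A hedged sketch of the frame setup, with emit_* names following the JIT's style (assumptions, not verbatim code): on a 64-bit kernel the stack pointer is 64 bits wide, so a 32-bit addiu would sign-extend its result and clobber the upper half.

```c
static void emit_stack_offset(int offset, struct jit_ctx *ctx)
{
	if (config_enabled(CONFIG_64BIT))
		emit_daddiu(r_sp, r_sp, offset, ctx);	/* full 64-bit add */
	else
		emit_addiu(r_sp, r_sp, offset, ctx);	/* fine on MIPS32 */
}
```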
-
Committed by Markos Chandras

When loading a pointer into a register, we need to use the appropriate 32- or 64-bit instruction to preserve the pointer's top 32 bits.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7180/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

The skb->pkt_type field is defined as follows:

  u8 pkt_type:3,
     fclone:2,
     ipvs_property:1,
     peeked:1,
     nf_trace:1;

resulting in the following layout on big-endian systems:

  [pkt_type][fclone][ipvs_property][peeked][nf_trace]
   ^MSB                                          LSB^

As a result, the existing code did not work, because it was trying to match pkt_type == 7, whereas in reality it is 7 << 5 on big-endian systems. This has been fixed in the interpreter in 0dcceabb "net: filter: fix SKF_AD_PKTTYPE extension on big-endian". The fix is to look for 7 << 5 on big-endian systems for the pkt_type field, and shift right by 5 so the packet type ends up in the lower 3 bits of the A register.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7132/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
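A self-contained demo of the layout issue — plain userspace C, compilable and runnable anywhere:

```c
#include <stdio.h>

int main(void)
{
	/* Big-endian bitfield layout: pkt_type lives in the top 3 bits. */
	unsigned char byte = 4 << 5;	/* pkt_type = 4 */

	unsigned wrong = byte & 7;		  /* old mask: reads the wrong bits */
	unsigned right = (byte & (7 << 5)) >> 5;  /* mask high bits, shift down */

	printf("wrong=%u right=%u\n", wrong, right);	/* wrong=0 right=4 */
	return 0;
}
```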
-
Committed by Markos Chandras

Remove the BUG_ON() when the shift immediate is >= 32, to avoid kernel crashes on malicious user input. If the shift immediate is >= 32, we simply load the destination register with 0; since only 32-bit instructions are used by the JIT, this does the correct thing even on MIPS64.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7179/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
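A hedged sketch of the guard (the JIT-internal helper names are assumptions):

```c
static void emit_lsh_imm(unsigned int k, struct jit_ctx *ctx)
{
	if (k >= 32)
		/* every bit is shifted out, so the result is simply 0 */
		emit_load_imm(r_A, 0, ctx);
	else
		/* sll's immediate field can only encode shifts of 0-31 */
		emit_sll(r_A, r_A, k, ctx);
}
```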
-
Committed by Markos Chandras

Previously, update_on_xread() only set the reset flag if SEEN_X had not been set already. However, SEEN_X is used to indicate that X is used as a destination or source register, so there are cases where X is only used as a source register, and we really need to make sure that it has been initialized in time. As a result, drop this function and always set X to zero if it's used in any of the opcodes.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7133/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

is_range() was meant to check whether a number is within the s16 range or not. However, the return values were the exact opposite of what its consumers expected. We fix that by inverting the logic in the function so that it returns true for values that fit in the s16 range and false otherwise.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reported-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7131/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
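A self-contained rendering of the corrected semantics — true iff the value fits in a signed 16-bit immediate (a sketch of the intent, not the kernel's exact helper):

```c
#include <stdbool.h>
#include <stdint.h>

static bool is_range16(int32_t x)
{
	return x >= INT16_MIN && x <= INT16_MAX;
}
```

Callers can then read naturally: `if (is_range16(imm)) emit_short_form(); else emit_long_form();`.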
-
Committed by Markos Chandras

We should avoid spamming the logs during normal execution of the BPF JIT.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Suggested-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7129/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

If the VLAN_TAG_PRESENT bit is set, return 1 as expected by classic BPF. Otherwise return 0.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7128/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
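In plain C, the value the JIT must materialize in A is just the normalized flag bit; the emitted instructions are the MIPS equivalent of this one-liner:

```c
#include <linux/if_vlan.h>
#include <linux/skbuff.h>

static u32 bpf_vlan_tag_present(const struct sk_buff *skb)
{
	return !!(skb->vlan_tci & VLAN_TAG_PRESENT);	/* exactly 1 or 0 */
}
```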
-
Committed by Markos Chandras

Using VLAN_VID_MASK is not correct for extracting the vlan tag. Use ~VLAN_PRESENT_MASK instead, and make sure it's a u16 so the top 16 bits are removed. This ensures that the emit_andi() code will not treat it as a large 32-bit unsigned value.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7127/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

The sltiu and sltu instructions set the scratch register to 1 if A <= X|K, so fix the emitted conditional branch to check for scratch != zero rather than scratch >= zero, which would complicate the resulting branch logic, given that MIPS has no BGT or BGE instructions to compare general-purpose registers directly.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7126/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
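A hedged sketch of a guard for an A >= X condition (emit_* names are assumptions following the JIT's conventions): sltu only ever writes 0 or 1 to the scratch register, so the branch must test "!= 0", never ">= 0", which would be trivially true for both values.

```c
static void emit_jge_x(int b_off, struct jit_ctx *ctx)
{
	emit_sltu(r_s0, r_A, r_X, ctx);			    /* r_s0 = (A < X) */
	emit_bcond(MIPS_COND_NE, r_s0, r_zero, b_off, ctx); /* taken if A < X */
	emit_nop(ctx);					    /* branch delay slot */
}
```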
-
Committed by Markos Chandras

SKF_AD_PKTTYPE uses the skb pointer, so make sure the skb flag is set in the context flags so the pointer will be initialized in time.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7125/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

VLAN_VID_MASK and VLAN_TAG_PRESENT are immediates, so using 'and', which expects three registers, produces wrong results. Fix this by using the 'andi' instruction.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7124/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
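A hedged sketch of the correction (helper names assumed): the R-type 'and' needs three register operands, so feeding it an immediate quietly misencodes the constant as a register number; 'andi' is the I-type form that actually takes an immediate.

```c
static void emit_vlan_mask(struct jit_ctx *ctx)
{
	/* wrong: emit_and(r_A, r_s0, VLAN_TAG_PRESENT, ctx); */
	emit_andi(r_A, r_s0, VLAN_TAG_PRESENT, ctx);	/* immediate operand */
}
```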
-
Committed by Markos Chandras

Previously, the negative offset was not checked, leading to failures from trying to load data beyond the skb struct boundaries. Until we have proper asm helpers in place, it's best if we return ENOTSUPP if K is negative when trying to JIT the filter, or 0 at runtime if we do an indirect load where the value of X is unknown at build time.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7123/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Markos Chandras

Reading from the HI register to get the division result is wrong: the quotient is placed in the LO register.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7122/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
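A hedged sketch of the corrected sequence (helper names assumed): MIPS division leaves the quotient in LO and the remainder in HI.

```c
static void emit_div_result(struct jit_ctx *ctx)
{
	emit_div(r_A, r_s0, ctx);	/* LO = A / k, HI = A % k */
	emit_mflo(r_A, ctx);		/* correct: quotient from LO */
	/* was: emit_mfhi(r_A, ctx);	   wrong: that reads the remainder */
}
```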
-
Committed by Markos Chandras

Commit d6b3314b ("MIPS: uasm: Add lh uam instruction") added the 'lh' micro-assembler instruction but used the 'lw' opcode for it. Fix it by using the correct 'lh' opcode.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7121/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
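The fix is a one-field change in uasm's opcode table — a sketch following the table layout used in arch/mips/mm/uasm-mips.c (an excerpt, not a standalone unit):

```c
{ insn_lh, M(lh_op, 0, 0, 0, 0, 0), RS | RT | SIMM },
/* was: { insn_lh, M(lw_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, */
```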
-
Committed by Markos Chandras

It will be used later on by the BPF JIT.

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/7120/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-