- 08 March 2017, 11 commits
-
-
By Kees Cook

Currently, pstore_mkfile() performs a memcpy() of the record contents, so they can live anywhere. However, this is needlessly wasteful. In preparation for pstore_mkfile() keeping the record contents, always allocate a buffer for the contents.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

Similar to the pstore_info read() callback, there were too many arguments here. This switches to the new struct pstore_record pointer instead, and adds "reason" and "part" to the record structure as well.

Signed-off-by: Kees Cook <keescook@chromium.org>
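As a rough sketch of where this series is heading (field names approximate the upstream struct pstore_record; the callback shown is illustrative, not the exact prototype from this patch), the long per-callback argument lists collapse into a single record structure:

```c
#include <linux/types.h>
#include <linux/time.h>

struct pstore_info;	/* backend descriptor, declared in <linux/pstore.h> */

/* Sketch only: fields approximate the upstream struct pstore_record. */
struct pstore_record_sketch {
	struct pstore_info	*psi;		/* backend that owns the record */
	int			type;		/* PSTORE_TYPE_DMESG, _CONSOLE, ... */
	u64			id;		/* backend-specific record id */
	struct timespec		time;		/* timestamp of the record */
	char			*buf;		/* record contents */
	ssize_t			size;		/* size of buf */
	bool			compressed;	/* contents are compressed */
	int			reason;		/* kmsg_dump_reason for dmesg dumps */
	unsigned int		part;		/* part of a multi-part dump */
};

/*
 * Before: each callback took half a dozen positional arguments.
 * After: one pointer, so adding a field no longer touches every backend.
 */
static int example_backend_write(struct pstore_record_sketch *record)
{
	/* store record->buf / record->size, keyed by record->type and id */
	return 0;
}
```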
-
By Kees Cook

The argument list for the pstore_read() interface is unwieldy. This changes it to pass the new struct pstore_record instead. The erst backend was already doing something similar internally.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

Instead of the long list of arguments, just pass the new record struct.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

This moves the record decompression logic out to a separate function to avoid the deep indentation.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

The read/mkfile pair pass the same arguments and should be cleared between calls. Move them into a structure and wipe it after every loop iteration.

Signed-off-by: Kees Cook <keescook@chromium.org>
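A minimal sketch of the resulting loop (helper names are placeholders, not the real prototypes; the record structure is the one sketched above):

```c
#include <linux/slab.h>
#include <linux/string.h>

/* Sketch of the read loop; pstore_read_record() and the pstore_mkfile()
 * argument list are hypothetical stand-ins. */
static void pstore_get_records_sketch(struct pstore_info *psi)
{
	struct pstore_record_sketch record;

	memset(&record, 0, sizeof(record));
	record.psi = psi;

	while ((record.size = pstore_read_record(&record)) > 0) {
		pstore_mkfile(/* root, */ &record);

		/* pstore_mkfile() still copies the contents at this point
		 * in the series, so the read buffer is released here. */
		kfree(record.buf);

		/* Wipe the record so nothing from this iteration can leak
		 * into the next one, then restore the persistent field. */
		memset(&record, 0, sizeof(record));
		record.psi = psi;
	}
}
```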
-
By Kees Cook

Uncommon errors are better reported to dmesg, so developers can more easily figure out why pstore is unhappy with a backend attempting to register.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

Technically, it might be possible for struct pstore_info to go out of scope after the module_put(), so report the backend name first.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

When built as a module and running with update_ms >= 0, pstore will Oops during module unload since the work timer is still running. This makes sure the worker is stopped before unloading.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
-
By Kees Cook

The per-prz spinlock should use the dynamic initializer so that lockdep can correctly track it. Without this, under lockdep, we get a warning at boot that the lock is in non-static memory.

Fixes: 10970449 ("pstore: Make spinlock per zone instead of global")
Fixes: 76d5692a ("pstore: Correctly initialize spinlock and flags")
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
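The shape of the fix, as a sketch (shown with a raw spinlock, matching the ramoops zone code as best I recall; only the initializer choice matters here):

```c
#include <linux/slab.h>
#include <linux/spinlock.h>

struct persistent_ram_zone {
	raw_spinlock_t buffer_lock;
	/* ... */
};

static struct persistent_ram_zone *prz_alloc_sketch(void)
{
	struct persistent_ram_zone *prz = kzalloc(sizeof(*prz), GFP_KERNEL);

	if (!prz)
		return NULL;

	/*
	 * Not this: copying a static-style initializer into heap memory
	 * leaves lockdep without a lock class and triggers the boot warning.
	 *
	 *	prz->buffer_lock = __RAW_SPIN_LOCK_UNLOCKED(prz->buffer_lock);
	 *
	 * The dynamic initializer registers the lock with lockdep properly:
	 */
	raw_spin_lock_init(&prz->buffer_lock);
	return prz;
}
```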
-
By Bhumika Goyal

References to pstore_zbackend structures are stored in the pointer zbackend, of type struct pstore_zbackend *. The pointer zbackend can be made const, since it is only dereferenced. After that change, the pstore_zbackend structures it points to can be made const too.

File size before:
   text    data     bss     dec     hex filename
   4817     541     172    5530    159a fs/pstore/platform.o

File size after:
   text    data     bss     dec     hex filename
   4865     477     172    5514    158a fs/pstore/platform.o

Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
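In outline, the constification looks like this (fields abbreviated for the sketch); read-only descriptors move from .data into .rodata, which is why text grows and data shrinks in the size report above:

```c
struct pstore_zbackend {
	/* compress/decompress hooks elided for the sketch */
	const char *name;
};

/* Before: mutable objects and a mutable pointer.
 *
 *	static struct pstore_zbackend backend_zlib = { .name = "zlib" };
 *	static struct pstore_zbackend *zbackend = &backend_zlib;
 */

/* After: both the objects and the pointer target are const, so the
 * descriptors land in read-only memory. */
static const struct pstore_zbackend backend_zlib = {
	.name = "zlib",
};

static const struct pstore_zbackend *zbackend = &backend_zlib;
```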
-
- 25 February 2017, 1 commit
-
-
By Sven Schmidt

Update fs/pstore and fs/squashfs to use the updated functions from the new LZ4 module.

Link: http://lkml.kernel.org/r/1486321748-19085-5-git-send-email-4sschmid@informatik.uni-hamburg.de
Signed-off-by: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Cc: Bongkyu Kim <bongkyu.kim@lge.com>
Cc: Rui Salvaterra <rsalvaterra@gmail.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: David S. Miller <davem@davemloft.net>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 February 2017, 2 commits
-
-
By Kees Cook

Instead of needing additional checks in callers for unallocated przs, perform the check in the walker, which gives us a more universal way to handle the situation.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

The ram backend wasn't always initializing its spinlock correctly. Since the lock lived in kzalloc'd memory, though, this was harmless on architectures that initialize unlocked spinlocks to 0 (at least x86 and ARM). This also fixes a possibly ignored flag setting. When running under CONFIG_DEBUG_SPINLOCK, the following Oops was visible:

[ 0.760836] persistent_ram: found existing buffer, size 29988, start 29988
[ 0.765112] persistent_ram: found existing buffer, size 30105, start 30105
[ 0.769435] persistent_ram: found existing buffer, size 118542, start 118542
[ 0.785960] persistent_ram: found existing buffer, size 0, start 0
[ 0.786098] persistent_ram: found existing buffer, size 0, start 0
[ 0.786131] pstore: using zlib compression
[ 0.790716] BUG: spinlock bad magic on CPU#0, swapper/0/1
[ 0.790729] lock: 0xffffffc0d1ca9bb0, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[ 0.790742] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.10.0-rc2+ #913
[ 0.790747] Hardware name: Google Kevin (DT)
[ 0.790750] Call trace:
[ 0.790768] [<ffffff900808ae88>] dump_backtrace+0x0/0x2bc
[ 0.790780] [<ffffff900808b164>] show_stack+0x20/0x28
[ 0.790794] [<ffffff9008460ee0>] dump_stack+0xa4/0xcc
[ 0.790809] [<ffffff9008113cfc>] spin_dump+0xe0/0xf0
[ 0.790821] [<ffffff9008113d3c>] spin_bug+0x30/0x3c
[ 0.790834] [<ffffff9008113e28>] do_raw_spin_lock+0x50/0x1b8
[ 0.790846] [<ffffff9008a2d2ec>] _raw_spin_lock_irqsave+0x54/0x6c
[ 0.790862] [<ffffff90083ac3b4>] buffer_size_add+0x48/0xcc
[ 0.790875] [<ffffff90083acb34>] persistent_ram_write+0x60/0x11c
[ 0.790888] [<ffffff90083aab1c>] ramoops_pstore_write_buf+0xd4/0x2a4
[ 0.790900] [<ffffff90083a9d3c>] pstore_console_write+0xf0/0x134
[ 0.790912] [<ffffff900811c304>] console_unlock+0x48c/0x5e8
[ 0.790923] [<ffffff900811da18>] register_console+0x3b0/0x4d4
[ 0.790935] [<ffffff90083aa7d0>] pstore_register+0x1a8/0x234
[ 0.790947] [<ffffff90083ac250>] ramoops_probe+0x6b8/0x7d4
[ 0.790961] [<ffffff90085ca548>] platform_drv_probe+0x7c/0xd0
[ 0.790972] [<ffffff90085c76ac>] driver_probe_device+0x1b4/0x3bc
[ 0.790982] [<ffffff90085c7ac8>] __device_attach_driver+0xc8/0xf4
[ 0.790996] [<ffffff90085c4bfc>] bus_for_each_drv+0xb4/0xe4
[ 0.791006] [<ffffff90085c7414>] __device_attach+0xd0/0x158
[ 0.791016] [<ffffff90085c7b18>] device_initial_probe+0x24/0x30
[ 0.791026] [<ffffff90085c648c>] bus_probe_device+0x50/0xe4
[ 0.791038] [<ffffff90085c35b8>] device_add+0x3a4/0x76c
[ 0.791051] [<ffffff90087d0e84>] of_device_add+0x74/0x84
[ 0.791062] [<ffffff90087d19b8>] of_platform_device_create_pdata+0xc0/0x100
[ 0.791073] [<ffffff90087d1a2c>] of_platform_device_create+0x34/0x40
[ 0.791086] [<ffffff900903c910>] of_platform_default_populate_init+0x58/0x78
[ 0.791097] [<ffffff90080831fc>] do_one_initcall+0x88/0x160
[ 0.791109] [<ffffff90090010ac>] kernel_init_freeable+0x264/0x31c
[ 0.791123] [<ffffff9008a25bd0>] kernel_init+0x18/0x11c
[ 0.791133] [<ffffff9008082ec0>] ret_from_fork+0x10/0x50
[ 0.793717] console [pstore-1] enabled
[ 0.797845] pstore: Registered ramoops as persistent store backend
[ 0.804647] ramoops: attached 0x100000@0xf7edc000, ecc: 0/0

Fixes: 663deb47 ("pstore: Allow prz to control need for locking")
Fixes: 10970449 ("pstore: Make spinlock per zone instead of global")
Reported-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 10 February 2017, 1 commit
-
-
By Brian Norris

We'll Oops in ramoops_get_next_prz() if the platform didn't ask for any ftrace zones (i.e., cxt->fprzs will be NULL). Let's just skip the entire FTRACE section if there's no 'fprzs'. The regression was seen on a coreboot/depthcharge-based Chromebook.

Fixes: 2fbea82b ("pstore: Merge per-CPU ftrace records into one")
Cc: Joel Fernandes <joelaf@google.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 16 November 2016, 11 commits
-
-
By Kees Cook

This adds a check for NULL platform data, which should only be possible if a driver incorrectly sets up a probe request without also having defined the platform_data structure. This is based on a patch from Geliang Tang.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Namhyung Kim

Maybe I'm missing something, but I don't know why it needs to copy the input buffer to psinfo->buf and then write. Instead, we can write the input buffer directly. The only implementation that supports console messages (i.e. ramoops) already does this for ftrace messages. For the upcoming virtio backend driver, psinfo->buf needs to be protected from being overwritten by console messages. If the console path could use the ->write_buf method instead of ->write, the problem would be solved easily.

Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Namhyung Kim

When update_ms is set, pstore_get_records() will be called when there's a new entry. But unlink can be called at the same time and might contend with the open-read-close loop. Depending on the implementation of the platform driver, this may or may not be safe. But I think it'd be better to protect against the race in the first place.

Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
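A minimal sketch of the idea, with a hypothetical lock name (the real patch works with pstore's existing locking): both paths take the same mutex, so an erase can no longer run in the middle of the read loop triggered by the update_ms timer.

```c
#include <linux/mutex.h>
#include <linux/pstore.h>

/* Hypothetical lock name for the sketch. */
static DEFINE_MUTEX(pstore_record_lock);

/* Walking the backend: open -> read each record -> mkfile -> close. */
static void pstore_scan_backend_locked(struct pstore_info *psi)
{
	mutex_lock(&pstore_record_lock);
	/* ... the open/read/close loop runs here ... */
	mutex_unlock(&pstore_record_lock);
}

/* Unlink takes the same lock, so psi->erase() cannot interleave with
 * the read loop above. */
static void pstore_unlink_record_locked(struct pstore_info *psi)
{
	mutex_lock(&pstore_record_lock);
	/* ... call psi->erase() for the record backing this inode ... */
	mutex_unlock(&pstore_record_lock);
}
```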
-
By Joel Fernandes

Currently, pstore doesn't set up any filters for function tracing. This has the associated overhead and may not be useful for users looking to trace a specific set of functions. ftrace's regular function-trace filtering is done by writing to tracing/set_ftrace_filter; however, this is not available if not requested. In order to be able to use this feature, the support for requesting global filtering, introduced earlier in the series, should be used before registering the ftrace ops. Here we do the same.

Signed-off-by: Joel Fernandes <joelaf@google.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

Since "przs" (persistent ram zones) is now a general name in the code, rename the Oops-dump zones from przs to dprzs. Based on a patch from Nobuhiro Iwamatsu.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Kees Cook

When setting ramoops record sizes, sometimes it's not clear which parameters contributed to an allocation failure. This adds a per-zone name and expands the failure reports.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Joel Fernandes

Up until this patch, each of the per-CPU ftrace buffers appeared as a separate ftrace-ramoops-N file. With this patch we merge all the zones into one and populate a single ftrace-ramoops-0 file.

Signed-off-by: Joel Fernandes <joelaf@google.com>
[kees: clarified variable names, added -ENOMEM handling]
Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Joel Fernandes

In preparation for merging the per-CPU buffers into one buffer when we retrieve the pstore ftrace data, store the timestamp as a counter in the ftrace pstore record. We also store the CPU number if !PSTORE_CPU_IN_IP; in this case we shift the counter and may lose some ordering there, but we preserve the same record size. The timestamp counter is also racy; not doing any locking or synchronization here has the benefit of lower overhead. Since we don't care much about exact ordering of function traces across CPUs, we don't synchronize and may lose some counter updates, but I'm OK with that. Using trace_clock() results in much lower performance, so avoid it: we don't need accurate timestamps, only a rough ordering to perform the merge.

Signed-off-by: Joel Fernandes <joelaf@google.com>
[kees: updated commit message, added comments]
Signed-off-by: Kees Cook <keescook@chromium.org>
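Roughly, the encoding works like this (a sketch: the bit width and helper names are illustrative, not the exact upstream macros):

```c
#include <linux/atomic.h>
#include <linux/types.h>

/* Global, deliberately unsynchronized counter: cheap, and only a rough
 * ordering is needed to merge the per-CPU buffers later. */
static atomic_t pstore_ftrace_seq = ATOMIC_INIT(0);

#define FTRACE_CPU_BITS	8	/* sketch value only */

static u64 pstore_ftrace_encode_sketch(unsigned int cpu)
{
	u64 seq = (u64)atomic_add_return(1, &pstore_ftrace_seq);

	/* Shift the counter up and stash the CPU in the low bits; some
	 * counter range (and thus some ordering) is traded away, but the
	 * record does not grow. */
	return (seq << FTRACE_CPU_BITS) |
	       (cpu & ((1 << FTRACE_CPU_BITS) - 1));
}

static unsigned int pstore_ftrace_decode_cpu_sketch(u64 ts)
{
	return ts & ((1 << FTRACE_CPU_BITS) - 1);
}
```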
-
By Joel Fernandes

If the RAMOOPS_FLAG_FTRACE_PER_CPU flag is passed in the ramoops pdata, split the ftrace space into multiple zones depending on the number of CPUs. This speeds up function tracing by about 280% in my tests, as we avoid the locking. The trade-off is less space available per CPU. Let the ramoops user decide which option they want based on the pdata flag.

Signed-off-by: Joel Fernandes <joelaf@google.com>
[kees: added max_ftrace_cnt to track size, added DT logic and docs]
Signed-off-by: Kees Cook <keescook@chromium.org>
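The space split, in outline (a sketch that ignores rounding and ECC overhead; the fallback flag value below is illustrative):

```c
#include <linux/cpumask.h>

#ifndef RAMOOPS_FLAG_FTRACE_PER_CPU
#define RAMOOPS_FLAG_FTRACE_PER_CPU	(1 << 0)	/* sketch value only */
#endif

/*
 * With RAMOOPS_FLAG_FTRACE_PER_CPU set, carve the single ftrace region
 * into one zone per possible CPU; each CPU then logs without taking a
 * shared lock, at the cost of less space per CPU.
 */
static size_t ramoops_ftrace_zone_size_sketch(size_t ftrace_size,
					      unsigned long flags)
{
	if (flags & RAMOOPS_FLAG_FTRACE_PER_CPU)
		return ftrace_size / nr_cpu_ids;	/* no rounding shown */

	return ftrace_size;				/* one shared zone */
}
```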
-
By Kees Cook

Currently ramoops_init_przs() is hard-wired for the panic dump zone array only. In preparation for the ftrace zone array (one zone per CPU) and the pmsg zone array, make the function generic enough to handle those cases. Heavily based on similar work from Joel Fernandes.

Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Joel Fernandes

In preparation for not locking at all for certain buffers, depending on whether there's contention, make locking optional depending on the initialization of the prz.

Signed-off-by: Joel Fernandes <joelaf@google.com>
[kees: moved locking flag into prz instead of via caller arguments]
Signed-off-by: Kees Cook <keescook@chromium.org>
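A sketch of what "optional locking" means in the write path (the flag name follows the ramoops code as best I recall; treat it as approximate):

```c
#include <linux/spinlock.h>
#include <linux/types.h>

#define PRZ_FLAG_NO_LOCK	(1 << 0)	/* approximate flag name */

struct persistent_ram_zone_sketch {
	raw_spinlock_t	buffer_lock;
	u32		flags;		/* set once at prz init time */
	/* ... */
};

static void prz_write_sketch(struct persistent_ram_zone_sketch *prz
			     /* , const void *s, unsigned int count */)
{
	unsigned long irq_flags = 0;

	if (!(prz->flags & PRZ_FLAG_NO_LOCK))
		raw_spin_lock_irqsave(&prz->buffer_lock, irq_flags);

	/* ... advance the buffer start/size and copy the data ... */

	if (!(prz->flags & PRZ_FLAG_NO_LOCK))
		raw_spin_unlock_irqrestore(&prz->buffer_lock, irq_flags);
}
```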
-
- 12 November 2016, 2 commits
-
-
By Joel Fernandes

PMSG now uses ramoops_pstore_write_buf_user() instead of ...write_buf(). Print a ratelimited warning if the latter gets accidentally called.

Signed-off-by: Joel Fernandes <joelaf@google.com>
[kees: adjusted commit log and added -EINVAL return]
Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Joel Fernandes

Currently pstore has a global spinlock for all zones. Since the zones are independent and modify different areas of memory, there's no need for a global lock, so use a per-zone lock as introduced here. Also, when ramoops's ftrace use-case gains the FTRACE_PER_CPU flag introduced later, which splits the ftrace memory area into a single zone per CPU, it will eliminate the need for locking. In preparation for this, make the locking optional.

Signed-off-by: Joel Fernandes <joelaf@google.com>
[kees: updated commit message]
Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 09 November 2016, 1 commit
-
-
By Li Pengcheng

Without a return after the pr_err(), dumps will collide when two threads call pstore_dump() at the same time.

Signed-off-by: Liu Hailong <liuhailong5@huawei.com>
Signed-off-by: Li Pengcheng <lipengcheng8@huawei.com>
Signed-off-by: Li Zhong <lizhong11@hisilicon.com>
[kees: improved commit message]
Signed-off-by: Kees Cook <keescook@chromium.org>
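The shape of the fix, as a sketch (surrounding dump logic elided; psinfo is pstore's global backend pointer, and the message text is approximate): when the dump path cannot block and the trylock fails, report it and bail out instead of falling through into the shared buffer.

```c
/* Sketch of the fixed control flow in pstore_dump(). */
static void pstore_dump_sketch(enum kmsg_dump_reason reason)
{
	unsigned long flags = 0;

	if (pstore_cannot_block_path(reason)) {
		if (!spin_trylock_irqsave(&psinfo->buf_lock, flags)) {
			pr_err("dump skipped in %s path: concurrent dump in progress\n",
			       in_nmi() ? "NMI" : "error");
			/* The missing return: never fall through into
			 * psinfo->buf while another CPU is dumping. */
			return;
		}
	} else {
		spin_lock_irqsave(&psinfo->buf_lock, flags);
	}

	/* ... format the dump into psinfo->buf and call psinfo->write() ... */

	spin_unlock_irqrestore(&psinfo->buf_lock, flags);
}
```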
-
- 24 October 2016, 1 commit
-
-
By Mauro Carvalho Chehab

The previous patch renamed several files that are cross-referenced throughout the kernel documentation. Adjust the links to point to the right places.

Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
-
- 28 September 2016, 1 commit
-
-
By Deepa Dinamani

The CURRENT_TIME macro is not appropriate for filesystems, as it doesn't use the right granularity for filesystem timestamps. Use current_time() instead. CURRENT_TIME is also not y2038-safe. This is also in preparation for the patch that transitions vfs timestamps to 64-bit time and hence makes them y2038-safe. As part of that effort, current_time() will be extended to do range checks, so it is necessary for all filesystem timestamps to use current_time(). Also, current_time() will be transitioned along with the vfs to be y2038-safe. Note that whenever a single call to current_time() is used to change timestamps in different inodes, it is because they share the same time granularity.

Signed-off-by: Deepa Dinamani <deepa.kernel@gmail.com>
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Felipe Balbi <balbi@kernel.org>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Acked-by: David Sterba <dsterba@suse.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
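The substitution itself is mechanical; a sketch of a call site (the helper name is hypothetical):

```c
#include <linux/fs.h>

static void pstore_touch_inode_sketch(struct inode *inode)
{
	/* Before: CURRENT_TIME had no per-filesystem granularity and is
	 * not y2038-safe:
	 *
	 *	inode->i_mtime = inode->i_ctime = CURRENT_TIME;
	 */

	/* After: current_time() uses the superblock's timestamp
	 * granularity and is part of the y2038-safe vfs timestamp work. */
	inode->i_mtime = inode->i_ctime = current_time(inode);
}
```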
-
- 09 September 2016, 8 commits
-
-
By Geliang Tang

If allocation of cxt->pstore.buf fails, there is no need to initialize cxt->pstore.buf_lock. So this patch moves the spin_lock_init() after the error check.

Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Andrew Bresticker

The ramoops buffer may be mapped as either I/O memory or uncached memory. On ARM64, this results in a device-type (strongly-ordered) mapping. Since unaligned accesses to device-type memory will generate an alignment fault (regardless of whether strict alignment checking is enabled), it is not safe to use memcpy(). memcpy_fromio() is guaranteed to only use aligned accesses, so use that instead.

Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Puneet Kumar <puneetster@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
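A sketch of the change (function and buffer names hypothetical; only the copy primitive matters):

```c
#include <linux/io.h>
#include <linux/string.h>

static void prz_copy_old_log_sketch(void *dst, const void __iomem *src,
				    size_t count)
{
	/*
	 * memcpy() may issue unaligned accesses, which fault on ARM64
	 * device-type (strongly-ordered) mappings of the ramoops region:
	 *
	 *	memcpy(dst, (const void __force *)src, count);
	 *
	 * memcpy_fromio() is guaranteed to use aligned accesses:
	 */
	memcpy_fromio(dst, src, count);
}
```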
-
By Furquan Shaikh

persistent_ram_update() uses vmap() or iomap() based on whether the buffer is in a memory region or a reserved region. However, both map it as non-cacheable memory. For armv8 specifically, non-cacheable mappings use a memory type that has to be accessed with alignment matching the request size, and memcpy() doesn't guarantee that.

Signed-off-by: Furquan Shaikh <furquan@google.com>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Aaron Durbin <adurbin@chromium.org>
Reviewed-by: Olof Johansson <olofj@chromium.org>
Tested-by: Furquan Shaikh <furquan@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
-
By Mark Salyzyn

Removing a bounce-buffer copy operation in the pmsg driver path is always better. We also gain in overall performance by not requesting a vmalloc() on every write, since that can cause precious RT tasks, such as user-facing media operations, to stall while memory is being reclaimed. This adds a write_buf_user callback to the pstore functions, a fallback platform write_buf_user that uses the small buffer that is part of the instance, and a ramoops write_buf_user implementation that only supports PSTORE_TYPE_PMSG.

Signed-off-by: Mark Salyzyn <salyzyn@android.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
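In outline (a sketch with a reduced argument list; the zone-level copy-from-user helper name is approximate, standing in for what the commit describes):

```c
#include <linux/errno.h>
#include <linux/pstore.h>
#include <linux/pstore_ram.h>
#include <linux/uaccess.h>

/* Sketch of a ->write_buf_user() implementation for ramoops that only
 * handles PMSG; the real callback takes more arguments. */
static int ramoops_write_buf_user_sketch(enum pstore_type_id type,
					 const char __user *buf, size_t size,
					 struct persistent_ram_zone *mprz)
{
	if (type != PSTORE_TYPE_PMSG)
		return -EINVAL;

	/* Copy straight from user space into the persistent zone:
	 * no vmalloc'd bounce buffer, no extra memcpy(). */
	return persistent_ram_write_user(mprz, buf, size);
}
```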
-
By Namhyung Kim

ramoops can be configured to enable each pstore type by setting its size. In that case, it'd be better not to register disabled types in the first place.

Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
-
By Namhyung Kim

This patch adds new PSTORE_FLAGS for each pstore type so that they can be enabled separately. This is a preparation for the ongoing virtio-pstore work to support those types flexibly. PSTORE_FLAGS_FRAGILE is changed to PSTORE_FLAGS_DMESG to preserve the original behavior.

Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Len Brown <lenb@kernel.org>
Cc: Matt Fleming <matt@codeblueprint.co.uk>
Cc: linux-acpi@vger.kernel.org
Cc: linux-efi@vger.kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
[kees: retained "FRAGILE" for now to make merges easier]
Signed-off-by: Kees Cook <keescook@chromium.org>
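Schematically (a sketch: the flag values are illustrative, and the per-type register helpers are named as in fs/pstore to the best of my recollection), registration consults a per-type bitmask instead of assuming every backend wants every type:

```c
#include <linux/pstore.h>

/* Flag names follow the commit; the exact values here are illustrative. */
#define PSTORE_FLAGS_DMESG	(1 << 0)	/* was PSTORE_FLAGS_FRAGILE */
#define PSTORE_FLAGS_CONSOLE	(1 << 1)
#define PSTORE_FLAGS_FTRACE	(1 << 2)
#define PSTORE_FLAGS_PMSG	(1 << 3)

static void pstore_register_types_sketch(struct pstore_info *psi)
{
	/* Only hook up the front ends the backend actually asked for. */
	if (psi->flags & PSTORE_FLAGS_DMESG)
		pstore_register_kmsg();
	if (psi->flags & PSTORE_FLAGS_CONSOLE)
		pstore_register_console();
	if (psi->flags & PSTORE_FLAGS_FTRACE)
		pstore_register_ftrace();
	if (psi->flags & PSTORE_FLAGS_PMSG)
		pstore_register_pmsg();
}
```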
-
By Sebastian Andrzej Siewior

I have an FPGA here behind PCIe which exports SRAM that I use for pstore. Now it seems that the FPGA no longer supports cmpxchg-based updates; it writes back 0xff…ff and returns the same. This leads to a crash during crash handling, rendering pstore useless. Since I doubt there is much benefit from using cmpxchg() here, drop the atomic access and use the spinlock-based version.

Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Rabin Vincent <rabinv@axis.com>
Tested-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
[kees: remove "_locked" suffix since it's the only option now]
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org
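The switch, roughly (a sketch of buffer_size_add() with abbreviated structures; at this point in the series the lock is still a single global one — the per-zone lock arrives in a later commit shown above):

```c
#include <linux/atomic.h>
#include <linux/spinlock.h>

/* Minimal sketch of the relevant ramoops structures (fields abbreviated). */
struct persistent_ram_buffer_sketch {
	atomic_t size;		/* live bytes in the data area */
};

struct persistent_ram_zone_sketch {
	struct persistent_ram_buffer_sketch *buffer;
	size_t buffer_size;
};

static DEFINE_RAW_SPINLOCK(buffer_lock);

/*
 * Old scheme: a cmpxchg() loop on buffer->size, which requires the backing
 * memory to support atomic compare-exchange -- the PCIe-attached FPGA SRAM
 * here does not. New scheme: a plain read-modify-write under the spinlock.
 */
static void buffer_size_add_sketch(struct persistent_ram_zone_sketch *prz,
				   size_t a)
{
	unsigned long flags;
	size_t old, new;

	raw_spin_lock_irqsave(&buffer_lock, flags);

	old = atomic_read(&prz->buffer->size);
	new = old + a;
	if (new > prz->buffer_size)
		new = prz->buffer_size;
	atomic_set(&prz->buffer->size, new);

	raw_spin_unlock_irqrestore(&buffer_lock, flags);
}
```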
-
By Sebastian Andrzej Siewior

A basic rmmod ramoops segfaults. Let's see why. Commit 34f0ec82 ("pstore: Correct the max_dump_cnt clearing of ramoops") sets ->max_dump_cnt to zero before looping over ->przs, but we didn't use it before that either. And since commit ee1d2674 ("pstore: add pstore unregister") we free that memory on rmmod. But even then, we looped until a NULL pointer or an ERR value, and I don't see where it is ensured that the last member is NULL. Let's try this instead: simplify error recovery and freeing. Clean up in the error case where resources were allocated, and in the free path rely on ->max_dump_cnt.

Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org # 4.4.x-
-
- 06 August 2016, 1 commit
-
-
By Hiraku Toyooka

persistent_ram_zone (=prz) structures are allocated by persistent_ram_new(), which includes vmap() or ioremap(), but they are currently freed by kfree(). Use persistent_ram_free() to correct this asymmetric usage.

Signed-off-by: Hiraku Toyooka <hiraku.toyooka.gu@hitachi.com>
Signed-off-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.kw@hitachi.com>
Cc: Mark Salyzyn <salyzyn@android.com>
Cc: Seiji Aguchi <seiji.aguchi.tr@hitachi.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
-