- 09 April 2015, 1 commit
-
-
Committed by Ben Dooks
The dw_mmc driver changes to make the IO accessors endian agnostic did not take into account that the FIFO accesses do not need to be swapped. To fix this, add a mmci_fifo_read/write wrapper to allow these to be passed through the IO without being swapped. Since these are now specific functions, it is easier to just store the pointer to the FIFO registers in the host block instead of the offset to them. So change host->data_offset to host->fifo_reg (which also means we catch all the places this is read or written).
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
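A minimal sketch of what such non-swapping FIFO accessors can look like, assuming the standard __raw_readl()/__raw_writel() MMIO helpers; the wrapper names and the push-data helper below are illustrative rather than copied from the driver:

	/* Non-swapping FIFO accessors: __raw_* helpers do native-endian MMIO,
	 * so data streamed through the FIFO is never byte-swapped. */
	#define mci_fifo_readl(addr)		__raw_readl(addr)
	#define mci_fifo_writel(value, addr)	__raw_writel((value), (addr))

	/* host->fifo_reg now holds the FIFO register address directly
	 * (formerly computed from host->regs + host->data_offset). */
	static void dw_mci_push_data32(struct dw_mci *host, void *buf, int cnt)
	{
		u32 *pdata = buf;

		while (cnt >= sizeof(u32)) {
			mci_fifo_writel(*pdata++, host->fifo_reg);
			cnt -= sizeof(u32);
		}
	}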
-
- 08 April 2015, 1 commit
-
-
Committed by Ulf Hansson
Everybody expects the error field in struct mmc_command|data to be an int, but it's actually an unsigned int. Let's convert it into an int to meet the expectations.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 31 March 2015, 1 commit
-
-
Committed by Doug Anderson
The Designware databook's description of the "Voltage Switch Normal Scenario" instructs us to set a timer and fail the voltage change if we don't see the voltage change interrupt within 2ms. Let's implement that. Without this, I have often been able to reproduce a hang while trying to send CMD11 on an rk3288-based board while constantly ejecting and inserting UHS cards.
Signed-off-by: Doug Anderson <dianders@chromium.org>
Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
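A hedged sketch of such a safety timer, using the timer API of that kernel generation; the field and function names (cmd11_timer, dw_mci_cmd11_timeout) and the exact error handling are assumptions for illustration:

	/* If the voltage-switch interrupt never arrives, fail the request
	 * instead of hanging forever. */
	static void dw_mci_cmd11_timeout(unsigned long arg)
	{
		struct dw_mci *host = (struct dw_mci *)arg;

		if (host->cmd)
			host->cmd->error = -ETIMEDOUT;
		tasklet_schedule(&host->tasklet);
	}

	static void dw_mci_arm_cmd11_timer(struct dw_mci *host)
	{
		/* Databook: expect the voltage-switch interrupt within 2 ms. */
		mod_timer(&host->cmd11_timer, jiffies + msecs_to_jiffies(2) + 1);
	}

	/* at probe time:
	 *   setup_timer(&host->cmd11_timer, dw_mci_cmd11_timeout,
	 *               (unsigned long)host);
	 */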
-
- 27 March 2015, 2 commits
-
-
Committed by Andreas Fenkart
board-rx51 has no card detect pin in the mmc slot, but it can detect that the (cell-phone) cover has been removed and the card is accessible. The semantics of cover detect and card detect differ: the gpio on the slot informs you after the card has been removed, whereas cover removal does not necessarily mean that the card has been removed. This means different code paths are necessary. To complete this, we also want different fields in the platform data for cover and card detect. This separation is not pushed all the way down into struct omap2_hsmmc_info, which is used to initialize the platform data. If we did that, we would have to go over all board files and set the new gpio_cod pin to -EINVAL. If we forget one board, or some out-of-tree architecture forgets it, the default '0' is used, which is a valid pin number.
Signed-off-by: Andreas Fenkart <afenkart@gmail.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
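A small sketch of the platform-data split described above. The gpio_cod name comes from the commit text; the struct name and the card-detect field are assumptions for illustration:

	/* Platform data carries a separate cover-detect gpio next to card
	 * detect, so the two events can take different code paths. */
	struct omap_hsmmc_platform_data {
		/* ... */
		int gpio_cd;	/* card detect: card physically removed */
		int gpio_cod;	/* cover detect: card may still be present */
		/* ... */
	};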
-
Committed by NeilBrown
These callbacks have been marked as deprecated for some time. The last user (omap_hsmmc) has moved away from using them, which enables us to remove them completely.
Signed-off-by: NeilBrown <neil@brown.name>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
- 23 March 2015, 3 commits
-
-
Committed by Ulf Hansson
Move the card_int_gpio member into struct spear_sdhci. In this way we eliminate the last user of struct sdhci_plat_data, which enables us to remove the exported header for sdhci-spear.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Ulf Hansson
Since there are no users of struct sdhci_host but the sdhci host drivers themselves, let's move its definition into the local sdhci header. The exported sdhci header then becomes empty, so let's remove it.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
-
Committed by Scott Branden
Add a quirk to handle broken auto-CMD23. Some controllers do not respond after the first auto-CMD23 is issued. This allows CMD23 to still work (mandatory for the faster UHS-I mode) rather than disabling CMD23 entirely via SDHCI_QUIRK2_HOST_NO_CMD23.
Signed-off-by: Corneliu Doban <cdoban@broadcom.com>
Signed-off-by: Scott Branden <sbranden@broadcom.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
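A hedged sketch of how such a quirk is typically consulted when building the transfer mode; the quirk name shown (SDHCI_QUIRK2_ACMD23_BROKEN) is an assumption based on the description, and the surrounding logic is simplified:

	static u16 sdhci_auto_cmd23_mode(struct sdhci_host *host, u16 mode)
	{
		/* Use auto-CMD23 only when the host supports it and the
		 * controller is not flagged as broken; plain CMD23 keeps
		 * working either way. */
		if ((host->flags & SDHCI_AUTO_CMD23) &&
		    !(host->quirks2 & SDHCI_QUIRK2_ACMD23_BROKEN))
			mode |= SDHCI_TRNS_AUTO_CMD23;

		return mode;
	}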
-
- 18 March 2015, 1 commit
-
-
Committed by Nicolas Dichtel
The argument 'flags' was missing in ndo_bridge_setlink(). ndo_bridge_dellink() was missing.
Fixes: 407af329 ("bridge: Add netlink interface to configure vlans on bridge ports")
Fixes: add511b3 ("bridge: add flags argument to ndo_bridge_setlink and ndo_bridge_dellink")
CC: Vlad Yasevich <vyasevic@redhat.com>
CC: Roopa Prabhu <roopa@cumulusnetworks.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
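For reference, a sketch of the two bridge ops with the flags argument in place, as the fixes above describe; the signatures reflect my understanding of that kernel generation and should be treated as illustrative:

	/* Sketch of the relevant net_device_ops members; flags carries values
	 * such as BRIDGE_FLAGS_SELF or BRIDGE_FLAGS_MASTER. */
	struct net_device_ops {
		/* ... */
		int	(*ndo_bridge_setlink)(struct net_device *dev,
					      struct nlmsghdr *nlh, u16 flags);
		int	(*ndo_bridge_dellink)(struct net_device *dev,
					      struct nlmsghdr *nlh, u16 flags);
		/* ... */
	};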
-
- 17 March 2015, 1 commit
-
-
Committed by Petr Mladek
There is a notifier that handles live patches for coming and going modules. It takes the klp_mutex lock to avoid races with coming and going patches, but it does not keep the lock all the time. Therefore the following races are possible:

1. The notifier is called sometime in STATE_MODULE_COMING. The module is visible by find_module() in this state all the time. It means that a new patch can be registered and enabled even before the notifier is called. It might create a wrong order of stacked patches; see below for an example.

2. A new patch could still see the module in the GOING state even after the notifier has been called. It will try to initialize the related object structures, but the module could disappear at any time. This would leave a mess in the structures. It might even cause an invalid memory access.

This patch solves the problem by adding a boolean variable into struct module. The value is true after the coming handler and before the going handler is called. New patches need to be applied when the value is true, and they need to ignore the module when the value is false.

Note that we need to know the state of all modules on the system. The races are related to new patches; therefore we do not know which modules will get patched.

Also note that we could not simply ignore going modules. The code from the module could be called even in the GOING state until mod->exit() finishes. If we start supporting patches with semantic changes between function calls, we need to apply new patches to any still usable code. See below for an example.

Finally, note that the patch solves only the situation when a new patch is registered. There are no such problems when a patch is being removed. It does not matter who disables the patch first, whether the normal disable_patch() or the module notifier. There is nothing to do once the patch is disabled.

Alternative solutions:
======================

+ reject new patches when a patched module is coming or going; this is ugly

+ wait with adding a new patch until the module leaves the COMING and GOING states; this might be dangerous and complicated; we would need to release kgr_lock in the middle of the patch registration to avoid a deadlock with the coming and going handlers; also we might need a waitqueue for each module, which seems to be an even bigger overhead than the boolean

+ stop modules from entering COMING and GOING states; wait until modules leave these states when they are already there; looks complicated; we would need to ignore the module that asked to stop the others to avoid a deadlock; also it is unclear what to do when two modules ask to stop others and both are in the COMING state (the situation when two new patches are applied)

+ always register/enable new patches and fix up the potential mess (registered patches order) in klp_module_init(); this is nasty and prone to regressions in future development

+ add another MODULE_STATE where the kallsyms are visible but the module is not used yet; this looks too complex; the module states are checked in "many" locations

Example of patch stacking breakage:
===================================

The notifier could _not_ _simply_ ignore already initialized module objects. For example, let's have three patches (P1, P2, P3) for functions a() and b(), where a() is from vmcore and b() is from a module M. Something like:

		a()	b()
	P1	a1()	b1()
	P2	a2()	b2()
	P3	a3()	b3()

If you load the module M after all patches are registered and enabled, the ftrace ops for functions a() and b() have the functions listed in this order:

	ops_a->func_stack -> list(a3,a2,a1)
	ops_b->func_stack -> list(b3,b2,b1)

so the pointer to b3() is the first and will be used.

Then you might have the following scenario. Let's start with the state when patches P1 and P2 are registered and enabled but the module M is not loaded. Then the ftrace ops for b() does not exist. Then we get into the following race:

	CPU0					CPU1

	load_module(M)
	  complete_formation()
	  mod->state = MODULE_STATE_COMING;
	  mutex_unlock(&module_mutex);

					klp_register_patch(P3);
					klp_enable_patch(P3);

					# STATE 1

	  klp_module_notify(M)
	    klp_module_notify_coming(P1);
	    klp_module_notify_coming(P2);
	    klp_module_notify_coming(P3);

					# STATE 2

The ftrace ops for a() and b() then look like:

	STATE1:
	  ops_a->func_stack -> list(a3,a2,a1);
	  ops_b->func_stack -> list(b3);

	STATE2:
	  ops_a->func_stack -> list(a3,a2,a1);
	  ops_b->func_stack -> list(b2,b1,b3);

Therefore, b2() is used for the module but a3() is used for vmcore, because they were the last added.

Example of the race with going modules:
=======================================

	CPU0					CPU1

	delete_module()  #SYSCALL
	  try_stop_module()
	  mod->state = MODULE_STATE_GOING;
	  mutex_unlock(&module_mutex);

					klp_register_patch()
					klp_enable_patch()

					#save place to switch universe

					b()	# from module that is going
					a()	# from core (patched)

	  mod->exit();

Note that the function b() can be called until we call mod->exit(). If we do not apply a patch against b() because it is in MODULE_STATE_GOING, it will call patched a() with modified semantics and things might go wrong.

[jpoimboe@redhat.com: use one boolean instead of two]
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
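A hedged sketch of the per-module flag approach the commit describes: the flag is set by the COMING notifier, cleared by the GOING notifier, and new patches only attach to modules that currently have it set. The field name used here (klp_alive) and the helper are assumptions for illustration:

	/* Sketch only: struct module gains a boolean that is true between the
	 * live-patch COMING and GOING notifier calls. */
	static struct module *klp_find_patchable_module(const char *name)
	{
		struct module *mod;

		mutex_lock(&module_mutex);
		mod = find_module(name);
		/* Skip modules that have not passed the coming notifier yet,
		 * or that are already on their way out. */
		if (mod && !mod->klp_alive)
			mod = NULL;
		mutex_unlock(&module_mutex);

		return mod;
	}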
-
- 13 March 2015, 3 commits
-
-
Committed by Guenter Roeck
sparc:allmodconfig fails to build with:

	drivers/built-in.o: In function `platform_bus_init':
	(.init.text+0x3684): undefined reference to `of_platform_register_reconfig_notifier'

of_platform_register_reconfig_notifier is only declared if both OF_ADDRESS and OF_DYNAMIC are configured, yet the include file only declares a dummy function if OF_DYNAMIC is not configured. The sparc architecture does not configure OF_ADDRESS but does configure OF_DYNAMIC, causing the above error.
Fixes: 801d728c ("of/reconfig: Add OF_DYNAMIC notifier for platform_bus_type")
Cc: Pantelis Antoniou <pantelis.antoniou@konsulko.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Rob Herring <robh@kernel.org>
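A hedged sketch of the header fix the report implies: gate the real declaration on both options and fall back to a static inline stub otherwise. The exact prototype is illustrative rather than copied from the tree:

	/* In the exported of_platform header (sketch): the stub must cover
	 * every configuration in which the real symbol is not built. */
	#if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_OF_DYNAMIC)
	extern void of_platform_register_reconfig_notifier(void);
	#else
	static inline void of_platform_register_reconfig_notifier(void) { }
	#endif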
-
Committed by Andrey Ryabinin
include/linux/moduleloader.h is a more suitable place for this macro. Also change the alignment to PAGE_SIZE for CONFIG_KASAN=n, as such alignment is already assumed in several places.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
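The macro in question is presumably the module mapping alignment (MODULE_ALIGN); a sketch of the two cases, with the KASAN-enabled value being my assumption about how the shadow scale factors in:

	/* Sketch: with KASAN the module mapping must be aligned so that its
	 * shadow region starts on a page boundary; without KASAN a plain page
	 * alignment is what callers already assume. */
	#ifdef CONFIG_KASAN
	#define MODULE_ALIGN	(PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
	#else
	#define MODULE_ALIGN	PAGE_SIZE
	#endif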
-
Committed by Andrey Ryabinin
The current approach to handling shadow memory for modules is broken. Shadow memory may be freed only after the memory it shadows is no longer used. vfree() called from interrupt context could use the memory it is freeing to store a 'struct llist_node' in it:

	void vfree(const void *addr)
	{
	...
		if (unlikely(in_interrupt())) {
			struct vfree_deferred *p = this_cpu_ptr(&vfree_deferred);
			if (llist_add((struct llist_node *)addr, &p->list))
				schedule_work(&p->wq);

Later this list node is used in free_work(), which actually frees the memory. Currently module_memfree() called in interrupt context will free the shadow before freeing the module's memory, which could provoke a kernel crash. So shadow memory should be freed after the module's memory. However, such a deallocation order could race with kasan_module_alloc() in module_alloc(). Instead, free the shadow right before releasing the vm area. At this point the vfree()'d memory is not used anymore and yet not available for other allocations. A new VM_KASAN flag is used to indicate that a vm area has dynamically allocated shadow memory, so kasan frees the shadow only if it was previously allocated.
Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
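A hedged sketch of the ordering described above: the shadow is released inside the vm-area teardown path, guarded by the new VM_KASAN flag, so it happens after the module memory can no longer be used but before the area is handed back to the allocator. The wrapper function name is illustrative:

	/* Sketch: late in the vmalloc teardown path, just before the vm area
	 * itself is released back to the allocator. */
	static void sketch_vm_area_teardown(struct vm_struct *area)
	{
		/* Free dynamically allocated shadow only if this area has one. */
		if (area->flags & VM_KASAN)
			kasan_free_shadow(area);

		/* ... debug checks, unmapping and release of the area follow ... */
	}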
-
- 12 March 2015, 2 commits
-
-
Committed by Eric Dumazet
John reported that my previous commit added a regression on his router. This is because sender_cpu & napi_id share a common location, so get_xps_queue() can see garbage and perform an out-of-bounds access. We need to make sure sender_cpu is cleared before doing the transmit, otherwise any busy-poll-enabled NIC (skb_mark_napi_id()) can trigger this bug.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: John <jw@nuclearfallout.net>
Bisected-by: John <jw@nuclearfallout.net>
Fixes: 2bd82484 ("xps: fix xps for stacked devices")
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Michael Turquette
Some drivers compare struct clk pointers as a means of knowing whether the two pointers reference the same clock hardware. This behavior is dubious (drivers must not dereference struct clk), but it did not cause any regressions until the per-user struct clk patch was merged. Now the test for matching clks will always fail with per-user struct clks. clk_is_match() is introduced to fix the regression and prevent drivers from comparing the pointers manually.
Fixes: 035a61c3 ("clk: Make clk API return per-user struct clk instances")
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Signed-off-by: Michael Turquette <mturquette@linaro.org>
[arnd@arndb.de: Fix COMMON_CLK=N && HAS_CLK=Y config]
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[sboyd@codeaurora.org: const arguments to clk_is_match() and remove unnecessary ternary operation]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
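A usage sketch: instead of comparing the opaque struct clk handles directly, a driver asks the clk core whether two handles refer to the same hardware clock. The wrapper function is illustrative:

	static bool same_clock(struct clk *a, struct clk *b)
	{
		/* Comparing the opaque handles no longer works with per-user
		 * struct clk instances; ask the core instead. */
		return clk_is_match(a, b);
	}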
-
- 08 March 2015, 2 commits
-
-
Committed by Yun Wu
Define macros for the GITS_CTLR fields to avoid using magic numbers.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yun Wu <wuyun.wu@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/1425659870-11832-11-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
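A sketch of the kind of field macros meant here; the bit positions (enable in bit 0, quiescent in bit 31) follow the GICv3 ITS register layout as I understand it and are not quoted from the header:

	#define GITS_CTLR_ENABLE	(1U << 0)	/* ITS enabled */
	#define GITS_CTLR_QUIESCENT	(1U << 31)	/* ITS is quiescent */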
-
Committed by Marc Zyngier
The ITS table allocator only allocates a single page per table. This works fine for most things, but leads to a silent lack of interrupt delivery if we end up with a device that has an ID outside the range covered by a single page of memory. Even worse, the behaviour changes depending on the page size, which is not a very good experience. A solution is actually to allocate memory for the full range of IDs that the ITS supports. A massive waste memory-wise, but at least a safe bet. Tested on a Phytium SoC.
Tested-by: Chen Baozi <chenbaozi@kylinos.com.cn>
Acked-by: Chen Baozi <chenbaozi@kylinos.com.cn>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Link: https://lkml.kernel.org/r/1425659870-11832-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
- 07 March 2015, 2 commits
-
-
Committed by Peter Hurley
ioctl(TIOCGSERIAL|TIOCSSERIAL) reports and can change port->iotype. UART drivers use the UPIO_* definitions, but the uapi header defines parallel values, and userspace uses these parallel values for ioctls; thus the userspace values are definitive. Define the UPIO_* iotypes in terms of the uapi defines, SERIAL_IO_*; extend the uapi defines to include all values in use by the serial core.
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
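A sketch of the pattern described: the kernel-internal iotype constants become aliases of the uapi values so the two sets can never drift apart. The first three values match the long-standing userspace ABI; anything beyond that is illustrative:

	/* uapi <linux/serial.h>: userspace-visible values are definitive. */
	#define SERIAL_IO_PORT	0
	#define SERIAL_IO_HUB6	1
	#define SERIAL_IO_MEM	2

	/* kernel <linux/serial_core.h>: define UPIO_* in terms of the uapi
	 * values instead of an independent enumeration. */
	#define UPIO_PORT	(SERIAL_IO_PORT)	/* 8b I/O port access */
	#define UPIO_HUB6	(SERIAL_IO_HUB6)	/* Hub6 ISA card */
	#define UPIO_MEM	(SERIAL_IO_MEM)		/* 8b MMIO access */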
-
Committed by Peter Hurley
Commit 3ffb1a81 ("serial: core: Add big-endian iotype") re-numbered userspace-dependent values; ioctl(TIOCSSERIAL) can assign the port iotype (which is expected to match the selected i/o accessors), so iotype values must not be changed.
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: <stable@vger.kernel.org> # 3.19+
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Reviewed-by: Kevin Cernekee <cernekee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 06 March 2015, 1 commit
-
-
Committed by Rafael J. Wysocki
Commit 38106313 (PM / sleep: Re-implement suspend-to-idle handling) overlooked the fact that entering some sufficiently deep idle states may cause CPUs' local timers to stop, and in those cases it is necessary to switch over to a broadcast timer prior to entering the idle state. If the cpuidle driver in use does not provide the new ->enter_freeze callback for any of the idle states, that problem affects suspend-to-idle too, but it is not taken into account after the changes made by commit 38106313. Fix that by changing the definition of cpuidle_enter_freeze() and re-arranging the code in cpuidle_idle_call(), so that the former does not call cpuidle_enter() any more and the fallback case is handled by cpuidle_idle_call() directly.
Fixes: 38106313 (PM / sleep: Re-implement suspend-to-idle handling)
Reported-and-tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-
- 05 March 2015, 3 commits
-
-
Committed by Tejun Heo
cancel[_delayed]_work_sync() are implemented using __cancel_work_timer(), which grabs the PENDING bit using try_to_grab_pending() and then flushes the work item with PENDING set to prevent the on-going execution of the work item from requeueing itself.

try_to_grab_pending() can always grab the PENDING bit without blocking, except when someone else is doing the above flushing during cancelation. In that case, try_to_grab_pending() returns -ENOENT, and __cancel_work_timer() currently invokes flush_work(). The assumption is that the completion of the work item is what the other canceling task would be waiting for too, and thus waiting for the same condition and retrying should allow forward progress without excessive busy looping.

Unfortunately, this doesn't work if preemption is disabled or the latter task has real-time priority. Let's say task A just got woken up from flush_work() by the completion of the target work item. If, before task A starts executing, task B gets scheduled and invokes __cancel_work_timer() on the same work item, its try_to_grab_pending() will return -ENOENT as the work item is still being canceled by task A, and flush_work() will also immediately return false as the work item is no longer executing. This puts task B in a busy loop, possibly preventing task A from executing and clearing the canceling state on the work item, leading to a hang.

	task A			task B			worker

						executing work
	__cancel_work_timer()
	  try_to_grab_pending()
	  set work CANCELING
	  flush_work()
	    block for work completion
						completion, wakes up A
				__cancel_work_timer()
				while (forever) {
				  try_to_grab_pending()
				    -ENOENT as work is being canceled
				  flush_work()
				    false as work is no longer executing
				}

This patch removes the possible hang by updating __cancel_work_timer() to explicitly wait for clearing of CANCELING rather than invoking flush_work() after try_to_grab_pending() fails with -ENOENT.

Link: http://lkml.kernel.org/g/20150206171156.GA8942@axis.com

v3: bit_waitqueue() can't be used for work items defined in the vmalloc area. Switched to a custom wake function which matches the target work item, and to exclusive wait and wakeup.

v2: v1 used wake_up() on bit_waitqueue(), which leads to a NULL deref if the target bit waitqueue has wait_bit_queue's on it. Use DEFINE_WAIT_BIT() and __wake_up_bit() instead. Reported by Tomeu Vizoso.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Rabin Vincent <rabin.vincent@axis.com>
Cc: Tomeu Vizoso <tomeu.vizoso@gmail.com>
Cc: stable@vger.kernel.org
Tested-by: Jesper Nilsson <jesper.nilsson@axis.com>
Tested-by: Rabin Vincent <rabin.vincent@axis.com>
-
Committed by Linus Walleij
This reverts commit 5a7d2efd. As per discussion on the mailing list, this is not the right thing to do. NULL cookies are valid in the stubs.
Reported-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
-
Committed by Rafael J. Wysocki
It is currently required that all users of NO_SUSPEND interrupt lines pass the IRQF_NO_SUSPEND flag when requesting the IRQ, or the WARN_ON_ONCE() in irq_pm_install_action() will trigger. That is done to warn about situations in which unprepared interrupt handlers may be run unnecessarily for suspended devices and may attempt to access those devices by mistake. However, it may cause drivers that have no technical reason for using IRQF_NO_SUSPEND to set that flag just because they happen to share the interrupt line with something like a timer.

Moreover, the generic handling of wakeup interrupts introduced by commit 9ce7a258 (genirq: Simplify wakeup mechanism) only works for IRQs without any NO_SUSPEND users, so the drivers of wakeup devices needing to use shared NO_SUSPEND interrupt lines for signaling system wakeup generally have to detect wakeup in their interrupt handlers. Thus if they happen to share an interrupt line with a NO_SUSPEND user, they also need to request that their interrupt handlers be run after suspend_device_irqs().

In both cases the reason for using IRQF_NO_SUSPEND is not that the driver in question has a genuine need to run its interrupt handler after suspend_device_irqs(), but that it happens to share the line with some other NO_SUSPEND user. Otherwise, the driver would do fine without IRQF_NO_SUSPEND.

To make it possible to specify that condition explicitly, introduce a new IRQ action handler flag for shared IRQs, IRQF_COND_SUSPEND, that, when set, will indicate to the IRQ core that the interrupt user is generally fine with suspending the IRQ, but it also can tolerate handler invocations after suspend_device_irqs() and, in particular, it is capable of detecting system wakeup and triggering it as appropriate from its interrupt handler. That will allow us to work around a problem with a shared timer interrupt line on at91 platforms.

Link: http://marc.info/?l=linux-kernel&m=142252777602084&w=2
Link: http://marc.info/?t=142252775300011&r=1&w=2
Link: https://lkml.org/lkml/2014/12/15/552
Reported-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
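A hedged usage sketch: a driver sharing a line with a NO_SUSPEND user requests its handler with IRQF_SHARED | IRQF_COND_SUSPEND and takes care of wakeup detection itself. The foo_* device structure and helpers are hypothetical:

	static irqreturn_t foo_irq_handler(int irq, void *dev_id)
	{
		struct foo_device *foo = dev_id;

		/* May run after suspend_device_irqs(): detect a wakeup
		 * condition and signal it instead of touching a suspended
		 * device. */
		if (foo->suspended) {
			pm_wakeup_event(foo->dev, 0);
			return IRQ_HANDLED;
		}

		return foo_handle_normal_irq(foo);
	}

	static int foo_request_irq(struct foo_device *foo, int irq)
	{
		/* Shared with a NO_SUSPEND user (e.g. a timer), but this
		 * driver itself can tolerate the line being suspended. */
		return request_irq(irq, foo_irq_handler,
				   IRQF_SHARED | IRQF_COND_SUSPEND, "foo", foo);
	}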
-
- 04 March 2015, 1 commit
-
-
Committed by Trond Myklebust
When invalidating the page cache for a regular file, we want to first sync all dirty data to disk and then call invalidate_inode_pages2(). The latter relies on nfs_launder_page() and nfs_release_page() to deal respectively with dirty pages and unstable written pages. When commit 95905446 ("NFS: avoid deadlocks with loop-back mounted NFS filesystems.") changed the behaviour of nfs_release_page(), it made it possible for invalidate_inode_pages2() to fail with an EBUSY. Unfortunately, that error is then propagated back to read(). Let's therefore work around the problem for now by protecting the call to sync the data and invalidate_inode_pages2() so that they are atomic w.r.t. the addition of new writes. Later on, we can revisit whether or not we still need nfs_launder_page() and nfs_release_page().
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
-
- 03 March 2015, 2 commits
-
-
Committed by Marcin Bis
alway -> always
Signed-off-by: Marcin Bis <marcin@bis.org.pl>
Signed-off-by: Mark Brown <broonie@kernel.org>
-
Committed by Or Gerlitz
The bit mask for the currently supported driver features (MLX4_UPDATE_QP_SUPPORTED_ATTRS) of the update-qp command was defined twice (using an enum value and a pre-processor define directive) and was wrong. The return value of the call to mlx4_update_qp() from within the SRIOV resource-tracker was wrongly voided down. Fix both issues.
issue: none
Fixes: 09e05c3f ('net/mlx4: Set vlan stripping policy by the right command')
Fixes: ce8d9e0d ('net/mlx4_core: Add UPDATE_QP SRIOV wrapper support')
Signed-off-by: Matan Barak <matanb@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 02 March 2015, 3 commits
-
-
Committed by Trond Myklebust
Ensure that other operations that race with our write RPC calls cannot revert the file size updates that were made on the server.
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by Trond Myklebust
Ensure that other operations which raced with our setattr RPC call cannot revert the file attribute changes that were made on the server. To do so, we artificially bump the attribute generation counter on the inode so that all calls to nfs_fattr_init() that precede ours will be dropped. The motivation for the patch came from Chuck Lever's reports of readaheads racing with truncate operations and causing the file size to be reverted.
Reported-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>
-
Committed by Trond Myklebust
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>
-
- 28 February 2015, 2 commits
-
-
Committed by Daniel Borkmann
Currently, all real users of rhashtable default their grow and shrink decision functions to rht_grow_above_75() and rht_shrink_below_30(), so there's no need to have this explicitly selectable. It can/should be generic and private inside rhashtable until a real use case pops up. Since we can make this private, we save ourselves this additional indirection layer and can improve insertion/deletion time as well.
Reference: http://patchwork.ozlabs.org/patch/443040/
Suggested-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
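For orientation, a hedged sketch of what 75%/30% occupancy checks of this kind typically look like; the exact field names and rounding in the real rhashtable code may differ:

	/* Grow when the table is more than 75% full... */
	static bool rht_grow_above_75(const struct rhashtable *ht,
				      const struct bucket_table *tbl)
	{
		return atomic_read(&ht->nelems) > (tbl->size / 4 * 3);
	}

	/* ...and shrink when occupancy drops below 30%. */
	static bool rht_shrink_below_30(const struct rhashtable *ht,
					const struct bucket_table *tbl)
	{
		return atomic_read(&ht->nelems) < (tbl->size * 3 / 10);
	}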
-
Committed by Mikulas Patocka
The "dm snapshot: suspend origin when doing exception handover" commit fixed an exception store handover bug associated with pending exceptions to the "snapshot-origin" target. However, a similar problem exists in snapshot merging. When snapshot merging is in progress, we use the target "snapshot-merge" instead of "snapshot-origin". Consequently, during exception store handover, we must find the snapshot-merge target and suspend its associated mapped_device. To avoid lockdep warnings, the target must be suspended and resumed without holding _origins_lock. Introduce a dm_hold() function that grabs a reference on a mapped_device, but unlike dm_get(), it doesn't crash if the device has the DMF_FREEING flag set; it returns an error in this case. In snapshot_resume() we grab the reference to the origin device using dm_hold() while holding _origins_lock (_origins_lock guarantees that the device won't disappear). Then we release _origins_lock, suspend the device, and grab _origins_lock again.

NOTE to stable@ people: when backporting to kernels 3.18 and older, use dm_internal_suspend and dm_internal_resume instead of dm_internal_suspend_fast and dm_internal_resume_fast.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
-
- 27 February 2015, 1 commit
-
-
Committed by Johan Hovold
This reverts commit 5083fd7b. A bulk-out size smaller than the end-point size is indeed valid. The offending commit broke the usb-debug driver for EHCI debug devices, which use 8-byte buffers.
Fixes: 5083fd7b ("USB: serial: make bulk_out_size a lower limit")
Reported-by: "Li, Elvin" <elvin.li@intel.com>
Cc: stable <stable@vger.kernel.org> # v3.15
Signed-off-by: Johan Hovold <johan@kernel.org>
-
- 26 February 2015, 1 commit
-
-
Committed by Mark Rutland
The IRQF_NO_SUSPEND flag is intended to be used for interrupts required to be enabled during the suspend-resume cycle. This mostly consists of IPIs and timer interrupts, potentially including chained irqchip interrupts if these are necessary to handle timers or IPIs. If an interrupt does not fall into one of the aforementioned categories, requesting it with IRQF_NO_SUSPEND is likely incorrect.

Using IRQF_NO_SUSPEND does not guarantee that the interrupt can wake the system from a suspended state. For an interrupt to be able to trigger a wakeup, it may be necessary to program various components of the system. In these cases it is necessary to use {enable,disable}_irq_wake.

Unfortunately, several drivers assume that IRQF_NO_SUSPEND ensures that an IRQ can wake up the system, and the documentation can be read ambiguously w.r.t. this property. This patch updates the documentation regarding IRQF_NO_SUSPEND to make this caveat explicit, hopefully making future misuse rarer. Cleanup of existing misuse will occur as part of later patch series.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
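A sketch of the distinction: a driver that needs its interrupt to wake the system arms it with enable_irq_wake() across suspend rather than requesting it with IRQF_NO_SUSPEND. The foo_* device structure is hypothetical; the wakeup calls are the standard ones:

	/* Wakeup capability is configured per suspend cycle, not by
	 * requesting the IRQ with IRQF_NO_SUSPEND. */
	static int foo_suspend(struct device *dev)
	{
		struct foo_device *foo = dev_get_drvdata(dev);

		if (device_may_wakeup(dev))
			enable_irq_wake(foo->irq);
		return 0;
	}

	static int foo_resume(struct device *dev)
	{
		struct foo_device *foo = dev_get_drvdata(dev);

		if (device_may_wakeup(dev))
			disable_irq_wake(foo->irq);
		return 0;
	}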
-
- 25 February 2015, 1 commit
-
-
Committed by Nishanth Menon
When CONFIG_THERMAL is not enabled, it is better to introduce equivalent dummy functions in the exported header than to introduce #ifdeffery in the drivers using the function. This will prevent issues such as the one reported in: http://www.spinics.net/lists/linux-next/msg31573.html While at it, switch over to IS_ENABLED for the thermal macros, to allow the thermal framework to be built as a module and the relevant APIs to still be usable by drivers as a result.
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
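A sketch of the stub pattern being described, using one representative thermal API; the exact prototype and the set of functions actually covered are assumptions for illustration:

	/* Exported header: real declaration when the framework is built in or
	 * as a module, otherwise an inline stub so callers need no #ifdefs. */
	#if IS_ENABLED(CONFIG_THERMAL)
	struct thermal_zone_device *
	thermal_zone_device_register(const char *type, int trips, int mask,
				     void *devdata,
				     struct thermal_zone_device_ops *ops,
				     const struct thermal_zone_params *tzp,
				     int passive_delay, int polling_delay);
	#else
	static inline struct thermal_zone_device *
	thermal_zone_device_register(const char *type, int trips, int mask,
				     void *devdata,
				     struct thermal_zone_device_ops *ops,
				     const struct thermal_zone_params *tzp,
				     int passive_delay, int polling_delay)
	{
		return ERR_PTR(-ENODEV);
	}
	#endif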
-
- 23 February 2015, 4 commits
-
-
Committed by David Howells
Split DCACHE_FILE_TYPE into DCACHE_REGULAR_TYPE (dentries representing regular files) and DCACHE_SPECIAL_TYPE (representing blockdev, chardev, FIFO and socket files). d_is_reg() and d_is_special() are added to detect these subtypes, and d_is_file() is left as the union of the two. This allows a number of places that use S_ISREG(dentry->d_inode->i_mode) to use d_is_reg(dentry) instead.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
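A usage sketch of the substitution described above; the wrapper function is illustrative, the point being that the check reads the cached dentry type instead of dereferencing the inode:

	static int foo_check_target(struct dentry *dentry)
	{
		/* Before: S_ISREG(dentry->d_inode->i_mode)
		 * After:  consult the dentry type flags directly. */
		if (!d_is_reg(dentry))
			return -EINVAL;

		return 0;
	}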
-
Committed by David Howells
Add a DCACHE_FALLTHRU flag to indicate that, in a layered filesystem, this is a virtual dentry that covers another one in a lower layer that should be used instead. This may be recorded on medium if directory integration is stored there. The flag can be set with d_set_fallthru() and tested with d_is_fallthru().
Original-author: Valerie Aurora <vaurora@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by David Howells
Add DCACHE_WHITEOUT_TYPE and provide a d_is_whiteout() accessor function. A d_is_miss() accessor is also added for ordinary cache misses, and d_is_negative() is modified to indicate either an ordinary miss or an enforced miss (whiteout).
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
Committed by David Howells
Introduce some functions for getting the inode (and also the dentry) in an environment where layered/unioned filesystems are in operation. The problem is that we have places where we need *both* the union dentry and the lower source or workspace inode or dentry available, but we can only have a handle on one of them. Therefore we need to derive the handle to the other from that. The idea is to introduce an extra field in struct dentry that allows the union dentry to refer to and pin the lower dentry.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
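A minimal sketch of the simplest of these helpers, for the case where no lower-layer dentry is attached and the "backing" inode is just the dentry's own inode; the helper name follows the upstream naming as I recall it and should be treated as illustrative:

	/* Sketch: return the inode that actually backs this dentry. */
	static inline struct inode *d_backing_inode(const struct dentry *upper)
	{
		return upper->d_inode;
	}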
-
- 22 February 2015, 2 commits
-
-
Committed by Eric Dumazet
struct bucket_table contains mostly-read fields: size, locks_mask, locks. Make sure these are not sharing a cache line with buckets[].
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
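A hedged sketch of the layout change: annotate the flexible bucket array so it starts on its own cache line, keeping the mostly-read fields away from the frequently-written buckets. The struct is abbreviated to the fields named above:

	struct bucket_table {
		unsigned int		size;		/* mostly read */
		unsigned int		locks_mask;	/* mostly read */
		spinlock_t		*locks;		/* mostly read */

		/* Push the hot, frequently written bucket array onto its own
		 * cache line(s). */
		struct rhash_head __rcu	*buckets[] ____cacheline_aligned_in_smp;
	};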
-
Committed by Linus Torvalds
The use of READ_ONCE() causes lots of warnings with the pending paravirt spinlock fixes, because those end up passing a member of a 'const' structure to READ_ONCE(). There should certainly be nothing wrong with using READ_ONCE() on a const source, but the helper function __read_once_size() would cause warnings because it would drop the 'const' qualifier, and also because the destination would be marked 'const' too due to the use of 'typeof'. Use a union of types in READ_ONCE() to avoid this issue. Also make sure to use parentheses around the macro arguments to avoid possible operator precedence issues.
Tested-by: Ingo Molnar <mingo@kernel.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
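A hedged sketch of the union trick: the value is copied into a non-const union member, so the 'const' qualifier of the source never propagates to the destination through typeof. This mirrors the described approach but is written from memory rather than quoted from the tree:

	#define READ_ONCE(x)						\
	({								\
		union { typeof(x) __val; char __c[1]; } __u;		\
		__read_once_size(&(x), __u.__c, sizeof(x));		\
		__u.__val;						\
	})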
-