- 09 January 2014, 6 commits
-
-
Committed by Andrew Bresticker
The AudioSS block on Exynos 5420 has an additional clock gate for the ADMA bus clock. Signed-off-by: Andrew Bresticker <abrestic@chromium.org> Acked-by: Mike Turquette <mturquette@linaro.org> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Tomasz Figa <t.figa@samsung.com>
-
Committed by Andrew Bresticker
There is no gate for the PCM clock input to the AudioSS block, so the parent of sclk_pcm is div_pcm0. Add a clock ID for it so that we can reference it in device trees. Signed-off-by: Andrew Bresticker <abrestic@chromium.org> Reviewed-by: Tomasz Figa <t.figa@samsung.com> Acked-by: Mike Turquette <mturquette@linaro.org> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Tomasz Figa <t.figa@samsung.com>
-
Committed by Andrzej Hajda
The patch adds a header file defining clock IDs. This allows macros to be used instead of magic numbers in DT bindings. Signed-off-by: Andrzej Hajda <a.hajda-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org> Signed-off-by: Kyungmin Park <kyungmin.park-Sze3O3UU22JBDgjK7y7TUQ@public.gmane.org> Acked-by: Mike Turquette <mturquette@linaro.org> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Tomasz Figa <t.figa@samsung.com>
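For illustration, a dt-bindings clock header of the kind added by this and the following similar commits is just a set of preprocessor constants; a minimal hypothetical sketch (the names and numbers below are invented, not the real Samsung IDs):

  #ifndef _DT_BINDINGS_CLOCK_EXAMPLE_SOC_H
  #define _DT_BINDINGS_CLOCK_EXAMPLE_SOC_H

  /* Hypothetical clock IDs -- illustrative only */
  #define CLK_EXAMPLE_UART0     2
  #define CLK_EXAMPLE_UART1     3
  #define CLK_EXAMPLE_SCLK_PCM  20

  #endif /* _DT_BINDINGS_CLOCK_EXAMPLE_SOC_H */

A device tree node can then write clocks = <&clock CLK_EXAMPLE_UART0>; instead of hard-coding the number 2, which is the point of these bindings headers.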
-
Committed by Andrzej Hajda
The patch adds a header file defining clock IDs. This allows macros to be used instead of magic numbers in DT bindings. Signed-off-by: Andrzej Hajda <a.hajda@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Mike Turquette <mturquette@linaro.org> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Tomasz Figa <t.figa@samsung.com>
-
Committed by Andrzej Hajda
The patch adds a header file defining clock IDs. This allows macros to be used instead of magic numbers in DT bindings. Signed-off-by: Andrzej Hajda <a.hajda@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Mike Turquette <mturquette@linaro.org> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Tomasz Figa <t.figa@samsung.com>
-
Committed by Andrzej Hajda
The patch adds a header file defining clock IDs. This allows macros to be used instead of magic numbers in DT bindings. Signed-off-by: Andrzej Hajda <a.hajda@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Mike Turquette <mturquette@linaro.org> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Tomasz Figa <t.figa@samsung.com>
-
- 28 December 2013, 1 commit
-
-
Committed by Mike Turquette
Populate ${DEBUGFS_MOUNT_POINT}/clk if CONFIG_DEBUG_FS is set. This eliminates the extra (annoying) step of enabling the config option manually. Signed-off-by: Mike Turquette <mturquette@linaro.org>
-
- 23 December 2013, 2 commits
-
-
Committed by Boris BREZILLON
This patch adds support for accuracy retrieval on fixed clocks. It also adds a new DT property called 'clock-accuracy' to define the clock accuracy. This can be useful for oscillator (RC, crystal, ...) definitions, which are always given an accuracy characteristic. Signed-off-by: Boris BREZILLON <b.brezillon@overkiz.com> Signed-off-by: Mike Turquette <mturquette@linaro.org>
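As a rough sketch of what registering such an oscillator could look like in C (assuming the fixed-rate clock code gained an accuracy-aware helper, here called clk_register_fixed_rate_with_accuracy(), and that accuracy is given in ppb; the 32 kHz crystal and the 50000 ppb figure are invented):

  #include <linux/clk-provider.h>
  #include <linux/err.h>
  #include <linux/printk.h>

  static void example_register_osc(void)
  {
          struct clk *clk;

          /* 32768 Hz crystal with +-50000 ppb (0.005%) accuracy -- made-up values */
          clk = clk_register_fixed_rate_with_accuracy(NULL, "osc32k", NULL,
                                                      CLK_IS_ROOT, 32768, 50000);
          if (IS_ERR(clk))
                  pr_err("failed to register osc32k\n");
  }

In device tree form, the same information would sit in the new clock-accuracy property next to clock-frequency on a fixed-clock node.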
-
Committed by Boris BREZILLON
The clock accuracy is expressed in ppb (parts per billion) and represents the possible clock drift. Say you have a clock (e.g. an oscillator) which provides a fixed clock of 20MHz with an accuracy of +- 20Hz. This accuracy expressed in ppb is 20Hz/20MHz = 1000 ppb (or 1 ppm). Clock users may need the clock accuracy information in order to choose the best clock (the one with the best accuracy) across several available clocks. This patch adds clk accuracy retrieval support for common clk framework by means of a new function called clk_get_accuracy. This function returns the given clock accuracy expressed in ppb. In order to get the clock accuracy, this implementation adds one callback called recalc_accuracy to the clk_ops structure. This callback is given the parent clock accuracy (if the clock is not a root clock) and should recalculate the given clock accuracy. This callback is optional and may be implemented if the clock is not a perfect clock (accuracy != 0 ppb). Signed-off-by: Boris BREZILLON <b.brezillon@overkiz.com> Signed-off-by: Mike Turquette <mturquette@linaro.org>
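A minimal consumer-side sketch of the API described above; the device pointer, the "baud" con_id, and the 100 ppm threshold are hypothetical:

  #include <linux/clk.h>
  #include <linux/device.h>
  #include <linux/err.h>

  static int example_check_accuracy(struct device *dev)
  {
          struct clk *clk = clk_get(dev, "baud");   /* "baud" is a made-up con_id */
          long acc;

          if (IS_ERR(clk))
                  return PTR_ERR(clk);

          acc = clk_get_accuracy(clk);              /* possible drift, in ppb */
          if (acc > 100000)                         /* worse than 100 ppm */
                  dev_warn(dev, "clock accuracy only %ld ppb\n", acc);

          clk_put(clk);
          return 0;
  }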
-
- 13 December 2013, 1 commit
-
-
Committed by Laurent Pinchart
The R-Car Gen2 SoCs (R8A7790 and R8A7791) have several clocks that are too custom to be supported in a generic driver. Those clocks can be divided into two categories:
  - Fixed-rate clocks with multiplier and divisor set according to boot mode configuration
  - Custom divider clocks with SoC-specific divider values
This driver supports both. Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com> Acked-by: Kumar Gala <galak@codeaurora.org> Signed-off-by: Mike Turquette <mturquette@linaro.org>
-
- 05 December 2013, 2 commits
-
-
Committed by Sylwester Nawrocki
clk_unregister() is currently not implemented, and it is required when a clock provider module needs to be unloaded. Normally the clock supplier module is prevented from being unloaded by taking a reference on the module in clk_get(). For cases when the clock supplier module deinitializes despite consumers of its clocks holding a reference on the module, e.g. when the driver is unbound through the "unbind" sysfs attribute, empty clock ops are added. These ops are assigned temporarily to struct clk and used until all consumers release the clock, to avoid invoking callbacks from the module which has just been removed. Signed-off-by: Jiada Wang <jiada_wang@mentor.com> Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
-
Committed by Sylwester Nawrocki
This patch adds common __clk_get(), __clk_put() clkdev helpers that replace their platform-specific counterparts when the common clock API is used. The owner module pointer field is added to struct clk so a reference to the clock supplier module can be taken by the clock consumers. The owner module is assigned while the clock is being registered, in functions _clk_register() and __clk_register(). Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 December 2013, 1 commit
-
-
Committed by Haojian Zhuang
Enable the common clock driver for the Hi3620 SoC. The clkgate-separated driver is used to support clock gates whose enable/disable/status registers are separated. Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
-
- 27 November 2013, 3 commits
-
-
Committed by Peter De Schrijver
Implement clock support for Tegra124. Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
-
Committed by Peter De Schrijver
The Tegra30 clock bindings lack a few IDs for the audio and clk_out muxes. Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
-
Committed by Thierry Reding
These clocks were named gr2d and gr3d on Tegra20 and Tegra30, so use the same names on Tegra114 for consistency. Signed-off-by: Thierry Reding <treding@nvidia.com> Acked-by: Stephen Warren <swarren@nvidia.com>
-
- 25 November 2013, 1 commit
-
-
Committed by Peter De Schrijver
Commit 992bb598 forgot to move dfll_soc and dfll_ref to include/dt-bindings/clock/tegra114-car.h. Add them again in this patch as TEGRA114_CLK_DFLL_SOC and TEGRA114_CLK_DFLL_REF. Signed-off-by: Peter De Schrijver <pdeschrijver@nvidia.com>
-
- 22 November 2013, 5 commits
-
-
Committed by Kirill A. Shutemov
I don't know what went wrong, a mis-merge or something, but ->pmd_huge_pte was placed in the wrong union within struct page. In the original patch [1] it's placed in the union with ->lru and ->slab, but in commit e009bb30 ("mm: implement split page table lock for PMD level") it's in the union with ->index and ->freelist. That union also seems unused for page table pages and safe to re-use, but it's not what I've tested. Let's move it to the original place. It fixes indentation at least. :) [1] https://lkml.org/lkml/2013/10/7/288 Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Andrea Arcangeli
Commit 7cb2ef56 ("mm: fix aio performance regression for database caused by THP") can cause dereference of a dangling pointer if split_huge_page runs during PageHuge() if there are updates to the tail_page->private field. Also it is repeating compound_head twice for hugetlbfs and it is running compound_head+compound_trans_head for THP when a single one is needed in both cases. The new code within the PageSlab() check doesn't need to verify that the THP page size is never bigger than the smallest hugetlbfs page size, to avoid memory corruption. A longstanding theoretical race condition was found while fixing the above (see the change right after the skip_unlock label, that is relevant for the compound_lock path too). By re-establishing the _mapcount tail refcounting for all compound pages, this also fixes the below problem:

  echo 0 >/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

  BUG: Bad page state in process bash  pfn:59a01
  page:ffffea000139b038 count:0 mapcount:10 mapping: (null) index:0x0
  page flags: 0x1c00000000008000(tail)
  Modules linked in:
  CPU: 6 PID: 2018 Comm: bash Not tainted 3.12.0+ #25
  Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
  Call Trace:
    dump_stack+0x55/0x76
    bad_page+0xd5/0x130
    free_pages_prepare+0x213/0x280
    __free_pages+0x36/0x80
    update_and_free_page+0xc1/0xd0
    free_pool_huge_page+0xc2/0xe0
    set_max_huge_pages.part.58+0x14c/0x220
    nr_hugepages_store_common.isra.60+0xd0/0xf0
    nr_hugepages_store+0x13/0x20
    kobj_attr_store+0xf/0x20
    sysfs_write_file+0x189/0x1e0
    vfs_write+0xc5/0x1f0
    SyS_write+0x55/0xb0
    system_call_fastpath+0x16/0x1b

Signed-off-by: Khalid Aziz <khalid.aziz@oracle.com> Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Tested-by: Khalid Aziz <khalid.aziz@oracle.com> Cc: Pravin Shelar <pshelar@nicira.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Ben Hutchings <bhutchings@solarflare.com> Cc: Christoph Lameter <cl@linux.com> Cc: Johannes Weiner <jweiner@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Rik van Riel <riel@redhat.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Minchan Kim <minchan@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Dave Hansen
Right now, the migration code in migrate_page_copy() uses copy_huge_page() for hugetlbfs and thp pages:

  if (PageHuge(page) || PageTransHuge(page))
          copy_huge_page(newpage, page);

So, yay for code reuse. But:

  void copy_huge_page(struct page *dst, struct page *src)
  {
          struct hstate *h = page_hstate(src);

and a non-hugetlbfs page has no page_hstate(). This works 99% of the time because page_hstate() determines the hstate from the page order alone. Since the page order of a THP page matches the default hugetlbfs page order, it works. But, if you change the default huge page size on the boot command-line (say default_hugepagesz=1G), then we might not even *have* a 2MB hstate so page_hstate() returns null and copy_huge_page() oopses pretty fast since copy_huge_page() dereferences the hstate:

  void copy_huge_page(struct page *dst, struct page *src)
  {
          struct hstate *h = page_hstate(src);

          if (unlikely(pages_per_huge_page(h) > MAX_ORDER_NR_PAGES)) {
                  ...

Mel noticed that the migration code is really the only user of these functions. This moves all the copy code over to migrate.c and makes copy_huge_page() work for THP by checking for it explicitly. I believe the bug was introduced in commit b32967ff ("mm: numa: Add THP migration for the NUMA working set scanning fault case"). [akpm@linux-foundation.org: fix coding-style and comment text, per Naoya Horiguchi] Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Acked-by: Mel Gorman <mgorman@suse.de> Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Hillf Danton <dhillf@gmail.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Tested-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Johannes Berg
Fix another really stupid bug - I introduced genl_set_err() precisely to be able to adjust the group and reject invalid ones, but then forgot to do so. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
Unfortunately, I introduced a tremendously stupid bug into genlmsg_multicast() when doing all those multicast group changes: it adjusts the group number, but then passes it to genlmsg_multicast_netns() which does that again. Somehow, my tests failed to catch this, so add a warning into genlmsg_multicast_netns() and remove the offending group ID adjustment. Also add a warning to the similar code in other functions so people who misuse them are more loudly warned. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 November 2013, 6 commits
-
-
Committed by Madalin Bucur
Add auto-MDI/MDI-X capability for forced (autonegotiation disabled) 10/100 Mbps speeds on Vitesse VSC82x4 PHYs. Export the previously static function genphy_setup_forced(), which is required by the new config_aneg handler in the Vitesse PHY module. Signed-off-by: Madalin Bucur <madalin.bucur@freescale.com> Signed-off-by: Shruti Kanetkar <Shruti@freescale.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Hannes Frederic Sowa
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must set msg_namelen to the proper size <= sizeof(struct sockaddr_storage) to return msg_name to the user. This prevents numerous uninitialized memory leaks we had in the recvmsg handlers and makes it harder for new code to accidentally leak uninitialized memory.

Optimize for the case recvfrom is called with NULL as address. We don't need to copy the address at all, so set it to NULL before invoking the recvmsg handler. We can do so, because all the recvmsg handlers must cope with the case a plain read() is called on them. read() also sets msg_name to NULL.

Also document these changes in include/linux/net.h as suggested by David Miller.

Changes since RFC: Set msg->msg_name = NULL if user specified a NULL in msg_name but had a non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't affect sendto as it would bail out earlier while trying to copy-in the address. It also more naturally reflects the logic by the callers of verify_iovec. With this change in place I could remove "if (!uaddr || msg_sys->msg_namelen == 0) msg->msg_name = NULL". This change does not alter the user visible error logic as we ignore msg_namelen as long as msg_name is NULL. Also remove two unnecessary curly brackets in ___sys_recvmsg and change comments to netdev style.

Cc: David Miller <davem@davemloft.net> Suggested-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
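A hedged sketch of what the new convention means for an individual recvmsg handler; the helper, address family, and field sources below are purely illustrative, not code from this patch:

  /* msg->msg_namelen now arrives as 0, and msg->msg_name may be NULL when
   * the caller did not ask for an address.  A handler fills the name and
   * sets msg_namelen only when it really returns an address. */
  #include <linux/socket.h>
  #include <linux/in.h>
  #include <linux/string.h>

  static void example_fill_msg_name(struct msghdr *msg, __be32 saddr, __be16 sport)
  {
          struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name;

          if (!sin)       /* plain read() or recvfrom(NULL): nothing to fill */
                  return;

          sin->sin_family = AF_INET;
          sin->sin_addr.s_addr = saddr;
          sin->sin_port = sport;
          memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
          msg->msg_namelen = sizeof(*sin);
  }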
-
Committed by Steven Rostedt
Doing an if statement to test some condition to know if we should trigger a tracepoint is pointless when tracing is disabled. This just adds overhead and wastes a branch prediction. This is why the TRACE_EVENT_CONDITION() was created. It places the check inside the jump label so that the branch does not happen unless tracing is enabled. That is, instead of doing:

  if (em)
          trace_btrfs_get_extent(root, em);

Which is basically this:

  if (em)
          if (static_key(trace_btrfs_get_extent)) {

Using a TRACE_EVENT_CONDITION() we can just do:

  trace_btrfs_get_extent(root, em);

And the condition trace event will do:

  if (static_key(trace_btrfs_get_extent)) {
          if (em) {
                  ...

The static key is a non-conditional jump (or nop) that is faster than having to check if em is NULL or not. Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Josef Bacik <jbacik@fusionio.com> Signed-off-by: Chris Mason <chris.mason@fusionio.com>
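For reference, a conditional trace event roughly follows the shape below; the event name and fields are hypothetical and only sketch the btrfs case, assuming struct extent_map's start/len fields:

  /* Hypothetical event definition: TP_CONDITION() places the "em" test
   * inside the static-key-guarded slow path, so a disabled tracepoint
   * still costs only the usual no-op. */
  TRACE_EVENT_CONDITION(example_get_extent,

          TP_PROTO(struct extent_map *em),

          TP_ARGS(em),

          TP_CONDITION(em),

          TP_STRUCT__entry(
                  __field(u64, start)
                  __field(u64, len)
          ),

          TP_fast_assign(
                  __entry->start = em->start;
                  __entry->len   = em->len;
          ),

          TP_printk("start=%llu len=%llu",
                    (unsigned long long)__entry->start,
                    (unsigned long long)__entry->len)
  );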
-
Committed by Linus Torvalds
This reverts commit ea1e7ed3. Al points out that while the commit *does* actually create a separate slab for the page->ptl allocation, that slab is never actually used, and the code continues to use kmalloc/kfree. Damien Wyart points out that the original patch did have the conversion to use kmem_cache_alloc/free, so it got lost somewhere on its way to me. Revert the half-arsed attempt that didn't do anything. If we really do want the special slab (remember: this is all relevant just for debug builds, so it's not necessarily all that critical) we might as well redo the patch fully. Reported-by: Al Viro <viro@zeniv.linux.org.uk> Acked-by: Andrew Morton <akpm@linux-foundation.org> Cc: Kirill A Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Hannes Reinecke
The supported ALUA states might be different for individual devices, so store it in a separate field. (nab: Remove unnecessary line continuation) Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Committed by Hannes Reinecke
Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
- 20 November 2013, 9 commits
-
-
Committed by Thomas Hellstrom
Addresses "[BUG] completely bonkers use of set_need_resched + VM_FAULT_NOPAGE". In the first occurence it was used to try to be nice while releasing the mmap_sem and retrying the fault to work around a locking inversion. The second occurence was never used. There has been some discussion whether we should change the locking order to mmap_sem -> bo_reserve. This patch doesn't address that issue, and leaves that locking order undefined. The solution that we release the mmap_sem if tryreserve fails and wait for the buffer to become unreserved is something we want in any case, and follows how the core vm system waits for pages to be come unlocked while releasing the mmap_sem. The code also outlines what needs to be changed if we want to establish the locking order as mmap_sem -> bo::reserve. One slight issue that remains with this code is that the fault handler might be prone to starvation if another thread countinously reserves the buffer. IMO that usage pattern is highly unlikely. Signed-off-by: NThomas Hellstrom <thellstrom@vmware.com>
-
Committed by Johannes Berg
Register generic netlink multicast groups as an array with the family and give them contiguous group IDs. Then instead of passing the global group ID to the various functions that send messages, pass the ID relative to the family - for most families that's just 0 because they only have one group. This avoids the list_head and ID in each group, adding a new field for the mcast group ID offset to the family. At the same time, this allows us to prevent abusing groups again like the quota and dropmon code did, since we can now check that a family only uses a group it owns. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
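A rough sketch of the resulting pattern for a single-group family; the family name, op, and group name are invented, and the registration helper name is assumed from this series rather than quoted from it:

  #include <net/genetlink.h>
  #include <linux/init.h>

  /* Illustrative only -- not a real family. */
  static const struct genl_multicast_group example_mcgrps[] = {
          { .name = "events" },
  };

  static int example_cmd_doit(struct sk_buff *skb, struct genl_info *info)
  {
          return 0;
  }

  static const struct genl_ops example_ops[] = {
          { .cmd = 1, .doit = example_cmd_doit },
  };

  static struct genl_family example_family = {
          .id      = GENL_ID_GENERATE,
          .name    = "EXAMPLE",
          .version = 1,
  };

  static int __init example_init(void)
  {
          /* Group IDs become contiguous with the family's; senders pass the
           * offset into example_mcgrps (0 here) rather than a global ID. */
          return genl_register_family_with_ops_groups(&example_family,
                                                      example_ops,
                                                      example_mcgrps);
  }

Multicasting would then go through the family, e.g. genlmsg_multicast(&example_family, skb, 0, 0, GFP_KERNEL), with the final 0 meaning the first (and only) group the family owns.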
-
Committed by Johannes Berg
This doesn't really change anything, but prepares for the next patch that will change the APIs to pass the group ID within the family, rather than the global group ID. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
Add a static inline to generic netlink to wrap netlink_set_err() to make it easier to use here - use it in openvswitch (the only generic netlink user of netlink_set_err()). Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
There's no reason to have the family pointer there since it can just be passed internally where needed, so remove it. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
There are no users of this API remaining, and we'll soon change group registration to be static (like ops are now). Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
The quota code is abusing the genetlink API and is using its family ID as the multicast group ID, which is invalid and may belong to somebody else (and likely will.) Make the quota code use the correct API, but since this is already used as-is by userspace, reserve a family ID for this code and also reserve that group ID to not break userspace assumptions. Acked-by: Jan Kara <jack@suse.cz> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Johannes Berg
As suggested by David Miller, make genl_register_family_with_ops() a macro and pass only the array, evaluating ARRAY_SIZE() in the macro; this is a little safer. The openvswitch code has some indirection, assigning ops/n_ops directly in that code. This might ultimately just assign the pointers in the family initializations, saving the struct genl_family_and_ops and code (once mcast groups are handled differently.) Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Jens Axboe
If disk stats are enabled on the queue, a request needs to be marked with REQ_IO_STAT for accounting to be active on that request. This fixes an issue with virtio-blk not showing up in /proc/diskstats after the conversion to blk-mq. Add QUEUE_FLAG_MQ_DEFAULT, setting stats and same cpu-group completion on by default. Reported-by: Dave Chinner <david@fromorbit.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 19 November 2013, 1 commit
-
-
Committed by Aurelien Jarno
linux/raid/md_p.h uses conditionals depending on endianness and fails with an error if none of __BIG_ENDIAN, __LITTLE_ENDIAN or __BYTE_ORDER are defined, but it doesn't include any header which can define these constants. This makes the header unusable on its own. This patch adds a #include <asm/byteorder.h> at the beginning of the header to make it usable alone. This is needed to compile klibc on MIPS. Signed-off-by: Aurelien Jarno <aurelien@aurel32.net> Signed-off-by: NeilBrown <neilb@suse.de>
-
- 18 November 2013, 2 commits
-
-
Committed by Michel Dänzer
This is required to properly calculate the tiling parameters in userspace. Signed-off-by: Michel Dänzer <michel.daenzer@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-
Committed by Thomas Hellstrom
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
-