- 17 Jul 2017, 2 commits
-
-
By Biju Das
Add a new compatible string for the RZ/G1M (R8A7743) SoC. Signed-off-by: Biju Das <biju.das@bp.renesas.com> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Simon Horman <horms+renesas@verge.net.au> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Alvaro G. M
Keep supporting the proprietary "xlnx,phy-type" attribute and add support for MII connectivity to the PHY. Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Alvaro Gamez Machado <alvaro.gamez@hazent.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 12 Jul 2017, 1 commit
-
-
By Ahmad Fatoum
Passing (void *)val instead of &val would make a pointer out of an integer and cause sock_setsockopt() to return -EFAULT. See tools/testing/selftests/networking/timestamping/timestamping.c for a working example. Cc: David S. Miller <davem@davemloft.net> Cc: netdev@vger.kernel.org Signed-off-by: Ahmad Fatoum <ahmad@a3f.at> Signed-off-by: David S. Miller <davem@davemloft.net>
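As a minimal illustration of the corrected usage (a hedged sketch mirroring the selftest named above rather than quoting it; the helper name here is made up):

#include <stdio.h>
#include <sys/socket.h>
#include <linux/net_tstamp.h>

/* Hypothetical helper: enable software TX timestamps on an open socket. */
static int enable_tx_timestamps(int sock)
{
        int val = SOF_TIMESTAMPING_TX_SOFTWARE | SOF_TIMESTAMPING_SOFTWARE;

        /* Passing (void *)val here would hand the kernel a bogus pointer and
         * make sock_setsockopt() return -EFAULT; &val is the correct form. */
        if (setsockopt(sock, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val)) < 0) {
                perror("setsockopt(SO_TIMESTAMPING)");
                return -1;
        }
        return 0;
}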
-
- 11 Jul 2017, 4 commits
-
-
By Michal Hocko
It seems that there are still people using 32b kernels with a lot of memory, and the IO tends to suck a lot for them by default, mostly because writers are throttled too when the lowmem is used. We have highmem_is_dirtyable to work around that issue, but it seems we never bothered to document it. Let's do it now, finally. Link: http://lkml.kernel.org/r/20170626093200.18958-1-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Alkis Georgopoulos <alkisg@gmail.com> Cc: Mel Gorman <mgorman@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Michal Hocko
The movable_node kernel parameter allows hotpluggable NUMA nodes to put all their hotpluggable memory into the movable zone, which allows more or less reliable memory hotremove. At least this is the case for the NUMA nodes present during boot (see find_zone_movable_pfns_for_nodes). This is not the case for memory hotplug, though. echo online > /sys/devices/system/memory/memoryXYZ/state will default to a kernel zone (usually ZONE_NORMAL) unless the particular memblock is already in the movable zone range, which is not normally the case when onlining the memory from the udev rule context for a freshly hotadded NUMA node. The only option currently is to have a special udev rule to echo online_movable to all memblocks belonging to such a node, which is rather clumsy. Not to mention this is inconsistent as well, because what ended up in the movable zone during boot will end up in a kernel zone after hotremove & hotadd without special care. It would be nice to reuse memblock_is_hotpluggable, but the runtime hotplug doesn't have that information available because the boot and hotplug paths are not shared, and it would be really non-trivial to make them use the same code path because the runtime hotplug doesn't play with the memblock allocator at all. Teach move_pfn_range that MMOP_ONLINE_KEEP can use the movable zone if movable_node is enabled and the range doesn't overlap with the existing normal zone. This should provide a reasonable default onlining strategy. Strictly speaking the semantic is not identical with the boot time initialization, because find_zone_movable_pfns_for_nodes covers only the hotpluggable range as described by the BIOS/FW. From my experience this is usually a full node though (except for Node0, which is special and never goes away completely). If this turns out to be a problem in real life we can tweak the code to store a hotplug flag into memblocks, but let's keep this simple for now. Link: http://lkml.kernel.org/r/20170612111227.GI7476@dhcp22.suse.cz Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Reza Arbab <arbab@linux.vnet.ibm.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com> Cc: <qiuxishi@huawei.com> Cc: Kani Toshimitsu <toshi.kani@hpe.com> Cc: <slaoub@gmail.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Rientjes <rientjes@google.com> Cc: Daniel Kiper <daniel.kiper@oracle.com> Cc: Igor Mammedov <imammedo@redhat.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By David Rientjes
By default, vmpressure events are not pass-through, i.e. they propagate up through the memcg hierarchy until an event notifier is found for any threshold level. This presents a difficulty when a thread waiting on a read(2) for a vmpressure event cannot distinguish between local memory pressure and memory pressure in a descendant memcg, especially when that thread may not control the memcg hierarchy. Consider a user-controlled child memcg with a smaller limit than a top-level memcg controlled by the "Activity Manager" specified in Documentation/cgroup-v1/memory.txt. It may register for memory pressure notification for descendant memcgs to make a policy decision: oom kill a low priority job, increase the limit, decrease other limits, etc. If it registers for memory pressure notification on the top-level memcg, it currently cannot distinguish between memory pressure in its own memcg or a descendant memcg, which is user-controlled. Conversely, if a user registers for memory pressure notification on their own descendant memcg, the Activity Manager does not receive any pressure notification for that child memcg hierarchy. Vmpressure events are not received for ancestor memcgs if the memcg experiencing pressure has notifiers registered, perhaps outside the knowledge of the thread waiting on read(2) at the top level. Both of these are consequences of vmpressure notification not being pass-through. This implements a pass-through behavior for vmpressure events. When writing to cgroup.event_control, vmpressure event handlers may optionally specify a mode. There are two new modes: - "hierarchy": always propagate memory pressure events up the hierarchy regardless of whether descendant memcgs have their own notifiers registered, and - "local": only receive notifications when the memcg for which the event is registered experiences memory pressure. Of course, processes may register for one notification of "low,local", for example, and another for "low". If no mode is specified, the current behavior is maintained for backwards compatibility. See the change to Documentation/cgroup-v1/memory.txt for the full specification, and the sketch below for an example registration. [dan.carpenter@oracle.com: free the same pointer we allocated] Link: http://lkml.kernel.org/r/20170613191820.GA20003@elgon.mountain Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1705311421320.8946@chino.kir.corp.google.com Signed-off-by: David Rientjes <rientjes@google.com> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Minchan Kim <minchan@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Anton Vorontsov <anton@enomsg.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
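A hedged sketch of how a monitor could register for the new mode under cgroup v1 (the mount path is an assumption; the registration format follows Documentation/cgroup-v1/memory.txt):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
        char buf[64];
        uint64_t hits;
        int efd = eventfd(0, 0);
        int pfd = open("/sys/fs/cgroup/memory/memory.pressure_level", O_RDONLY);
        int cfd = open("/sys/fs/cgroup/memory/cgroup.event_control", O_WRONLY);

        if (efd < 0 || pfd < 0 || cfd < 0)
                return 1;

        /* "<event_fd> <pressure_level_fd> <level>[,<mode>]" */
        snprintf(buf, sizeof(buf), "%d %d low,hierarchy", efd, pfd);
        if (write(cfd, buf, strlen(buf)) < 0)
                return 1;

        /* Blocks until a "low" event fires anywhere in this memcg's subtree. */
        read(efd, &hits, sizeof(hits));
        return 0;
}

With "low,local" instead of "low,hierarchy", the same program would only wake for pressure in the memcg it registered on.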
-
By Linus Torvalds
Commit ac6424b9 ("sched/wait: Rename wait_queue_t => wait_queue_entry_t") had scripted the renaming incorrectly, and didn't actually check that 'wait_queue_t' was a full token. As a result, it also triggered on 'wait_queue_token', and renamed that to 'wait_queue_entry_token' in the autofs4 packet structure definition too. That was entirely incorrect, and not intended. The end result built fine when building just the kernel - because everything had been renamed consistently there - but caused problems in user space because the "struct autofs_packet_missing" type is exported as part of the uapi. This scripts it all back again: git grep -lw wait_queue_entry_token | xargs sed -i 's/wait_queue_entry_token/wait_queue_token/g' and checks the end result. Reported-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Ingo Molnar <mingo@kernel.org> Fixes: ac6424b9 ("sched/wait: Rename wait_queue_t => wait_queue_entry_t") Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
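For reference, the exported structure whose field name had to remain stable looks roughly like this (paraphrased from include/uapi/linux/auto_fs.h, so treat the exact members as approximate):

#include <limits.h>
#include <linux/auto_fs.h>

/* Approximate shape of the uapi type mentioned above; userspace expects the
 * member to be called wait_queue_token, which is why the scripted rename
 * broke existing automount daemons. */
struct autofs_packet_missing_example {
        struct autofs_packet_hdr hdr;
        autofs_wqt_t wait_queue_token;
        int len;
        char name[NAME_MAX + 1];
};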
-
- 09 Jul 2017, 1 commit
-
-
By Chao Yu
This patch adds support for plain user/group quota. Change note by Jaegeuk Kim: - Use the f2fs page cache for quota files in order to consider garbage collection; as a consequence, quota files cannot tolerate sudden power-cuts, so the user needs to run quotacheck. - setattr() calls dquot_transfer, which will transfer inode->i_blocks. We can't reclaim that during f2fs_evict_inode(), so we need to count node blocks as well in order to match i_blocks with dquot's space. Note that Chao wrote a patch to count inode->i_blocks without the inode block (f2fs: don't count inode block in in-memory inode.i_blocks). - In f2fs_remount, we need to make the filesystem RW prior to dquot_resume. - Handle the fault_injection case during f2fs_quota_off_umount. - TODO: project quota. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
- 08 Jul 2017, 2 commits
-
-
By Nicolas Dichtel
Those enum values don't exist anymore. Fixes: 7e13318d ("net: define gso types for IPx over IPv4 and IPv6") CC: Tom Herbert <tom@herbertland.com> Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Chao Yu
This patch adds a new sysfs interface; with it, we can control the number of reserved blocks in the system which cannot be used by the user. This lets the user adjust the over-provision ratio of f2fs dynamically instead of changing it at mkfs time, so we can expect it to help reserve more free space and relieve GC pressure in both the filesystem and the flash device. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
- 07 Jul 2017, 6 commits
-
-
By Michal Hocko
Commit 20b2f52b ("numa: add CONFIG_MOVABLE_NODE for movable-dedicated node") introduced CONFIG_MOVABLE_NODE without a good explanation of why it is actually useful. It makes a lot of sense to make the movable node semantic opt-in, but we already have that because the feature has to be explicitly enabled on the kernel command line. A config option on top only makes the configuration space larger without a good reason. It also adds additional ifdefery that pollutes the code. Just drop the config option and make it de-facto always enabled. This shouldn't introduce any change to the semantic. Link: http://lkml.kernel.org/r/20170529114141.536-3-mhocko@kernel.org Signed-off-by: Michal Hocko <mhocko@suse.com> Acked-by: Reza Arbab <arbab@linux.vnet.ibm.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Cc: Mel Gorman <mgorman@suse.de> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: Yasuaki Ishimatsu <yasu.isimatu@gmail.com> Cc: Xishi Qiu <qiuxishi@huawei.com> Cc: Kani Toshimitsu <toshi.kani@hpe.com> Cc: Chen Yucong <slaoub@gmail.com> Cc: Joonsoo Kim <js1304@gmail.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: David Rientjes <rientjes@google.com> Cc: Daniel Kiper <daniel.kiper@oracle.com> Cc: Igor Mammedov <imammedo@redhat.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Konstantin Khlebnikov
Show the count of oom killer invocations in /proc/vmstat and the count of processes killed in a memory cgroup in the knob "memory.events" (in memory.oom_control for v1 cgroups). Also describe the difference between "oom" and "oom_kill" in the memory cgroup documentation. Currently, oom in a memory cgroup kills tasks iff the shortage has happened inside a page fault. These counters help in monitoring oom kills - for now the only way is grepping for magic words in the kernel log. [akpm@linux-foundation.org: fix for mem_cgroup_count_vm_event() rename] [akpm@linux-foundation.org: fix comment, per Konstantin] Link: http://lkml.kernel.org/r/149570810989.203600.9492483715840752937.stgit@buzz Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Michal Hocko <mhocko@kernel.org> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: Roman Guschin <guroan@gmail.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Roman Gushchin
Track the following reclaim counters for every memory cgroup: PGREFILL, PGSCAN, PGSTEAL, PGACTIVATE, PGDEACTIVATE, PGLAZYFREE and PGLAZYFREED. These values are exposed using the memory.stat interface of cgroup v2. The meaning of each value is the same as for the global counters, available using /proc/vmstat. Also, for consistency, rename mem_cgroup_count_vm_event() to count_memcg_event_mm(). Link: http://lkml.kernel.org/r/1494530183-30808-1-git-send-email-guro@fb.com Signed-off-by: Roman Gushchin <guro@fb.com> Suggested-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Vladimir Davydov <vdavydov.dev@gmail.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Tejun Heo <tj@kernel.org> Cc: Li Zefan <lizefan@huawei.com> Cc: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Catalin Marinas
Kmemleak requires that vmalloc'ed objects have a minimum reference count of 2: one in the corresponding vm_struct object and the other owned by the vmalloc() caller. There are cases, however, where the original vmalloc() returned pointer is lost and, instead, a pointer to vm_struct is stored (see free_thread_stack()). Kmemleak currently reports such objects as leaks. This patch adds support for treating any surplus references to an object as additional references to a specified object. It introduces the kmemleak_vmalloc() API function, which takes a vm_struct pointer and sets its surplus reference passing to the actual vmalloc() returned pointer. The __vmalloc_node_range() calling site has been modified accordingly. Link: http://lkml.kernel.org/r/1495726937-23557-4-git-send-email-catalin.marinas@arm.com Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: "Luis R. Rodriguez" <mcgrof@kernel.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: "Luis R. Rodriguez" <mcgrof@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Andrea Arcangeli
Without a max deduplication limit for each KSM page, the list of the rmap_items associated with each stable_node can grow infinitely large. During the rmap walk each entry can take up to ~10usec to process because of IPIs for the TLB flushing (both for the primary MMU and the secondary MMUs with the MMU notifier). With only 16GB of address space shared in the same KSM page, that would amount to dozens of seconds of kernel runtime. A ~256 max deduplication factor will reduce the latencies of the rmap walks on KSM pages to the order of a few msec. Just doing the cond_resched() during the rmap walks is not enough; the list size must have a limit too, otherwise the caller could get blocked in (schedule friendly) kernel computations for seconds, unexpectedly. There's room for optimization to significantly reduce the IPI delivery cost during page_referenced(), but at least for page_migration in the KSM case (used by hard NUMA bindings, compaction and NUMA balancing) it may be inevitable to send lots of IPIs if each rmap_item->mm is active on a different CPU and there are lots of CPUs. Even if we ignore the IPI delivery cost, we still have to walk the whole KSM rmap list, so we can't allow millions or billions (unlimited) entries in the KSM stable_node rmap_item lists. The limit is enforced efficiently by adding a second dimension to the stable rbtree. So there are three types of stable_nodes: the regular ones (identical as before, living in the first flat dimension of the stable rbtree), the "chains" and the "dups". Every "chain" and all "dups" linked into a "chain" enforce the invariant that they represent the same write protected memory content, even if each "dup" will be pointed to by a different KSM page copy of that content. This way the stable rbtree lookup computational complexity is unaffected compared to an unlimited max_sharing_limit. It is still enforced that there cannot be KSM page content duplicates in the stable rbtree itself. Adding the second dimension to the stable rbtree only after the max_page_sharing limit hits provides for a zero memory footprint increase on 64bit archs. The memory overhead of the per-KSM page stable_tree and per virtual mapping rmap_item is unchanged. Only after the max_page_sharing limit hits do we need to allocate a stable_tree "chain" and rb_replace() the "regular" stable_node with the newly allocated stable_node "chain". After that we simply add the "regular" stable_node to the chain as a stable_node "dup" by linking hlist_dup in the stable_node_chain->hlist. This way the "regular" (flat) stable_node is converted to a stable_node "dup" living in the second dimension of the stable rbtree. During stable rbtree lookups the stable_node "chain" is identified as stable_node->rmap_hlist_len == STABLE_NODE_CHAIN (aka is_stable_node_chain()). When dropping stable_nodes, the stable_node "dup" is identified as stable_node->head == STABLE_NODE_DUP_HEAD (aka is_stable_node_dup()). STABLE_NODE_DUP_HEAD must be a unique valid pointer never used elsewhere in any stable_node->head/node to avoid clashes with the stable_node->node.rb_parent_color pointer, and different from &migrate_nodes. So the second field of &migrate_nodes is picked and verified as always safe with a BUILD_BUG_ON in case the list_head implementation changes in the future. The STABLE_NODE_DUP is picked as a random negative value in stable_node->rmap_hlist_len. rmap_hlist_len cannot become negative when it's a "regular" stable_node or a stable_node "dup".
The stable_node_chain->nid is irrelevant. The stable_node_chain->kpfn is aliased in a union with a time field used to rate limit the stable_node_chain->hlist prunes. The garbage collection of the stable_node_chain happens lazily during stable rbtree lookups (as for all other kinds of stable_nodes), or while disabling KSM with "echo 2 >/sys/kernel/mm/ksm/run" while collecting the entire stable rbtree. While the "regular" stable_nodes and the stable_node "dups" must wait for their underlying tree_page to be freed before they can be freed themselves, the stable_node "chains" can be freed immediately if the stable_node->hlist turns empty. This is because the "chains" are never pointed to by any page->mapping and they're effectively stable rbtree KSM self contained metadata. [akpm@linux-foundation.org: fix non-NUMA build] Signed-off-by: Andrea Arcangeli <aarcange@redhat.com> Tested-by: Petr Holasek <pholasek@redhat.com> Cc: Hugh Dickins <hughd@google.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Evgheni Dereveanchin <ederevea@redhat.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Gavin Guo <gavin.guo@canonical.com> Cc: Jay Vosburgh <jay.vosburgh@canonical.com> Cc: Mel Gorman <mgorman@techsingularity.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Kees Cook
Some hardened environments want to build kernels with slab_nomerge already set (so that they do not depend on remembering to set the kernel command line option). This is desired to reduce the risk of kernel heap overflows being able to overwrite objects from merged caches, and changes the requirements for cache layout control, increasing the difficulty of these attacks. By keeping caches unmerged, these kinds of exploits can usually only damage objects in the same cache (though the risk to metadata exploitation is unchanged). Link: http://lkml.kernel.org/r/20170620230911.GA25238@beast Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Daniel Micay <danielmicay@gmail.com> Cc: David Windsor <dave@nullcore.net> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Christoph Lameter <cl@linux.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@kernel.org> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Cc: Tejun Heo <tj@kernel.org> Cc: Daniel Mack <daniel@zonque.org> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Helge Deller <deller@gmx.de> Cc: Rik van Riel <riel@redhat.com> Cc: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 Jul 2017, 4 commits
-
-
By Jeff Layton
Let's try to make this extra clear for fs authors. Cc: Jan Kara <jack@suse.cz> Signed-off-by: Jeff Layton <jlayton@redhat.com>
-
By Keerthy
The LP87565 chip is a power management IC for Portable Navigation Systems and Tablet Computing devices. It contains the following components: - Configurable Bucks (single and multi-phase). - Configurable General Purpose Output Signals (GPO). The LP87565-Q1 variant device uses a two 2-phase output configuration: Buck0 is the master for the Buck0/1 output and Buck2 is the master for the Buck2/3 output. Signed-off-by: Keerthy <j-keerthy@ti.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Lee Jones <lee.jones@linaro.org>
-
By Olimpiu Dejeu
Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Jingoo Han <jingoohan1@gmail.com> Signed-off-by: Olimpiu Dejeu <olimpiu@arcticsand.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
-
By Charles Keepax
Link to the generic GPIO specifier bindings now that the second cell of the binding has some support in the driver. Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
-
- 05 Jul 2017, 3 commits
-
-
By Murali Karicheri
The data manual for DP83867IR/CR, SNLS484E [1], revised March 2017, advises that strapping the RX_DV/RX_CTRL pin in mode 1 and 2 is not supported (see the note below Table 5, "4-Level Strap Pins"). It further advises that if a board has this pin strapped in mode 1 or mode 2, then to ensure proper operation of the PHY, a software workaround must be implemented. Since it is not possible to detect in software whether the RX_DV/RX_CTRL pin is incorrectly strapped, add a device-tree property for the board to advertise this and allow corrective action in software. [1] http://www.ti.com/lit/ds/snls484e/snls484e.pdf Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> [nsekhar@ti.com: rebase to mainline, split documentation into separate patch] Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Marc Gonzalez
Binding for the Sigma Designs SMP8759 SoC. Signed-off-by: Marc Gonzalez <marc_gonzalez@sigmadesigns.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Rob Herring <robh@kernel.org>
-
By Amir Goldstein
The inodes index feature introduces a behavior change - on mount, the upper root origin file handle is verified to match the lower root dir. This implies that copied layers cannot be mounted with the inodes index feature enabled without explicitly removing the upper dir origin xattr and the index dir. The inodes index feature is required to support: - Prevent breaking hardlinks on copy up - NFS export support (upcoming) - Overlayfs snapshots (POC) Signed-off-by: Amir Goldstein <amir73il@gmail.com> Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
-
- 04 Jul 2017, 3 commits
-
-
By Chao Yu
Commit 73faec4d ("f2fs: add mount option to select fault injection ratio") and commit 08796897 ("f2fs: add fault injection to sysfs") forgot to document the mount option and sysfs file. This patch fixes that by documenting them. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
By Chao Yu
Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
-
By Dmitry Monakhov
Currently all integrity prep hooks are open-coded, and if prepare fails we ignore its error code and fail the bio with EIO. Let's return the real error to the upper layer, so that the caller may later react accordingly. In fact no one wants to use bio_integrity_prep() without bio_integrity_enabled, so it is reasonable to fold it into one function. Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> [hch: merged with the latest block tree, return bool from bio_integrity_prep] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
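A hedged sketch of the resulting calling convention (the function name and surrounding logic here are illustrative, not taken from the patch; only the bool return of bio_integrity_prep() is from the log above):

#include <linux/blkdev.h>

/* Illustrative caller: bio_integrity_prep() now also covers the old
 * bio_integrity_enabled() check; on failure it returns false and the real
 * error (rather than a blanket EIO) is already recorded on the bio, so the
 * submission path simply stops handling it. */
static void example_submit(struct bio *bio)
{
        if (!bio_integrity_prep(bio))
                return;

        generic_make_request(bio);
}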
-
- 03 Jul 2017, 10 commits
-
-
By Ryder Lee
Add documentation for the PCIe host driver available in MT7623 series SoCs. Signed-off-by: Ryder Lee <ryder.lee@mediatek.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Rob Herring <robh@kernel.org>
-
By Keiji Hayashibara
Add a watchdog driver for the Socionext UniPhier series SoCs. Note that the timeout value for this device must be a power of 2 because of the hardware specification. Signed-off-by: Keiji Hayashibara <hayashibara.keiji@socionext.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
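As a small hedged illustration of that power-of-two constraint (the helper name is made up; only the constraint itself comes from the log above):

#include <linux/log2.h>

/* Illustrative only: with hardware that accepts power-of-two timeouts, a
 * requested value would typically be rounded up before being programmed,
 * e.g. 10s becomes 16s. */
static unsigned int example_wdt_fit_timeout(unsigned int timeout_sec)
{
        return roundup_pow_of_two(timeout_sec);
}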
-
By Keiji Hayashibara
Add uniphier-wdt dt-bindings documentation. Signed-off-by: Keiji Hayashibara <hayashibara.keiji@socionext.com> Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
-
By Peter Feiner
Adds the plumbing to disable A/D bits in the MMU based on a new role bit, ad_disabled. When A/D is disabled, the MMU operates as though A/D aren't available (i.e., using access tracking faults instead). To avoid SP -> kvm_mmu_page.role.ad_disabled lookups all over the place, A/D disablement is now stored in the SPTE. This state is stored in the SPTE by tweaking the use of SPTE_SPECIAL_MASK for access tracking. Rather than just setting SPTE_SPECIAL_MASK when an access-tracking SPTE is non-present, we now always set SPTE_SPECIAL_MASK for access-tracking SPTEs. Signed-off-by: Peter Feiner <pfeiner@google.com> [Use role.ad_disabled even for direct (non-shadow) EPT page tables. Add documentation and a few MMU_WARN_ONs. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Matteo Croce
In the IPVLAN documentation there is an example command line where the master and slave interface names are inverted. Fix the command line and also add the optional `name' keyword to better describe what the command is doing. v2: added commit message. Signed-off-by: Matteo Croce <mcroce@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Steffen Trumtrar
Document the reset lines holding the watchdog core in reset. Signed-off-by: Steffen Trumtrar <s.trumtrar@pengutronix.de> Cc: Rob Herring <robh+dt@kernel.org> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-watchdog@vger.kernel.org Cc: devicetree@vger.kernel.org Acked-by: Rob Herring <robh@kernel.org> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
-
By Wolfram Sang
It is 'R-Car', not 'RCar'. No code or binding changes, only descriptive text. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Simon Horman <horms+renesas@verge.net.au>
-
By John Crispin
Add support for the IPQ4019 PCIe controller. IPQ4019 supports Gen 1/2, one lane, and one PCIe root complex with support for MSI and legacy interrupts, and it conforms to the PCI Express Base 2.1 specification. The core init is the same as for the MSM8996; however, the clocks and reset lines differ. [bhelgaas: fix qcom_pcie_get_resources_v3(), qcom_pcie_init_v3() compile issues] Signed-off-by: John Crispin <john@phrozen.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Stanimir Varbanov <svarbanov@mm-sol.com> Acked-by: Rob Herring <robh@kernel.org> # binding
-
By Quentin Schulz
Some boards might require a regulator to be controlled to power the PCIe port. Add support for an optional regulator defined in the Device Tree, linked in the PCIe controller node under `vpcie-supply`. If present, the regulator will be disabled and then enabled as part of the PCIe host initialization process, and will be disabled when shutting down. Signed-off-by: Quentin Schulz <quentin.schulz@free-electrons.com> [bhelgaas: use dev_err() instead of pr_err() in imx6_pcie_assert_core_reset()] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Rob Herring <robh@kernel.org> Acked-by: Richard Zhu <hongxing.zhu@nxp.com>
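A hedged sketch of the optional-regulator pattern this describes (the supply name "vpcie" matches the `vpcie-supply` property above; the function name and surrounding driver flow are illustrative only):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/regulator/consumer.h>

static int example_pcie_probe(struct device *dev)
{
        struct regulator *vpcie;
        int ret;

        /* Optional supply: absence is fine, other errors (e.g. probe defer)
         * must be propagated. */
        vpcie = devm_regulator_get_optional(dev, "vpcie");
        if (IS_ERR(vpcie)) {
                if (PTR_ERR(vpcie) != -ENODEV)
                        return PTR_ERR(vpcie);
                vpcie = NULL;   /* supply simply not wired up on this board */
        }

        if (vpcie) {
                ret = regulator_enable(vpcie);
                if (ret) {
                        dev_err(dev, "failed to enable vpcie regulator: %d\n", ret);
                        return ret;
                }
        }

        /* ... bring up the link; regulator_disable(vpcie) on shutdown ... */
        return 0;
}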
-
By Linus Walleij
The Faraday FTPCI100 controller has two clock ports, PCLK and PCICLK. Add bindings for these two clocks so we can assign them in the device tree. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Rob Herring <robh@kernel.org>
-
- 02 Jul 2017, 1 commit
-
-
By Takashi Sakamoto
In the PCM interface/protocol for userspace, the runtime parameters for a PCM substream are decided by an interaction between applications and the ALSA PCM core. In former commits, some tracepoints were added to probe part of this interaction. This commit adds documentation about the interaction and the tracepoints. Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp> Signed-off-by: Takashi Iwai <tiwai@suse.de>
-
- 01 Jul 2017, 1 commit
-
-
By Rafal Ozieblo
Signed-off-by: Rafal Ozieblo <rafalo@cadence.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 30 Jun 2017, 2 commits
-
-
By Frank Rowand
Add ABI documentation for /sys/firmware/fdt. Update the contact email for /sys/firmware/devicetree/* and add the mailing list. Signed-off-by: Frank Rowand <frank.rowand@sony.com> Acked-by: Grant Likely <grant.likely@arm.com> Signed-off-by: Rob Herring <robh@kernel.org>
-
By Palmer Dabbelt
RISC-V systems use device tree to specify the memory layout of the system. This patch reserves the "riscv" vendor prefix, which will be used for devices that are specified by the various RISC-V ISA specifications. Signed-off-by: Palmer Dabbelt <palmer@dabbelt.com> Signed-off-by: Rob Herring <robh@kernel.org>
-