- 04 Apr 2017, 2 commits

By Alexey Dobriyan
A hash size can't be negative, so "unsigned int" is logically correct. Propagate "unsigned int" to the loop counters as well.

Space savings:

    add/remove: 0/0 grow/shrink: 2/2 up/down: 6/-18 (-12)
    function                     old     new   delta
    flow_cache_flush_tasklet     362     365      +3
    __flow_cache_shrink          333     336      +3
    flow_cache_cpu_up_prep       178     171      -7
    flow_cache_lookup           1159    1148     -11

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
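
A minimal sketch of the kind of change described, with hypothetical names rather than the actual flow-cache code:

    struct flow_cache {
            unsigned int hash_size;        /* a size is never negative */
    };

    static void flow_table_walk(struct flow_cache *fc)
    {
            unsigned int i;                /* propagate unsigned to counters */

            for (i = 0; i < fc->hash_size; i++)
                    ;                      /* visit bucket i */
    }

Matching the counter type to the unsigned quantity it counts avoids sign-extension instructions on 64-bit targets, which is where the few saved bytes come from.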

By Alexey Dobriyan
Flow keys aren't 4GB+ numbers, so 64-bit arithmetic is excessive.

Space savings (I'm not sure what CSWTCH is):

    add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-48 (-48)
    function              old     new   delta
    flow_cache_lookup    1163    1159      -4
    CSWTCH              75997   75953     -44

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 23 Nov 2016, 1 commit

By Miroslav Urbanek
The threshold for OOM protection is too small for systems with a large number of CPUs. Applications report ENOBUFs on connect() every 10 minutes.

The problem is that the variable net->xfrm.flow_cache_gc_count is a global counter while the variable fc->high_watermark is a per-CPU constant. Take the number of CPUs into account as well.

Fixes: 6ad3122a ("flowcache: Avoid OOM condition under preasure")
Reported-by: Lukáš Koldrt <lk@excello.cz>
Tested-by: Jan Hejl <jh@excello.cz>
Signed-off-by: Miroslav Urbanek <mu@miroslavurbanek.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
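
A hedged sketch of the check described, with approximate names; the point is that the global backlog counter is compared against the per-CPU watermark scaled by the number of online CPUs:

    #include <linux/atomic.h>
    #include <linux/cpumask.h>

    static bool flow_cache_may_alloc(atomic_t *gc_count,
                                     unsigned int high_watermark)
    {
            /* high_watermark is per-CPU; scale it before comparing
             * against the single, netns-global gc backlog counter */
            return atomic_read(gc_count) <=
                   2 * num_online_cpus() * high_watermark;
    }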

- 10 Nov 2016, 1 commit

By Sebastian Andrzej Siewior
Install the callbacks via the state machine. Use the multi-state support to avoid custom list handling for the multiple instances.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: netdev@vger.kernel.org
Cc: rt@linutronix.de
Cc: "David S. Miller" <davem@davemloft.net>
Link: http://lkml.kernel.org/r/20161103145021.28528-10-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
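
A hedged sketch of the multi-instance hotplug pattern (the cpuhp_*_multi API is real; the callback bodies here are illustrative):

    static int flow_cache_cpu_up_prep(unsigned int cpu, struct hlist_node *node)
    {
            /* allocate this cache instance's hash table for @cpu */
            return 0;
    }

    static int flow_cache_cpu_dead(unsigned int cpu, struct hlist_node *node)
    {
            /* move @cpu's entries onto this instance's gc list */
            return 0;
    }

    static int __init flow_cache_hp_init(void)
    {
            /* registered once; each flow cache later adds itself with
             * cpuhp_state_add_instance(), so no hand-rolled instance list */
            return cpuhp_setup_state_multi(CPUHP_NET_FLOW_PREPARE,
                                           "net/flow:prepare",
                                           flow_cache_cpu_up_prep,
                                           flow_cache_cpu_dead);
    }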

- 17 Mar 2016, 1 commit

By Steffen Klassert
We can hit an OOM condition if we are under pressure because we cannot free the entries in the gc_list fast enough. So add a counter for the not yet freed entries in the gc_list and refuse new allocations if the value is too high.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
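
A hedged sketch of the accounting described (member and helper names are illustrative):

    /* entry moves to the gc list: the backlog grows */
    static void flow_entry_defer_free(struct netns_xfrm *x,
                                      struct flow_cache_entry *fle)
    {
            atomic_inc(&x->flow_cache_gc_count);
            /* list_add_tail(&fle->u.gc_list, ...); schedule the gc work */
    }

    /* the gc work actually freed it: the backlog shrinks */
    static void flow_entry_free(struct netns_xfrm *x,
                                struct flow_cache_entry *fle)
    {
            atomic_dec(&x->flow_cache_gc_count);
            kmem_cache_free(flow_cachep, fle);
    }

New allocations check the counter first and fail instead of letting the backlog grow without bound.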

- 02 Sep 2015, 2 commits

By David S. Miller
These cannot live in net/core/flow.c, which only builds when XFRM is enabled.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Tom Herbert
Create __get_hash_from_flowi6 and __get_hash_from_flowi4 to get the flow keys and hash based on flowi structures. These are called by __skb_get_hash_flowi6 and __skb_get_hash_flowi4. Also create get_hash_from_flowi6 and get_hash_from_flowi4, which can be called when just the hash value for a flowi is needed.

Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
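
A hedged usage sketch, assuming the signatures follow the description above (a flowi6 in, a 32-bit hash out):

    #include <net/flow.h>

    static u32 hash_of_flow(const struct flowi6 *fl6)
    {
            /* convenience wrapper: extracts the flow keys and hashes them */
            return get_hash_from_flowi6(fl6);
    }

The double-underscore variants additionally hand back the extracted struct flow_keys for callers that need more than the hash.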

- 06 Feb 2015, 1 commit

By Miroslav Urbanek
flow_cache_flush_task references the structure member flow_cache_gc_work where it should reference flow_cache_flush_work instead. A kernel panic occurs on kernels using IPsec during XFRM garbage collection. The garbage collection interval can be shortened using the following sysctl settings:

    net.ipv4.xfrm4_gc_thresh=4
    net.ipv6.xfrm6_gc_thresh=4

With the default settings, our production servers crash approximately once a week. With the settings above, they crash immediately.

Fixes: ca925cf1 ("flowcache: Make flow cache name space aware")
Reported-by: Tomáš Charvát <tc@excello.cz>
Tested-by: Jan Hejl <jh@excello.cz>
Signed-off-by: Miroslav Urbanek <mu@miroslavurbanek.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
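
A hedged sketch of the bug class: a work handler must recover its container through the member it was actually queued on, or container_of() yields a mis-aligned pointer. Member names follow the commit text; the surrounding code is approximate:

    static void flow_cache_flush_task(struct work_struct *work)
    {
            /* queued via flow_cache_flush_work, so name that member here;
             * naming flow_cache_gc_work instead computes a wrong address */
            struct netns_xfrm *xfrm = container_of(work, struct netns_xfrm,
                                                   flow_cache_flush_work);
            struct net *net = container_of(xfrm, struct net, xfrm);

            flow_cache_flush(net);
    }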

- 20 Mar 2014, 1 commit

By Srivatsa S. Bhat
Subsystems that want to register CPU hotplug callbacks, as well as perform initialization for the CPUs that are already online, often do it as shown below:

    get_online_cpus();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    register_cpu_notifier(&foobar_cpu_notifier);

    put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently with CPU hotplug operations).

Instead, the correct and race-free way of performing the callback registration is:

    cpu_notifier_register_begin();

    for_each_online_cpu(cpu)
            init_cpu(cpu);

    /* Note the use of the double underscored version of the API */
    __register_cpu_notifier(&foobar_cpu_notifier);

    cpu_notifier_register_done();

Fix the code in net/core/flow.c by using this latter form of callback registration.

Cc: Li RongQing <roy.qing.li@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Ingo Molnar <mingo@kernel.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>

- 13 Mar 2014, 1 commit

By Steffen Klassert
We leak an active timer, the hotcpu notifier and all allocated resources when we exit a namespace. Fix this by introducing a flow_cache_fini() function where we release the resources before we exit.

Fixes: ca925cf1 ("flowcache: Make flow cache name space aware")
Reported-by: Jakub Kicinski <moorray3@wp.pl>
Tested-by: Jakub Kicinski <moorray3@wp.pl>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Fan Du <fan.du@windriver.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
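
A hedged sketch of the teardown pairing described (member names are illustrative; the real function would also free the per-cpu hash tables):

    void flow_cache_fini(struct net *net)
    {
            struct flow_cache *fc = &net->xfrm.flow_cache_global;

            del_timer_sync(&fc->rnd_timer);   /* stop the rehash timer */
            unregister_hotcpu_notifier(&fc->hotcpu_notifier);
            /* free each CPU's hash table here */
    }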

- 11 Mar 2014, 1 commit

By Eric Dumazet
It is not legal to create multiple kmem_caches having the same name. The flowcache can use a single kmem_cache; there is no need for a per-netns one.

Fixes: ca925cf1 ("flowcache: Make flow cache name space aware")
Reported-by: Jakub Kicinski <moorray3@wp.pl>
Tested-by: Jakub Kicinski <moorray3@wp.pl>
Tested-by: Fan Du <fan.du@windriver.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
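
A hedged sketch: one module-global cache, created once and shared by every namespace (flag choice is illustrative):

    static struct kmem_cache *flow_cachep __read_mostly;

    static int __init flow_cache_init_global(void)
    {
            /* a single named cache; creating one per netns would collide */
            flow_cachep = kmem_cache_create("flow_cache",
                                            sizeof(struct flow_cache_entry),
                                            0, SLAB_PANIC, NULL);
            return 0;
    }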

- 12 Feb 2014, 1 commit

By Fan Du
Inserting an entry into the flowcache, or flushing the flowcache, should be done on a per-net scope. The reason: in the original implementation, a flush triggered from a fat netns crammed with flow entries would also wipe out a slim netns holding only a few flow cache entries.

Since the flowcache is tightly coupled with IPsec, it is easier to put the flow cache's global parameters into the xfrm namespace part. One last thing to do is bump the flow cache genid and flush the flow cache in a per-net style as well.

Signed-off-by: Fan Du <fan.du@windriver.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>

- 15 Jul 2013, 1 commit

By Paul Gortmaker
The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes.

After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h.

This removes all the net/* uses of the __cpuinit macros from all C files.

[1] https://lkml.org/lkml/2013/5/20/589

Cc: "David S. Miller" <davem@davemloft.net>
Cc: netdev@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>

- 30 Mar 2013, 2 commits

By Li RongQing
Replace per_cpu with per_cpu_ptr to save a conversion between address and pointer.

Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Li RongQing
flush_tasklet is not a percpu variable, but percpu is, and

    this_cpu_ptr(&info->cache->percpu->flush_tasklet)

is not equal to

    &this_cpu_ptr(info->cache->percpu)->flush_tasklet

Commit 1f743b07 ("use this_cpu_ptr per-cpu helper") introduced this bug.

Signed-off-by: Li RongQing <roy.qing.li@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
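
A hedged sketch of the corrected idiom, per the message above: resolve the per-cpu base pointer first, then take the member's address:

    struct flow_cache_percpu *fcp;

    /* percpu is the __percpu pointer; the member is plain memory inside it */
    fcp = this_cpu_ptr(info->cache->percpu);
    tasklet_schedule(&fcp->flush_tasklet);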

- 21 Mar 2013, 1 commit

By Chris Metcalf
Previously, if you did an "ifconfig down" or similar on one core, and the kernel had CONFIG_XFRM enabled, every core would be interrupted to check its percpu flow list for items that could be garbage collected. With this change, we generate a mask of cores that actually have any percpu items, and only interrupt those cores. When we are trying to isolate a set of cpus from interrupts, this is important to do.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
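
A hedged sketch of the masking step (the cpumask API is real; the per-cpu hash_count member follows the flow-cache naming used elsewhere in this log):

    cpumask_var_t mask;
    int cpu;

    if (!alloc_cpumask_var(&mask, GFP_KERNEL))
            return;
    cpumask_clear(mask);

    /* only CPUs that actually hold entries need the flush interrupt */
    for_each_online_cpu(cpu)
            if (per_cpu_ptr(fc->percpu, cpu)->hash_count > 0)
                    cpumask_set_cpu(cpu, mask);

    /* kick the flush tasklet on the CPUs in @mask, then wait */
    free_cpumask_var(mask);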

- 28 Feb 2013, 1 commit

By Sasha Levin
I'm not sure why, but the hlist for-each-entry iterators were conceived as

    list_for_each_entry(pos, head, member)

The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

Why did they need an extra pos parameter? I'm not quite sure. Not only do they not really need it, it also prevents the iterator from looking exactly like the list iterator, which is unfortunate.

Besides the semantic patch, there was some manual work required:

 - Fix up the actual hlist iterators in linux/list.h
 - Fix up the declaration of other iterators based on the hlist ones.
 - A very small number of places were using the 'node' parameter; this was modified to use 'obj->member' instead.
 - Coccinelle didn't handle the hlist_for_each_entry_safe iterator properly, so those had to be fixed up manually.

The semantic patch, which is mostly the work of Peter Senna Tschudin, is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue,
    hlist_for_each_entry_from, hlist_for_each_entry_rcu,
    hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh,
    for_each_busy_worker, ax25_uid_for_each, ax25_for_each,
    inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each,
    sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound,
    hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu,
    nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each,
    nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp,
    for_each_host;
    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@
    -T b;
    <+... when != b
    (
    hlist_for_each_entry(a,
    - b,
    c, d) S
    |
    hlist_for_each_entry_continue(a,
    - b,
    c) S
    |
    hlist_for_each_entry_from(a,
    - b,
    c) S
    |
    hlist_for_each_entry_rcu(a,
    - b,
    c, d) S
    |
    hlist_for_each_entry_rcu_bh(a,
    - b,
    c, d) S
    |
    hlist_for_each_entry_continue_rcu_bh(a,
    - b,
    c) S
    |
    for_each_busy_worker(a, c,
    - b,
    d) S
    |
    ax25_uid_for_each(a,
    - b,
    c) S
    |
    ax25_for_each(a,
    - b,
    c) S
    |
    inet_bind_bucket_for_each(a,
    - b,
    c) S
    |
    sctp_for_each_hentry(a,
    - b,
    c) S
    |
    sk_for_each(a,
    - b,
    c) S
    |
    sk_for_each_rcu(a,
    - b,
    c) S
    |
    sk_for_each_from
    -(a, b)
    +(a)
    S
    + sk_for_each_from(a) S
    |
    sk_for_each_safe(a,
    - b,
    c, d) S
    |
    sk_for_each_bound(a,
    - b,
    c) S
    |
    hlist_for_each_entry_safe(a,
    - b,
    c, d, e) S
    |
    hlist_for_each_entry_continue_rcu(a,
    - b,
    c) S
    |
    nr_neigh_for_each(a,
    - b,
    c) S
    |
    nr_neigh_for_each_safe(a,
    - b,
    c, d) S
    |
    nr_node_for_each(a,
    - b,
    c) S
    |
    nr_node_for_each_safe(a,
    - b,
    c, d) S
    |
    - for_each_gfn_sp(a, c, d, b) S
    + for_each_gfn_sp(a, c, d) S
    |
    - for_each_gfn_indirect_valid_sp(a, c, d, b) S
    + for_each_gfn_indirect_valid_sp(a, c, d) S
    |
    for_each_host(a,
    - b,
    c) S
    |
    for_each_host_safe(a,
    - b,
    c, d) S
    |
    for_each_mesh_entry(a,
    - b,
    c, d) S
    )
    ...+>

[akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
[akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
[akpm@linux-foundation.org: checkpatch fixes]
[akpm@linux-foundation.org: fix warnings]
[akpm@linux-foudnation.org: redo intrusive kvm changes]
Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 23 Jan 2013, 1 commit

By YOSHIFUJI Hideaki / 吉藤英明
Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 20 Nov 2012, 1 commit

By Shan Wei
flush_tasklet is a struct, not a pointer, within the percpu var, so use this_cpu_ptr to get a pointer to the member.

Signed-off-by: Shan Wei <davidshan@tencent.com>
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 22 Dec 2011, 1 commit

By Steffen Klassert
flow_cache_flush() might sleep but can be called from atomic context via the xfrm garbage collector. So add a flow_cache_flush_deferred() function and use this if the xfrm garbage collector is invoked from within the packet path.

Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Acked-by: Timo Teräs <timo.teras@iki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>
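
A hedged sketch of the deferral: push the sleeping flush into a workqueue so atomic-context callers stay safe (work item name is illustrative):

    static void flow_cache_flush_task(struct work_struct *work)
    {
            flow_cache_flush();        /* may sleep; fine in a worker */
    }

    static DECLARE_WORK(flow_cache_flush_work, flow_cache_flush_task);

    void flow_cache_flush_deferred(void)
    {
            schedule_work(&flow_cache_flush_work);
    }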

- 18 Oct 2011, 1 commit

By Huajun Li
A failure while preparing the net flow caches may cause a memory leak; fix it.

Signed-off-by: Huajun Li <huajun.li.lee@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 17 Sep 2011, 1 commit

By dpward
With the conversion of struct flowi to a union of AF-specific structs, some operations on the flow cache need to account for the exact size of the key.

Signed-off-by: David Ward <david.ward@ll.mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>
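
A hedged sketch of per-family key sizing (the flow cache compares keys in words of a flow_compare_t; the names here follow that convention but are from memory, not the patch):

    static inline size_t flow_key_size(u16 family)
    {
            switch (family) {
            case AF_INET:
                    return sizeof(struct flowi4) / sizeof(flow_compare_t);
            case AF_INET6:
                    return sizeof(struct flowi6) / sizeof(flow_compare_t);
            }
            return 0;   /* unknown family: compare nothing */
    }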

- 16 Sep 2011, 1 commit

By dpward
flow_cache_lookup will return a cached object (or null pointer) that the resolver (i.e. xfrm_policy_lookup) previously found for another namespace using the same key/family/dir. Instead, make the namespace part of what identifies entries in the cache. As before, flow_entry_valid will return 0 for entries where the namespace has been deleted, and they will be removed from the cache the next time flow_cache_gc_task is run.

Reported-by: Andrew Dickinson <whydna@whydna.net>
Signed-off-by: David Ward <david.ward@ll.mit.edu>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 27 Jul 2011, 1 commit

By Arun Sharma
This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>.

Signed-off-by: Arun Sharma <asharma@fb.com>
Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: David Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 23 Feb 2011, 1 commit

By David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>

- 24 Sep 2010, 1 commit

By Eric Dumazet
Change "return (EXPR);" to "return EXPR;" return is not a function, parentheses are not required. Signed-off-by: NEric Dumazet <eric.dumazet@gmail.com> Signed-off-by: NDavid S. Miller <davem@davemloft.net>

- 14 Sep 2010, 1 commit

By Eric Dumazet
 - Allocate hash tables for every online CPU, not every possible one.
 - NUMA-aware allocations.
 - Don't use a full page on arches where PAGE_SIZE > 1024*sizeof(void *).
 - misc: __percpu, __read_mostly, __cpuinit annotations
 - flow_compare_t is just an "unsigned long"

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
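
A hedged sketch of the NUMA-aware, online-only allocation (size math is illustrative):

    static int flow_cache_cpu_prepare(struct flow_cache *fc, int cpu)
    {
            struct flow_cache_percpu *fcp = per_cpu_ptr(fc->percpu, cpu);
            size_t sz = sizeof(struct hlist_head) * flow_cache_hash_size(fc);

            /* place each CPU's table on that CPU's memory node */
            fcp->hash_table = kzalloc_node(sz, GFP_KERNEL, cpu_to_node(cpu));
            return fcp->hash_table ? 0 : -ENOMEM;
    }

Allocating per online CPU, with a hotplug callback for later arrivals, avoids wasting tables on possible CPUs that never come up.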

- 13 Jul 2010, 1 commit

By Eric Dumazet
CodingStyle cleanups: EXPORT_SYMBOL should immediately follow the symbol declaration.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 29 Jun 2010, 1 commit

By Eric Dumazet
Use this_cpu_ptr(p) instead of per_cpu_ptr(p, smp_processor_id()).

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 07 Apr 2010, 2 commits

By Timo Teräs
Speed up lookups by freeing flow cache entries later. After virtualizing the flow cache entry operations, the flow cache may now end up calling a policy or bundle destructor, which can be slowish. As gc_list is more effective with a doubly linked list, the flow cache is converted to use the common hlist and list macros where appropriate.

Signed-off-by: Timo Teras <timo.teras@iki.fi>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Timo Teräs
This allows validating the cached object before returning it. It also allows destructing the object properly, if the last reference was held in the flow cache. This is also a preparation for caching bundles in the flow cache.

In return for virtualizing the methods, we save on:
 - not having to regenerate the whole flow cache on policy removal: each flow matching a killed policy gets refreshed as the getter function notices it smartly.
 - not having to call flow_cache_flush from policy gc, since the flow cache now properly deletes the object if it had any references.

Signed-off-by: Timo Teras <timo.teras@iki.fi>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
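
A hedged sketch of the virtualized operations, modeled on the flow_cache_object/flow_cache_ops pairing this work introduced (exact fields may differ):

    struct flow_cache_object {
            const struct flow_cache_ops *ops;
    };

    struct flow_cache_ops {
            struct flow_cache_object *(*get)(struct flow_cache_object *obj);
            int (*check)(struct flow_cache_object *obj);
            void (*delete)(struct flow_cache_object *obj);
    };

A lookup calls ->check() to validate a cached object before handing it out, and ->delete() when dropping the last reference, which is what lets policy gc skip the full cache flush.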

- 02 Apr 2010, 1 commit

By Timo Teräs
Group all per-cpu data into one structure instead of having many globals. Also prepare the internals so that we can have multiple instances of the flow cache if needed.

Only the kmem_cache is left as a global, as all flow caches share the same element size and benefit from using a common cache.

Signed-off-by: Timo Teras <timo.teras@iki.fi>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
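
A hedged sketch of the grouping, using the member names the flow cache ended up with (the exact layout is from memory, not the patch):

    struct flow_cache_percpu {
            struct hlist_head       *hash_table;
            int                     hash_count;
            u32                     hash_rnd;
            int                     hash_rnd_recalc;
            struct tasklet_struct   flush_tasklet;
    };

    struct flow_cache {
            u32                                hash_shift;
            struct flow_cache_percpu __percpu  *percpu;
            struct timer_list                  rnd_timer;
    };

One pointer per CPU replaces a handful of independent per-cpu globals, and a second flow_cache instance becomes possible because all state hangs off the struct.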

- 26 Nov 2008, 1 commit

By Alexey Dobriyan
Pass netns to xfrm_lookup()/__xfrm_lookup(). For that, pass netns to flow_cache_lookup() and the resolver callback. Take it from the socket or netdevice. Stub DECnet to init_net.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 07 Nov 2008, 1 commit

By Alexey Dobriyan
It's called from __init code only. And __devinit in generic networking code is pretty strange :^)

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 26 Jun 2008, 1 commit

By Jens Axboe
It's never used, and the comments refer to nonatomic and retry interchangeably. So get rid of it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

- 19 Apr 2008, 1 commit

By Matthew Wilcox
None of these files use any of the functionality promised by asm/semaphore.h. It's possible that they rely on it dragging in some unrelated header file, but I can't build all these files, so we'll have to fix any build failures as they come up.

Signed-off-by: Matthew Wilcox <willy@linux.intel.com>

- 08 Feb 2008, 2 commits

By Eric Dumazet
1) We can shrink sizeof(struct flow_cache_entry) by 8 bytes on 64-bit arches.

2) No need to align these structures to hardware cache lines; this only wastes RAM for very little gain.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

By Eric Dumazet
We use a percpu variable named flow_hash_info, which holds 12 bytes. It is currently marked as ____cacheline_aligned, which makes the linker skip space to properly align this variable.

Before:

    c065cc90 D per_cpu__softnet_data
    c065cd00 d per_cpu__flow_tables
        <here, a hole of 124 bytes>
    c065cd80 d per_cpu__flow_hash_info
        <here, a hole of 116 bytes>
    c065ce00 d per_cpu__flow_flush_tasklets
    c065ce14 d per_cpu__rt_cache_stat

This alignment is quite unproductive; removing it reduces the size of percpu data (by 240 bytes on my x86 machine) and improves performance (flow_tables and flow_hash_info can share a single cache line).

After patch:

    c065cc04 D per_cpu__softnet_data
    c065cc4c d per_cpu__flow_tables
    c065cc50 d per_cpu__flow_hash_info
    c065cc5c d per_cpu__flow_flush_tasklets
    c065cc70 d per_cpu__rt_cache_stat

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

- 29 Jan 2008, 1 commit

By Pavel Emelyanov
Lots of code in the kernel initializes timer->function and timer->data together with calling init_timer(timer). There is already a helper for this. Use it for networking code.

The patch is HUGE, but makes the code 130 lines shorter (98 insertions(+), 228 deletions(-)).

Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
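
The helper in question is setup_timer(), part of the timer API of that era; a hedged before/after sketch with an illustrative timer name:

    /* before: three statements per timer */
    init_timer(&flow_hash_rnd_timer);
    flow_hash_rnd_timer.function = flow_cache_new_hashrnd;
    flow_hash_rnd_timer.data = 0;

    /* after: one helper call */
    setup_timer(&flow_hash_rnd_timer, flow_cache_new_hashrnd, 0);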

- 26 Jan 2008, 1 commit

By Gautham R Shenoy
Replace all lock_cpu_hotplug/unlock_cpu_hotplug in the kernel and use get_online_cpus and put_online_cpus instead, as it highlights the refcount semantics in these operations.

The new API guarantees protection against the cpu-hotplug operation, but it doesn't guarantee serialized access to any of the local data structures. Hence the changes need to be reviewed.

In case of pseries_add_processor/pseries_remove_processor, use cpu_maps_update_begin()/cpu_maps_update_done(), as we're modifying the cpu_present_map there.

Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
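
A hedged usage sketch of the renamed pair: hold the hotplug read side while iterating online CPUs (the loop body is hypothetical):

    int cpu;

    get_online_cpus();              /* CPUs can't come or go in here */
    for_each_online_cpu(cpu)
            flush_one_cpu(cpu);     /* hypothetical per-cpu action */
    put_online_cpus();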