- 21 October 2010, 12 commits
-
-
Committed by Borislav Petkov
Move the toplevel sysfs class to the stub and make it available to non-modularized code too. Add proper refcounting of its users and move the registration functionality into the reference-counting routines. Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
-
Committed by H. Peter Anvin
Checkin c957ef2c had inconsistent ordering of .data..percpu..page_aligned and .data..percpu..readmostly; the still-broken version affected x86-32 at least. The page-aligned section really must be page aligned... Signed-off-by: H. Peter Anvin <hpa@zytor.com> LKML-Reference: <1287544022.4571.7.camel@sli10-conroe.sh.intel.com> Cc: Shaohua Li <shaohua.li@intel.com> Cc: Eric Dumazet <eric.dumazet@gmail.com>
-
Committed by Eric Paris
Currently the roundup() macro references its arguments more than once. This patch changes it so it only uses each argument once. Suggested-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Eric Paris <eparis@redhat.com> Signed-off-by: James Morris <jmorris@namei.org>
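A single-evaluation version can be written with a GCC statement expression so each argument is computed only once; roughly along these lines (a sketch, not necessarily the exact macro that was merged):

```c
/* Sketch of a roundup() that evaluates its divisor argument only once,
 * using a statement expression; the merged macro may differ in detail. */
#define roundup(x, y) (					\
{							\
	const typeof(y) __y = y;			\
	(((x) + (__y - 1)) / __y) * __y;		\
}							\
)
```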
-
Committed by Eric Paris
The roundup() helper rounds a given value up to a multiple of another value: roundup(11, 7) gives 14 = 7 * 2. This new function does the opposite, rounding a given number down to the nearest multiple of the second number: rounddown(11, 7) gives 7. I need this in some future SELinux code and could carry the macro myself, but figured I would put it in the core kernel so others can find and use it if need be. Signed-off-by: Eric Paris <eparis@redhat.com> Signed-off-by: James Morris <jmorris@namei.org>
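A minimal sketch of such a rounddown() helper, written in the same single-evaluation style as roundup() above (the merged macro may differ in detail):

```c
/* Sketch: round x down to the nearest multiple of y.
 * Example: rounddown(11, 7) == 7. */
#define rounddown(x, y) (				\
{							\
	typeof(x) __x = (x);				\
	__x - (__x % (y));				\
}							\
)
```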
-
Committed by Eric Paris
The conntrack code can export the internal secid to userspace. These secids are dynamic, can change across LSM changes, and have no meaning in userspace; we should be sending LSM contexts to userspace instead. This patch sends the secctx (rather than the secid) to userspace over the netlink socket. We use a new field, CTA_SECCTX, and stop using the old CTA_SECMARK field, since it did not carry particularly useful information. Signed-off-by: Eric Paris <eparis@redhat.com> Reviewed-by: Paul Moore <paul.moore@hp.com> Acked-by: Patrick McHardy <kaber@trash.net> Signed-off-by: James Morris <jmorris@namei.org>
-
Committed by Eric Paris
With the (long ago) interface change that had the secid_to_secctx functions do the string allocation instead of the caller, we lost the ability to query the security server for the length of the upcoming string. The SECMARK code would like to allocate a netlink skb with enough room to hold the string, but it is too unclean to do the string allocation twice, or to do it once up front and hold onto the string and slen. This patch adds the ability to call security_secid_to_secctx() with a NULL data pointer, in which case it only sets the slen pointer. Signed-off-by: Eric Paris <eparis@redhat.com> Reviewed-by: Paul Moore <paul.moore@hp.com> Signed-off-by: James Morris <jmorris@namei.org>
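A caller that only needs the length can then do something like the following (a sketch; it assumes the usual three-argument prototype and the NULL-pointer behavior described above):

```c
#include <linux/security.h>

/* Sketch: learn how long the context string will be without fetching it,
 * assuming int security_secid_to_secctx(u32 secid, char **secdata, u32 *seclen)
 * and that a NULL secdata now means "only fill in *seclen". */
static int example_secctx_len(u32 secid, u32 *len)
{
	return security_secid_to_secctx(secid, NULL, len);
}
```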
-
Committed by Eric Paris
Right now secmark contains lots of direct SELinux calls. Use LSM calls throughout and remove all SELinux-specific knowledge; the only SELinux-specific piece left is the mode. The point is to make sure that other LSMs at least test this generic code before they assume it works. (They may also have to make changes if they do not represent labels as strings.) Signed-off-by: Eric Paris <eparis@redhat.com> Acked-by: Paul Moore <paul.moore@hp.com> Acked-by: Patrick McHardy <kaber@trash.net> Signed-off-by: James Morris <jmorris@namei.org>
-
Committed by KOSAKI Motohiro
Security modules should not change the sched_param parameter of security_task_setscheduler(). Doing so is not only meaningless, it can have harmful results if the caller passes a static variable. This patch removes the policy and sched_param parameters from security_task_setscheduler(), because no security module uses them. Cc: James Morris <jmorris@namei.org> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Signed-off-by: James Morris <jmorris@namei.org>
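The resulting interface change is roughly the following (a sketch of the before/after prototypes, inferred from the description above):

```c
/* Before: the hook received (and could in principle modify) the caller's
 * scheduling policy and parameters. */
int security_task_setscheduler(struct task_struct *p,
			       int policy, struct sched_param *lp);

/* After: only the task is passed, since no security module used the rest. */
int security_task_setscheduler(struct task_struct *p);
```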
-
Committed by Greg Farnum
Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Greg Farnum
These facilitate preallocation of pages so that we can encode into the pagelist in an atomic context. Signed-off-by: Greg Farnum <gregf@hq.newdream.net> Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Yehuda Sadeh
This factors the protocol and low-level storage parts of ceph out into a separate libceph module living in net/ceph and include/linux/ceph. This is mostly a matter of moving files around; however, a few key pieces of the interface change as well:
- ceph_client becomes ceph_fs_client and ceph_client, where the latter captures the mon and osd clients, and the fs_client gets the mds client and the file-system-specific pieces.
- Mount option parsing and debugfs setup are correspondingly broken into two pieces.
- The mon client gets a generic handler callback for otherwise unknown messages (the mds map, in this case).
- The basic supported/required feature bits can be expanded (and are, by ceph_fs_client).
No functional change, aside from some subtle error handling cases that got cleaned up in the refactoring process. Signed-off-by: Sage Weil <sage@newdream.net>
-
Committed by Shaohua Li
Add a new read-mostly percpu section and API. This can be used to avoid dirtying cache lines holding data that is generally not written, which is especially important for per-cpu data that may be accessed by processors other than the one the percpu area belongs to. [ hpa: moved it *after* the page-aligned section, for obvious reasons. ] Signed-off-by: Shaohua Li <shaohua.li@intel.com> LKML-Reference: <1287544022.4571.7.camel@sli10-conroe.sh.intel.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
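Usage would mirror the existing per-cpu macros; a minimal sketch, assuming the new section comes with DEFINE_PER_CPU_READ_MOSTLY()-style helpers:

```c
#include <linux/percpu.h>

/* Sketch: a per-cpu variable that is read from many CPUs but rarely written.
 * Placing it in the read-mostly per-cpu section keeps stray writes from
 * dirtying cache lines that other CPUs read frequently. */
DEFINE_PER_CPU_READ_MOSTLY(int, example_flag);

static int example_peek(int cpu)
{
	return per_cpu(example_flag, cpu);
}
```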
-
- 19 October 2010, 12 commits
-
-
Committed by Venkatesh Pallipadi
s390/powerpc/ia64 support CONFIG_VIRT_CPU_ACCOUNTING, which does fine-grained accounting of user, system, hardirq and softirq times. Adding that option on archs like x86 would be challenging, however, given the state of TSC reliability on various platforms and the overhead it would add to syscall entry/exit. Instead, add a lighter variant that only does finer accounting of hardirq and softirq times, providing precise irq times (instead of timer-tick-based samples). This accounting is added under a new config option, CONFIG_IRQ_TIME_ACCOUNTING, so that there is no overhead for users not interested in paying the performance penalty. The accounting is based on sched_clock and the code is generic, so other archs may find it useful as well. This patch just adds the core logic and does not enable it yet. Signed-off-by: Venkatesh Pallipadi <venki@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1286237003-12406-5-git-send-email-venki@google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Venkatesh Pallipadi
To account softirq time cleanly in the scheduler, we need to identify whether a softirq is invoked in ksoftirqd context or at the hardirq tail. Add PF_KSOFTIRQD for that purpose. As all PF flag bits are currently taken, create space by moving one of the infrequently used bits (PF_THREAD_BOUND) down in task_struct to sit along with some other state fields. Signed-off-by: Venkatesh Pallipadi <venki@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1286237003-12406-4-git-send-email-venki@google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Venkatesh Pallipadi
Just a minor cleanup patch that makes things easier for the following patches. No functional change. Signed-off-by: Venkatesh Pallipadi <venki@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1286237003-12406-3-git-send-email-venki@google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Venkatesh Pallipadi
Peter Zijlstra found a bug in the way softirq time is accounted with VIRT_CPU_ACCOUNTING in this thread: http://lkml.indiana.edu/hypermail//linux/kernel/1009.2/01366.html The problem is that softirq processing uses local_bh_disable internally, and there is no way, later in the flow, to differentiate between softirq actually being processed and bh merely being disabled. So a hardirq arriving while bh is disabled gets its time wrongly accounted as softirq. Looking at the code a bit more, the problem exists in !VIRT_CPU_ACCOUNTING as well, since account_system_time() in normal tick-based accounting also uses softirq_count(), which is set even when we are not in softirq but merely have bh disabled. Peter suggested using 2*SOFTIRQ_OFFSET as the irq count for local_bh_{disable,enable} and just SOFTIRQ_OFFSET while actually processing softirqs. The patch below does that and adds an in_serving_softirq() API that returns whether we are currently processing a softirq. It also changes one of the softirq_count() uses in net/sched/cls_cgroup.c to in_serving_softirq(). Many other uses of in_softirq() probably really want in_serving_softirq(); those changes can be made individually on a case-by-case basis. Signed-off-by: Venkatesh Pallipadi <venki@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1286237003-12406-2-git-send-email-venki@google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
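With local_bh_disable()/enable() counting in units of 2*SOFTIRQ_OFFSET and real softirq execution adding a single SOFTIRQ_OFFSET, the new test boils down to checking that single unit; roughly (a sketch inferred from the description above):

```c
/* Sketch: softirq_count() is non-zero both while bh is merely disabled and
 * while a softirq is actually running, but only the latter contributes a
 * single SOFTIRQ_OFFSET, so testing that bit distinguishes the two cases. */
#define in_serving_softirq()	(softirq_count() & SOFTIRQ_OFFSET)
```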
-
Committed by Peter Zijlstra
The use of the JUMP_LABEL() construct ends up creating endless silly wrappers; create a higher-level construct to reduce this clutter. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Jason Baron <jbaron@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Paul Mackerras <paulus@samba.org> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Trades a call + conditional + ret for an unconditional jmp. Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20101014203625.501657727@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Add an interface to allow usage of jump_labels with atomic counters. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <20101014203625.501657727@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Now that there are still only a few users around, rename things to make them more consistent. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20101014203625.448565169@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
hw_breakpoint creation needs to do per-task accounting to ensure there are always sufficient hardware resources to back these things, due to ptrace. With the perf per-pmu context changes, event initialization no longer has access to the event context, for the simple reason that we need to first find the pmu (the result of initialization) before we can find the context. This makes hw_breakpoints unhappy, because it can no longer do per-task accounting; cure this by frobbing a task pointer in the event::hw bits for now... Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Frederic Weisbecker <fweisbec@gmail.com> LKML-Reference: <20101014203625.391543667@chello.nl> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Peter Zijlstra
Provide a mechanism that allows running code in IRQ context. It is most useful for NMI code that needs to interact with the rest of the system -- such as waking up a task to drain buffers. Perf currently has such a mechanism, so extract that and provide it as a generic feature, independent of perf, so that others may benefit as well. The IRQ-context callback is generated through self-IPIs where possible; on architectures like powerpc, the decrementer (the built-in timer facility) is set to generate an interrupt immediately. Architectures that have nothing like this make do with a callback from the timer tick. Those architectures can call irq_work_run() at the tail of any IRQ handler that might enqueue such work (like the perf IRQ handler) to avoid undue latencies in processing the work. Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Acked-by: Kyle McMartin <kyle@mcmartin.ca> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [ various fixes ] Signed-off-by: Huang Ying <ying.huang@intel.com> LKML-Reference: <1287036094.7768.291.camel@yhuang-dev> Signed-off-by: Ingo Molnar <mingo@elte.hu>
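Typical use from NMI context would look roughly like this (a sketch; the header, the .func field name and the exact irq_work_queue() signature are assumptions about the interface described above):

```c
#include <linux/irq_work.h>

/* Runs later in hard-IRQ context, where waking tasks etc. is safe. */
static void example_drain(struct irq_work *work)
{
	/* ... wake up the reader task, kick a workqueue, ... */
}

static struct irq_work example_work = {
	.func	= example_drain,
};

/* From NMI context: just queue the callback and return immediately. */
static void example_nmi_tail(void)
{
	irq_work_queue(&example_work);
}
```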
-
Committed by Hitoshi Mitake
The current lockdep_map only caches one class (the one with subclass == 0) and looks up the class hash table when subclass != 0. This seemed fine because the subclass != 0 case is rare, but the locks of struct rq are acquired with subclass == 1 when task migration is executed, and task migration is a high-frequency event. So I modified lockdep to cache subclasses. I measured the score of perf bench sched messaging; this patch has a small but consistent effect (on the order of milliseconds to tens of milliseconds) when lots of tasks are running. The results are shown at the end of this description. NR_LOCKDEP_CACHING_CLASSES specifies how many classes can be cached in an instance of lockdep_map. I discussed this approach with Peter Zijlstra at LinuxCon Japan and he pointed out that caching all eight subclasses is clearly a waste of memory, so the number of cached classes is configurable.
=== Score comparison of benchmarks ===
# "min" means best score, and "max" means worst score
for i in `seq 1 10`; do ./perf bench -f simple sched messaging; done
before: min: 0.565000, max: 0.583000, avg: 0.572500
after:  min: 0.559000, max: 0.568000, avg: 0.563300
# with more processes
for i in `seq 1 10`; do ./perf bench -f simple sched messaging -g 40; done
before: min: 2.274000, max: 2.298000, avg: 2.286300
after:  min: 2.242000, max: 2.270000, avg: 2.259700
Signed-off-by: Hitoshi Mitake <mitake@dcl.info.waseda.ac.jp> Cc: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <1286269311-28336-2-git-send-email-mitake@dcl.info.waseda.ac.jp> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 October 2010, 11 commits
-
-
Committed by Nishanth Menon
SoCs have a standard set of tuples consisting of frequency and voltage pairs that the device will support per voltage domain; these are called Operating Performance Points (OPPs). The actual definitions of OPPs vary across silicon versions. For a specific domain, we can have a set of {frequency, voltage} pairs. As the kernel boots and more information becomes available, a default set of these is activated based on the precise nature of the device. Later, during operation, based on conditions prevailing in the system (such as temperature), the availability of some OPPs may be temporarily controlled by the SoC frameworks. Implementing OPPs requires some sort of power management support, hence this library depends on CONFIG_PM. Contributions include: Sanjeev Premi for the initial concept: http://patchwork.kernel.org/patch/50998/ Kevin Hilman for converting the original design to be device-based. Kevin Hilman and Paul Walmsley for cleaning up many of the function abstractions, improvements and data structure handling. Romit Dasgupta for using enums instead of opp pointers. Thara Gopinath, Eduardo Valentin and Vishwanath BS for fixes and cleanups. Linus Walleij for recommending this layer be made generic for usage in other architectures beyond OMAP and ARM. Mark Brown, Andrew Morton, Rafael J. Wysocki and Paul E. McKenney for valuable improvements. Discussions and comments from: http://marc.info/?l=linux-omap&m=126033945313269&w=2 http://marc.info/?l=linux-omap&m=125482970102327&w=2 http://marc.info/?t=125809247500002&r=1&w=2 http://marc.info/?l=linux-omap&m=126025973426007&w=2 http://marc.info/?t=128152609200064&r=1&w=2 http://marc.info/?t=128468723000002&r=1&w=2 have been incorporated. v1: http://marc.info/?t=128468723000002&r=1&w=2 Signed-off-by: Nishanth Menon <nm@ti.com> Signed-off-by: Kevin Hilman <khilman@deeprootsystems.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
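A rough idea of how a SoC framework might consume the library (a sketch only; the header path and the opp_add, opp_find_freq_ceil and opp_get_voltage names are assumptions about the API as introduced, and the frequencies/voltages are arbitrary example values):

```c
#include <linux/device.h>
#include <linux/err.h>
#include <linux/opp.h>

/* Sketch: register a couple of {frequency, voltage} pairs for a device. */
static void example_register_opps(struct device *dev)
{
	opp_add(dev, 600000000, 1050000);	/* 600 MHz at 1.05 V */
	opp_add(dev, 800000000, 1200000);	/* 800 MHz at 1.20 V */
}

/* Sketch: look up the voltage needed for at least the requested frequency. */
static unsigned long example_voltage_for(struct device *dev, unsigned long freq)
{
	struct opp *opp = opp_find_freq_ceil(dev, &freq);

	if (IS_ERR(opp))
		return 0;
	return opp_get_voltage(opp);
}
```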
-
Committed by James Hogan
If the device that fails to resume is part of a loadable kernel module, it won't be checked at startup against the magic number stored in the RTC. Add a read-only sysfs attribute, /sys/power/pm_trace_dev_match, which contains a newline-separated list of devices (usually just one) that currently match the last magic number. This allows the device that is failing to resume to be identified after the modules are loaded again. Signed-off-by: James Hogan <james@albanarts.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Rafael J. Wysocki
If there is a wakeup event during the freezing of tasks, suspend or hibernation will fail anyway. Since try_to_freeze_tasks() can take up to 20 seconds to complete or fail, aborting it as soon as a wakeup event is detected improves the worst-case wakeup latency. Based on a patch from Arve Hjønnevåg. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Pavel Machek <pavel@ucw.cz>
-
Committed by Ming Lei
The patch "PM / Runtime: Implement autosuspend support" introduces the "autosuspend" facility for runtime PM but misses the pm_request_autosuspend() helper function, so add it. Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Alan Stern
This patch (as1427) implements the "autosuspend" facility for runtime PM. A few new fields are added to the dev_pm_info structure and several new PM helper functions are defined, for telling the PM core whether or not a device uses autosuspend, for setting the autosuspend delay, and for marking periods of device activity. Drivers that do not want to use autosuspend can continue using the same helper functions as before; their behavior will not change. In addition, drivers supporting autosuspend can also call the old helper functions to get the old behavior. The details are all explained in Documentation/power/runtime_pm.txt and Documentation/ABI/testing/sysfs-devices-power. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
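In a driver, the new helpers would typically be used along these lines (a sketch; the 2000 ms delay is an arbitrary example value):

```c
#include <linux/pm_runtime.h>

/* Sketch: opt in to autosuspend at probe time. */
static void example_enable_autosuspend(struct device *dev)
{
	pm_runtime_set_autosuspend_delay(dev, 2000);	/* 2 s idle timeout */
	pm_runtime_use_autosuspend(dev);
	pm_runtime_enable(dev);
}

/* Sketch: mark activity and drop the usage count after each burst of I/O,
 * so the device suspends only after the configured idle delay. */
static void example_io_done(struct device *dev)
{
	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);
}
```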
-
Committed by Alan Stern
Some devices, such as USB interfaces, cannot be power-managed independently of their parents; i.e., they cannot be put in low power while the parent remains at full power. This patch (as1425) creates a new "no_callbacks" flag, which tells the PM core not to invoke the runtime-PM callback routines for such devices but instead to assume that the callbacks always succeed. In addition, the non-debugging runtime-PM sysfs attributes for these devices are removed, since they are pretty much meaningless. The advantage of this scheme comes not so much from avoiding the callbacks themselves, but rather from the fact that, without the need for a process context in which to run the callbacks, more work can be done in interrupt context. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
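For such a device, the flag would be set once when it is registered; a minimal sketch, assuming a pm_runtime_no_callbacks()-style helper as described above:

```c
#include <linux/pm_runtime.h>

/* Sketch: mark a child device (e.g. a USB interface) as having no runtime-PM
 * callbacks of its own; the PM core then treats its callbacks as always
 * succeeding and can handle it without a process context. */
static void example_register_child(struct device *child)
{
	pm_runtime_no_callbacks(child);
	pm_runtime_enable(child);
}
```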
-
Committed by Alan Stern
This patch (as1424) combines the various public entry points for the runtime PM routines into three simple functions: one for idle, one for suspend, and one for resume. A new bitflag specifies whether or not to increment or decrement the usage_count field. The new entry points are named __pm_runtime_idle, __pm_runtime_suspend, and __pm_runtime_resume, to reflect that they are trampolines. Simultaneously, the corresponding internal routines are renamed to rpm_idle, rpm_suspend, and rpm_resume. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Alan Stern
The "from_wq" argument in __pm_runtime_suspend() and __pm_runtime_resume() supposedly indicates whether or not the function was called by the PM workqueue thread, but in fact it isn't always used that way. It really indicates whether or not the function should return early if the requested operation is already in progress. Along with this badly named boolean argument, later patches in this series will add several other boolean arguments to these functions and others. Therefore this patch (as1422) begins the conversion process by replacing from_wq with a bitflag argument. The same bitflags are also used in __pm_runtime_get() and __pm_runtime_put(), where they indicate whether or not the operation should be asynchronous. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Alan Stern
This patch (as1420) adds the sysfs_merge_group() and sysfs_unmerge_group() functions, allowing drivers to easily add and remove sets of attributes in a pre-existing attribute group directory. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Acked-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
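The helpers take a kobject and an attribute group whose .name matches an already-existing group directory; a minimal sketch of a driver merging an optional set of attributes (the prototypes and the "power" directory name are assumptions):

```c
#include <linux/sysfs.h>

static struct attribute *example_extra_attrs[] = {
	/* &dev_attr_example.attr, ... */
	NULL,
};

/* .name must match an attribute group directory that already exists under
 * the kobject; the files are merged into (and later removed from) it. */
static const struct attribute_group example_extra_group = {
	.name	= "power",		/* assumed example directory */
	.attrs	= example_extra_attrs,
};

static int example_add_attrs(struct kobject *kobj)
{
	return sysfs_merge_group(kobj, &example_extra_group);
}

static void example_del_attrs(struct kobject *kobj)
{
	sysfs_unmerge_group(kobj, &example_extra_group);
}
```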
-
Committed by Rafael J. Wysocki
There is a potential issue in the asynchronous suspend code: a device driver suspending asynchronously may not notice that it should back off. There are two failure scenarios: (1) the driver is waiting for a driver suspending synchronously to complete, and that second driver returns an error code, in which case async_error won't be set and the waiting driver will continue suspending; and (2) after the driver has called device_pm_wait_for_dev(), the waited-for driver returns an error code, in which case the caller of device_pm_wait_for_dev() will not know that there was an error and will continue suspending. To fix this issue, make __device_suspend() set async_error, so async_suspend() doesn't need to set it any more, and make device_pm_wait_for_dev() return async_error, so that its callers can check whether or not they should continue suspending. No more changes are necessary, since device_pm_wait_for_dev() is not used by any drivers' suspend routines. Reported-by: Colin Cross <ccross@android.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by Rafael J. Wysocki
Introduce struct wakeup_source for representing system wakeup sources within the kernel and for collecting statistics related to them. Make the recently introduced helper functions pm_wakeup_event(), pm_stay_awake() and pm_relax() use struct wakeup_source objects internally, so that wakeup statistics associated with wakeup devices can be collected and reported in a consistent way (the definition of pm_relax() is changed, which is harmless, because this function is not called directly by anyone yet). Introduce new wakeup-related sysfs device attributes in /sys/devices/.../power for reporting the device wakeup statistics. Change the global wakeup event counters event_count and events_in_progress into atomic variables, so that it is not necessary to acquire a global spinlock in pm_wakeup_event(), pm_stay_awake() and pm_relax(); this should allow us to avoid lock contention in these functions on SMP systems with many wakeup devices. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
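From a driver's point of view, the helpers are used as before while the statistics are now gathered per wakeup source behind the scenes; a sketch of typical use around a wakeup interrupt (the header names are assumptions, and pm_relax() is assumed to take the device after this change):

```c
#include <linux/interrupt.h>
#include <linux/pm_wakeup.h>

/* Sketch: report activity on a wakeup device so the per-source statistics
 * are updated, and keep the system awake while the event is handled. */
static irqreturn_t example_wakeup_irq(int irq, void *dev_id)
{
	struct device *dev = dev_id;

	pm_stay_awake(dev);	/* block suspend while we handle the event */
	/* ... process the wakeup event ... */
	pm_relax(dev);		/* done, the system may sleep again */

	return IRQ_HANDLED;
}
```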
-
- 16 October 2010, 2 commits
-
-
Committed by Eric Paris
We currently have a kernel-internal type called aligned_u64 which aligns __u64's on 8-byte boundaries even on systems that would normally align them on 4-byte boundaries. This patch creates a new type, __aligned_u64, which does the same thing but is exposed to userspace rather than being kernel-internal. [akpm: merge early as both the net and audit trees want this] [akpm@linux-foundation.org: enhance the comment describing the reasons for using aligned_u64. Via Andreas and Andi.] Based-on-patch-by: Andreas Gruenbacher <agruen@suse.de> Signed-off-by: Eric Paris <eparis@redhat.com> Cc: Jan Engelhardt <jengelh@medozas.de> Cc: David Miller <davem@davemloft.net> Cc: Andi Kleen <andi@firstfloor.org> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
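The exported type amounts to an alignment attribute on __u64; a sketch of what the userspace-visible definition looks like (the __aligned_be64/__aligned_le64 companions are assumed to follow the same pattern):

```c
/* Sketch of the exported definitions (include/linux/types.h): force 8-byte
 * alignment even on 32-bit ABIs that would otherwise align 64-bit integers
 * to 4 bytes, so kernel and userspace agree on structure layouts. */
#define __aligned_u64  __u64  __attribute__((aligned(8)))
#define __aligned_be64 __be64 __attribute__((aligned(8)))
#define __aligned_le64 __le64 __attribute__((aligned(8)))
```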
-
Committed by Thomas Gleixner
This file has been unused since the apic unification in 2.6.29, but nobody noticed. Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 15 October 2010, 3 commits
-
-
Committed by Anand Gadiyar
Commit e9677b3c (oprofile, ARM: Use oprofile_arch_exit() to cleanup on failure) caused oprofile_perf_exit to be called in the cleanup path of oprofile_perf_init. The __exit tag for oprofile_perf_exit should therefore be dropped. The same has to be done for exit_driverfs as well, as this function is called from oprofile_perf_exit. Otherwise, we get the following two linker errors:
LD .tmp_vmlinux1
`oprofile_perf_exit' referenced in section `.init.text' of arch/arm/oprofile/built-in.o: defined in discarded section `.exit.text' of arch/arm/oprofile/built-in.o
make: *** [.tmp_vmlinux1] Error 1
LD .tmp_vmlinux1
`exit_driverfs' referenced in section `.text' of arch/arm/oprofile/built-in.o: defined in discarded section `.exit.text' of arch/arm/oprofile/built-in.o
make: *** [.tmp_vmlinux1] Error 1
Signed-off-by: Anand Gadiyar <gadiyar@ti.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Robert Richter <robert.richter@amd.com>
-
Committed by Linus Torvalds
Tony Luck reports that the addition of the access_ok() check in commit 0eead9ab ("Don't dump task struct in a.out core-dumps") broke the ia64 compile due to missing the necessary header file includes. Rather than add yet another include (<asm/unistd.h>) to make everything happy, just uninline the silly core dump helper functions and move the bodies to fs/exec.c, where they make a lot more sense. dump_seek() in particular was too big to be an inline function anyway, and none of them are in any way performance-critical. And we really don't need to mess up our include file headers more than they already are. Reported-and-tested-by: Tony Luck <tony.luck@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Linus Torvalds
akiphie points out that a.out core-dumps have that odd task struct dumping that was never used and was never really a good idea (it goes back into the mists of history, probably to the original core-dumping code). Just remove it. Also do the access_ok() check on dump_write(). It probably doesn't matter (since normal filesystems all seem to do it anyway), but he points out that it's normally done by the VFS layer, so ... [ I suspect that we should possibly do "vfs_write()" instead of calling ->write directly. That also does the whole fsnotify and write statistics thing, which may or may not be a good idea. ] And just to be anal, do this all for the x86-64 32-bit a.out emulation code too, even though it's not enabled (and won't currently even compile). Reported-by: akiphie <akiphie@lavabit.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-