- 02 Dec 2010, 4 commits
-
-
Committed by Kanigeri, Hari
In the current mailbox driver, the mailbox's internal callback pointer can be manipulated directly by users, so a second user can easily corrupt the first user's callback pointer. The initial effort to correct this issue can be found here: https://patchwork.kernel.org/patch/107520/ Along with fixing the issue stated above, this patch adds the flexibility to register notifications from multiple readers for the events received on a mailbox instance. The related discussion can be found here: http://www.mail-archive.com/linux-omap@vger.kernel.org/msg30671.html Signed-off-by: Hari Kanigeri <h-kanigeri2@ti.com> Signed-off-by: Fernando Guzman Lugo <x0095840@ti.com> Acked-by: Hiroshi Doyu <hiroshi.doyu@nokia.com>
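Below is a minimal sketch of what per-mailbox, multiple-reader notification can look like, using the kernel's blocking notifier chains; the struct and function names are illustrative assumptions, not the driver's actual API.

    #include <linux/notifier.h>

    /* Illustrative only: one notifier head per logical mailbox, so several
     * readers can subscribe without sharing (and corrupting) a single
     * callback pointer. */
    struct demo_mbox {
            const char *name;
            struct blocking_notifier_head notifier;
    };

    static void demo_mbox_init(struct demo_mbox *mbox, const char *name)
    {
            mbox->name = name;
            BLOCKING_INIT_NOTIFIER_HEAD(&mbox->notifier);
    }

    static int demo_mbox_register_notifier(struct demo_mbox *mbox,
                                           struct notifier_block *nb)
    {
            return blocking_notifier_chain_register(&mbox->notifier, nb);
    }

    static void demo_mbox_rx(struct demo_mbox *mbox, unsigned long msg)
    {
            /* Every registered reader is called; no single pointer to clobber. */
            blocking_notifier_call_chain(&mbox->notifier, msg, NULL);
    }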
-
Committed by Kanigeri, Hari
Schedule the tasklet to send only when the mailbox FIFO is full and there are pending messages in the kfifo; otherwise send the message directly in process context. This avoids needlessly scheduling the tasklet for every message transfer. Signed-off-by: Hari Kanigeri <h-kanigeri2@ti.com> Acked-by: Hiroshi Doyu <hiroshi.doyu@nokia.com>
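A minimal sketch of the send-path decision described above, assuming a per-mailbox kfifo and tasklet; the names (demo_send(), demo_hw_*) are placeholders, not the driver's real symbols.

    #include <linux/errno.h>
    #include <linux/interrupt.h>
    #include <linux/kfifo.h>
    #include <linux/types.h>

    struct demo_tx {
            struct kfifo fifo;             /* software queue of pending messages */
            struct tasklet_struct tasklet; /* drains the kfifo when h/w frees up */
    };

    static bool demo_hw_fifo_full(void);  /* stub: h/w mailbox FIFO has no free slot */
    static void demo_hw_write(u32 msg);   /* stub: write one message to the h/w FIFO */

    static int demo_send(struct demo_tx *tx, u32 msg)
    {
            if (demo_hw_fifo_full() || !kfifo_is_empty(&tx->fifo)) {
                    /* Slow path: queue the message and let the tasklet drain it. */
                    if (kfifo_in(&tx->fifo, &msg, sizeof(msg)) != sizeof(msg))
                            return -ENOMEM;
                    tasklet_schedule(&tx->tasklet);
                    return 0;
            }
            /* Fast path: send directly in process context, no tasklet involved. */
            demo_hw_write(msg);
            return 0;
    }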
-
Committed by Kanigeri, Hari
Fix the following checkpatch warnings observed in the mailbox module.
    WARNING: please, no space for starting a line, excluding comments
    + fail_alloc_rxq:$
    WARNING: please, no space for starting a line, excluding comments
    + fail_alloc_txq:$
    WARNING: please, no space for starting a line, excluding comments
    + fail_request_irq:$
    WARNING: line over 80 characters
    + mbox_kfifo_size = max_t(unsigned int, mbox_kfifo_size, sizeof(mbox_msg_t));
Signed-off-by: Hari Kanigeri <h-kanigeri2@ti.com> Acked-by: Hiroshi Doyu <hiroshi.doyu@nokia.com>
-
Committed by Fernando Guzman Lugo
The rq_full flag is a global variable, so if there are multiple mailbox users there will be conflicts. Now there is a full flag per mailbox queue. Reported-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Fernando Guzman Lugo <x0095840@ti.com> Signed-off-by: Hari Kanigeri <h-kanigeri2@ti.com> Acked-by: Hiroshi Doyu <hiroshi.doyu@nokia.com>
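As an illustration of the change described here, the flag simply moves into the per-queue structure (field names are assumed for the sketch):

    #include <linux/blkdev.h>
    #include <linux/workqueue.h>

    /* Before: a single global "rq_full" shared by every mailbox user.
     * After: each mailbox queue carries its own flag, so one user filling
     * its queue no longer affects the others. */
    struct demo_mbox_queue {
            struct request_queue *queue;
            struct work_struct work;
            bool full;                     /* was the global rq_full flag */
    };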
-
- 04 Aug 2010, 12 commits
-
-
Committed by Felipe Contreras
Remove kernel.h and module.h since they are not used correctly anyway. Also remove device.h since it comes along with platform_device.h (and always will, I guess). Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Felipe Contreras
No need to dynamically register mailboxes one by one. Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Hiroshi DOYU
With this patch, you'll get the sysfs directories shown below. This structure implies that a single platform device, "omap2-mailbox", holds multiple logical mbox instances. This could be the base for adding sysfs files for each logical mbox; a userland application could then access a mbox through sysfs entries if necessary (e.g. setting the kfifo size dynamically).
    ~# tree -d -L 2 /sys/devices/platform/omap2-mailbox/
    /sys/devices/platform/omap2-mailbox/
    |-- driver -> ../../../bus/platform/drivers/omap2-mailbox
    |-- mbox
    |   |-- dsp      <- these are instances of a logical mailbox.
    |   |-- ducati
    |   |-- iva2
    |   |-- mbox01
    |   |-- mbox02
    |   |-- mbox03
    |   |-- .....
    |   `-- tesla
    |-- power
    `-- subsystem -> ../../../bus/platform
This was wrongly dropped by commit c7c158e5. Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Ohad Ben-Cohen
The underlying buffering implementation of the mailbox driver is converted from the block API to kfifo, due to the simplicity and speed of kfifo. The default size of the kfifo buffer is set to 256 bytes. This value is configurable at compile time (via CONFIG_OMAP_MBOX_KFIFO_SIZE) and can be changed at runtime (via the mbox_kfifo_size module parameter). Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Hari Kanigeri <h-kanigeri2@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
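A hedged sketch of kfifo-backed buffering with a runtime-configurable size, following the description above; the structure and helper names are simplified placeholders.

    #include <linux/gfp.h>
    #include <linux/kfifo.h>
    #include <linux/module.h>

    /* 256-byte default per the description; CONFIG_OMAP_MBOX_KFIFO_SIZE would
     * normally supply this value at compile time. */
    static unsigned int mbox_kfifo_size = 256;
    module_param(mbox_kfifo_size, uint, S_IRUGO);
    MODULE_PARM_DESC(mbox_kfifo_size, "Size of the per-mailbox kfifo (bytes)");

    struct demo_mbox_queue {
            struct kfifo fifo;
    };

    static int demo_mbox_queue_init(struct demo_mbox_queue *mq)
    {
            /* kfifo_alloc() rounds the requested size up to a power of two. */
            return kfifo_alloc(&mq->fifo, mbox_kfifo_size, GFP_KERNEL);
    }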
-
Committed by Ohad Ben-Cohen
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Ohad Ben-Cohen
Use multiple MODULE_AUTHOR lines for multiple authors. Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
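For reference, multiple authors are declared with one MODULE_AUTHOR() line each (the names below are placeholders):

    #include <linux/module.h>

    MODULE_DESCRIPTION("Example module metadata with more than one author");
    MODULE_AUTHOR("First Author <first@example.com>");
    MODULE_AUTHOR("Second Author <second@example.com>");
    MODULE_LICENSE("GPL");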
-
Committed by Ohad Ben-Cohen
rwlocks are slower and have potential starvation issues, so spinlocks are generally preferred. See also: http://lwn.net/Articles/364583/ Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Kanigeri Hari <h-kanigeri2@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
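A sketch of the kind of conversion this implies, with an illustrative lock name; the rwlock form it replaces is shown in the comment.

    #include <linux/spinlock.h>

    /* Before (illustrative):
     *      static DEFINE_RWLOCK(mboxes_lock);
     *      read_lock(&mboxes_lock);  ... read_unlock(&mboxes_lock);
     *      write_lock(&mboxes_lock); ... write_unlock(&mboxes_lock);
     * After: a plain spinlock, cheaper for the short critical sections involved
     * and without the writer-starvation issue of rwlocks. */
    static DEFINE_SPINLOCK(mboxes_lock);

    static void demo_update_mbox_list(void)
    {
            spin_lock(&mboxes_lock);
            /* ... walk or modify the list of registered mailboxes ... */
            spin_unlock(&mboxes_lock);
    }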
-
Committed by Fernando Guzman Lugo
When blk_get_request fails to get a request, the handler returns without reading the message from the mailbox FIFO. The interrupt then triggers again and again as soon as the ISR exits, the workqueue that pops elements from the request queue never gets to run, and the kernel gets stuck and shows a softlockup message. Now the mailbox interrupt is disabled when the request queue is full and re-enabled when an element is popped from the request queue. Signed-off-by: Fernando Guzman Lugo <x0095840@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
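A hedged sketch of the fix: disable the RX interrupt from the ISR when the request queue is full, and re-enable it once the work item pops an element. The structure and names are assumptions, not the driver's exact code.

    #include <linux/interrupt.h>

    struct demo_rx_queue {
            int irq;
            bool full;
    };

    static irqreturn_t demo_mbox_isr(int irq, void *data)
    {
            struct demo_rx_queue *rxq = data;

            if (rxq->full) {        /* e.g. request allocation just failed */
                    /* Keep the line from retriggering until there is room again;
                     * _nosync because we are inside the handler itself. */
                    disable_irq_nosync(rxq->irq);
                    return IRQ_HANDLED;
            }
            /* ... read the message out of the h/w FIFO and queue it ... */
            return IRQ_HANDLED;
    }

    static void demo_mbox_pop(struct demo_rx_queue *rxq)
    {
            /* ... consume one element from the request queue ... */
            if (rxq->full) {
                    rxq->full = false;
                    enable_irq(rxq->irq);
            }
    }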
-
Committed by Hiroshi DOYU
Mailbox startup and shutdown are executed against a single H/W module, and a mailbox H/W module is totally __independent__ of the registration of logical mailboxes. So an independent mutex should be used for startup and shutdown. Signed-off-by: Fernando Guzman Lugo <x0095840@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Fernando Guzman Lugo
This patch checks that the mailbox user has assigned a valid callback function before calling it. Signed-off-by: Fernando Guzman Lugo <x0095840@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
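The guard itself is tiny; a minimal sketch (callback field name assumed):

    struct demo_mbox_receiver {
            int (*msg_receive_cb)(void *data);  /* may legitimately be NULL */
    };

    static void demo_deliver(struct demo_mbox_receiver *rx, void *data)
    {
            /* Only call into the user if a callback was actually assigned. */
            if (rx->msg_receive_cb)
                    rx->msg_receive_cb(data);
    }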
-
Committed by Fernando Guzman Lugo
Flush pending deferred work before freeing the blk_queue, to prevent any attempt to access the blk_queue after it has been freed. Signed-off-by: Fernando Guzman Lugo <x0095840@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
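A sketch of the teardown ordering being described, using the generic kernel APIs; the structure is an assumption for illustration.

    #include <linux/blkdev.h>
    #include <linux/slab.h>
    #include <linux/workqueue.h>

    struct demo_mbox_queue {
            struct request_queue *queue;
            struct work_struct work;
    };

    static void demo_mbox_queue_free(struct demo_mbox_queue *mq)
    {
            /* Make sure no deferred work can still dereference mq->queue ... */
            flush_work(&mq->work);
            /* ... before the request queue itself is torn down. */
            blk_cleanup_queue(mq->queue);
            kfree(mq);
    }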
-
Committed by Fernando Guzman Lugo
Free the interrupt before freeing the blk_queue, to avoid any attempt to access the blk_queue after it has been freed. Signed-off-by: Fernando Guzman Lugo <x0095840@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
- 30 Mar 2010, 1 commit
-
-
Committed by Tejun Heo
include cleanup: Update gfp.h and slab.h includes to prepare for breaking the implicit slab.h inclusion from percpu.h.
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of the conversion: http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
    * Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.
    * When the script inserts a new include, it looks at the include blocks and tries to place the new include so that its order conforms to its surroundings. It is put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.
    * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.
The conversion was done in the following steps:
    1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.
    2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.
    3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.
    4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.
    5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.
    6. percpu.h was updated not to include slab.h.
    7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on the arch to make things build (like ipr on powerpc/64, which failed due to missing writeq).
        * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
        * powerpc and powerpc64 SMP allmodconfig
        * sparc and sparc64 SMP allmodconfig
        * ia64 SMP allmodconfig
        * s390 SMP allmodconfig
        * alpha SMP allmodconfig
        * um on x86_64 SMP allmodconfig
    8. percpu.h modifications were reverted so that they could be applied as a separate patch and serve as a bisection point.
Given the fact that I had only a couple of failures from the tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
-
- 24 Feb 2010, 1 commit
-
-
Committed by Rob Clark
keventd_wq is a shared work-queue and should not be used when we need a fast, deterministic response. Instead, the mailbox driver should use its own private work-queue, with its own thread, to ensure that handling of RX interrupts is not delayed by other drivers. The tasklet is still used for transmission of mbox messages. Signed-off-by: Rob Clark <rob@ti.com> Signed-off-by: C A Subramaniam <subramaniam.ca@ti.com> Signed-off-by: Suman Anna <s-anna@ti.com> Acked-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
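A hedged sketch of switching RX handling from the shared keventd workqueue to a driver-private, single-threaded workqueue; the names are illustrative.

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *mbox_wq;
    static struct work_struct rx_work;

    static void demo_rx_work_func(struct work_struct *work)
    {
            /* ... drain received mailbox messages ... */
    }

    static int __init demo_mbox_init(void)
    {
            /* Private thread, so RX handling is not queued behind other
             * drivers' work on the shared keventd workqueue. */
            mbox_wq = create_singlethread_workqueue("demo-mailbox");
            if (!mbox_wq)
                    return -ENOMEM;
            INIT_WORK(&rx_work, demo_rx_work_func);
            return 0;
    }

    static void demo_rx_bottom_half(void)
    {
            queue_work(mbox_wq, &rx_work);  /* instead of schedule_work() */
    }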
-
- 23 Nov 2009, 8 commits
-
-
Committed by C A Subramaniam
This patch uses a tasklet implementation for sending mailbox messages. Signed-off-by: C A Subramaniam <subramaniam.ca@ti.com> Signed-off-by: Ramesh Gupta G <grgupta@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by C A Subramaniam
Currently, this allows both the tesla and ducati sides to request the same irq through an omap_mbox_get() call. Signed-off-by: C A Subramaniam <subramaniam.ca@ti.com> Signed-off-by: Ramesh Gupta G <grgupta@ti.com> Acked-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by C A Subramaniam
This patch adds code changes in the mailbox driver module to add support for the OMAP4 mailbox. Signed-off-by: Hari Kanigeri <h-kanigeri2@ti.com> Signed-off-by: C A Subramaniam <subramaniam.ca@ti.com> Signed-off-by: Ramesh Gupta G <grgupta@ti.com> Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by Hiroshi DOYU
Expose omap_mbox_enable()/disable_irq(). Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by C A Subramaniam
Also removed from tx_data. Signed-off-by: C A Subramaniam <subramaniam.ca@ti.com> Acked-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by Hiroshi DOYU
No need to handle it in the ISR, since the IRQ won't occur during the ISR. Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by Hiroshi DOYU
It's not used at present. Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-
Committed by Hiroshi DOYU
Any protocol should be handled in the upper layer; the mailbox driver shouldn't care about the contents of messages. Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-
- 21 Oct 2009, 1 commit
-
-
Committed by Tony Lindgren
Move the remaining headers under plat-omap/include/mach to plat-omap/include/plat. Also search and replace in the files using these headers so that they include them via the right path. This was done with:
    #!/bin/bash
    mach_dir_old="arch/arm/plat-omap/include/mach"
    plat_dir_new="arch/arm/plat-omap/include/plat"
    headers=$(cd $mach_dir_old && ls *.h)
    omap_dirs="arch/arm/*omap*/ \
        drivers/video/omap \
        sound/soc/omap"
    other_files="drivers/leds/leds-ams-delta.c \
        drivers/mfd/menelaus.c \
        drivers/mfd/twl4030-core.c \
        drivers/mtd/nand/ams-delta.c"

    for header in $headers; do
        old="#include <mach\/$header"
        new="#include <plat\/$header"

        for dir in $omap_dirs; do
            find $dir -type f -name \*.[chS] | \
                xargs sed -i "s/$old/$new/"
        done

        find drivers/ -type f -name \*omap*.[chS] | \
            xargs sed -i "s/$old/$new/"

        for file in $other_files; do
            sed -i "s/$old/$new/" $file
        done
    done

    for header in $(ls $mach_dir_old/*.h); do
        git mv $header $plat_dir_new/
    done
Signed-off-by: Tony Lindgren <tony@atomide.com>
-
- 11 May 2009, 2 commits
-
-
Committed by Tejun Heo
Till now the block layer allowed two separate modes of request execution. A request is always acquired from the request queue via elv_next_request(). After that, drivers are free to either dequeue it or process it without dequeueing. Dequeueing allows elv_next_request() to return the next request so that multiple requests can be in flight.
Executing requests without dequeueing has its merits mostly in allowing drivers for simpler devices which can't do sg to deal with segments only, without considering request boundaries. However, the benefit this brings is dubious and declining, while the cost of the API ambiguity is increasing. Segment-based drivers are usually for very old or limited devices, and as converting to the dequeueing model isn't difficult, it doesn't justify the API overhead it puts on the block layer and its more modern users.
Previous patches converted all block low-level drivers to the dequeueing model. This patch completes the API transition by:
    * renaming elv_next_request() to blk_peek_request()
    * renaming blkdev_dequeue_request() to blk_start_request()
    * adding blk_fetch_request(), which is a combination of peek and start
    * disallowing completion of queued (not started) requests
    * applying the new API to all LLDs
Renamings are for consistency and to break out-of-tree code, so that it's apparent that out-of-tree drivers need updating.
[ Impact: block request issue API cleanup, no functional change ]
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Mike Miller <mike.miller@hp.com> Cc: unsik Kim <donari75@gmail.com> Cc: Paul Clements <paul.clements@steeleye.com> Cc: Tim Waugh <tim@cyberelk.net> Cc: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com> Cc: David S. Miller <davem@davemloft.net> Cc: Laurent Vivier <Laurent@lvivier.info> Cc: Jeff Garzik <jgarzik@pobox.com> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Grant Likely <grant.likely@secretlab.ca> Cc: Adrian McMenamin <adrian@mcmen.demon.co.uk> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com> Cc: Borislav Petkov <petkovbb@googlemail.com> Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com> Cc: Alex Dubov <oakad@yahoo.com> Cc: Pierre Ossman <drzeus@drzeus.cx> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Markus Lidel <Markus.Lidel@shadowconnect.com> Cc: Stefan Weinhuber <wein@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Pete Zaitcev <zaitcev@redhat.com> Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
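A sketch of the renamed API in use, in the shape of a simple request function; the completion call is just for illustration.

    #include <linux/blkdev.h>

    /* blk_fetch_request() == blk_peek_request() + blk_start_request(), so every
     * request the driver looks at is also dequeued (started). */
    static void demo_request_fn(struct request_queue *q)
    {
            struct request *rq;

            while ((rq = blk_fetch_request(q)) != NULL) {
                    /* ... hand rq to the hardware ... */
                    __blk_end_request_all(rq, 0);  /* complete the whole request */
            }
    }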
-
Committed by Tejun Heo
plat-omap/mailbox, floppy, viocd, mspro_block, i2o_block and mmc/card/queue are already pretty close to the dequeueing model and can be converted with simple changes. Convert them. While at it:
    * xen-blkfront: the !fs check is moved downwards to share the dequeue call with the normal path.
    * mspro_block: __blk_end_request(..., blk_rq_cur_byte()) converted to __blk_end_request_cur().
    * mmc/card/queue: a loop of __blk_end_request() converted to __blk_end_request_all().
[ Impact: dequeue in-flight request ]
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Alex Dubov <oakad@yahoo.com> Cc: Markus Lidel <Markus.Lidel@shadowconnect.com> Cc: Pierre Ossman <drzeus@drzeus.cx> Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 28 Apr 2009, 2 commits
-
-
Committed by Tejun Heo
The omap mailbox driver uses rq->data as a second opaque pointer to carry mbox_msg_t, and rq->special as the message argument which is needed only for tx. Add and use an omap_msg_tx_data struct for tx, and use rq->special for mbox_msg_t for rx, such that only rq->special is used as an opaque pointer.
[ Impact: cleanup rq->data usage, extra kmalloc in msg_send ]
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk>
-
Committed by Tejun Heo
There are many [__]blk_end_request() call sites which call it with the full request length and expect full completion. Many of them ensure that the request actually completes by doing BUG_ON() on the return value, which is awkward and error-prone. This patch adds [__]blk_end_request_all(), which takes @rq and @error and fully completes the request. A BUG_ON() is added to ensure that this actually happens.
Most conversions are simple, but there are a few noteworthy ones:
    * cdrom/viocd: viocd_end_request() replaced with direct calls to __blk_end_request_all().
    * s390/block/dasd: dasd_end_request() replaced with direct calls to __blk_end_request_all().
    * s390/char/tape_block: tapeblock_end_request() replaced with direct calls to blk_end_request_all().
[ Impact: cleanup ]
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Mike Miller <mike.miller@hp.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Jeff Garzik <jgarzik@pobox.com> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Cc: Alex Dubov <oakad@yahoo.com> Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
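A small before/after illustration of the helper introduced here:

    #include <linux/blkdev.h>

    static void demo_complete(struct request *rq, int error)
    {
            /* Old pattern: complete the full length and BUG_ON() partial completion:
             *      BUG_ON(__blk_end_request(rq, error, blk_rq_bytes(rq)));
             * New helper: completes the whole request, with the check done inside. */
            __blk_end_request_all(rq, error);
    }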
-
- 24 Mar 2009, 4 commits
-
-
Committed by Hiroshi DOYU
Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Hiroshi DOYU
Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Hiroshi DOYU
No need to keep mailbox.h separately. Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-
Committed by Hiroshi DOYU
Since "mbox->dev" doesn't exist and isn't created at registration either, this patch creates a "struct device" belonging to the "omap-mailbox" class and sets this pointer in the corresponding member of "struct omap_mbox". Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
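A hedged sketch of creating such a per-mailbox struct device under a driver class; the class pointer and names are placeholders, not the driver's exact code.

    #include <linux/device.h>
    #include <linux/err.h>

    /* Assumed to be set up elsewhere, e.g. class_create(THIS_MODULE, "omap-mailbox"). */
    static struct class *mbox_class;

    static struct device *demo_mbox_register_dev(void *mbox_priv, const char *name)
    {
            struct device *dev;

            dev = device_create(mbox_class, NULL, 0, mbox_priv, "%s", name);
            if (IS_ERR(dev))
                    return dev;
            /* The returned pointer would then be stored as mbox->dev. */
            return dev;
    }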
-
- 06 Sep 2008, 1 commit
-
-
Committed by Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 Aug 2008, 1 commit
-
-
Committed by Russell King
This just leaves include/asm-arm/plat-* to deal with. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 Jul 2008, 2 commits
-
-
Committed by Kay Sievers
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Committed by Kay Sievers
Kobjects have not had a limit on name size for a while now, so stop pretending that they do. Signed-off-by: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 10 May 2008, 1 commit
-
-
Committed by Hiroshi DOYU
Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com> Signed-off-by: Tony Lindgren <tony@atomide.com>
-