1. 28 Jan 2014, 1 commit
  2. 22 Jan 2014, 2 commits
  3. 13 Jan 2014, 2 commits
  4. 17 Dec 2013, 1 commit
  5. 12 Oct 2013, 2 commits
    • spi: Provide common spi_message processing loop · b158935f
      Mark Brown authored
      The loops which SPI controller drivers use to process the list of transfers
      in a spi_message are typically very similar and have some error-prone areas
      such as the handling of /CS. Help simplify drivers by factoring this code
      out into the core - if drivers provide a transfer_one() function instead
      of a transfer_one_message() function the core will handle processing at the
      message level.
      
      /CS can be controlled by either setting cs_gpio or providing a set_cs
      function. If this is not possible for hardware reasons then both can be
      omitted and the driver should continue to implement manual /CS handling.
      
      This is a first step in refactoring and it is expected that there will be
      further enhancements, for example factoring out of the mapping of transfers
      for DMA and the initiation and completion of interrupt driven transfers.
      Signed-off-by: Mark Brown <broonie@linaro.org>
      b158935f
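
      A sketch of how a controller driver might hook into this new core loop
      (editorial illustration, assuming a hypothetical foo driver; set_cs() and
      transfer_one() are the spi_master callbacks described above, everything
      prefixed foo_ is made up):

      	#include <linux/spi/spi.h>

      	static void foo_spi_set_cs(struct spi_device *spi, bool enable)
      	{
      		/* drive the chip-select line for this device */
      		foo_spi_write_cs(spi_master_get_devdata(spi->master), enable);
      	}

      	static int foo_spi_transfer_one(struct spi_master *master,
      					struct spi_device *spi,
      					struct spi_transfer *xfer)
      	{
      		/* start a single transfer; the core walks the transfer list
      		 * of the message and toggles /CS between transfers as needed */
      		return foo_spi_do_transfer(spi_master_get_devdata(master), xfer);
      	}

      	/* in foo_spi_probe() */
      	master->set_cs = foo_spi_set_cs;
      	master->transfer_one = foo_spi_transfer_one;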
    • spi: Provide per-message prepare and unprepare operations · 2841a5fc
      Mark Brown authored
      Many SPI drivers perform setup and tear down on every message, usually
      doing things like DMA mapping the message. Provide hooks for them to use
      to provide such operations.
      
      This is of limited value for drivers that implement transfer_one_message()
      but will be of much greater utility with future factoring out of standard
      implementations of that function.
      Signed-off-by: Mark Brown <broonie@linaro.org>
      2841a5fc
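
      A sketch of a driver using the new hooks to DMA-map a message once per
      message rather than once per transfer (the foo_* names are hypothetical;
      prepare_message() and unprepare_message() are the spi_master callbacks
      added here):

      	static int foo_spi_prepare_message(struct spi_master *master,
      					   struct spi_message *msg)
      	{
      		/* e.g. DMA-map all transfers in the message up front */
      		return foo_spi_dma_map_message(spi_master_get_devdata(master), msg);
      	}

      	static int foo_spi_unprepare_message(struct spi_master *master,
      					     struct spi_message *msg)
      	{
      		foo_spi_dma_unmap_message(spi_master_get_devdata(master), msg);
      		return 0;
      	}

      	/* in foo_spi_probe() */
      	master->prepare_message = foo_spi_prepare_message;
      	master->unprepare_message = foo_spi_unprepare_message;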
  6. 03 Oct 2013, 1 commit
    • spi: Add a spi_w8r16be() helper · 05071aa8
      Lars-Peter Clausen authored
      This patch adds a new spi_w8r16be() helper, which is similar to spi_w8r16()
      except that it converts the read data word from big endian to native endianness
      before returning it. The reason for introducing this new helper is that for SPI
      slave devices it is quite common that the 16-bit data word that is read back is
      in big endian. So users of spi_w8r16() have to convert the result to native
      endianness manually. A second reason is that in this case the endianness of the
      return value of spi_w8r16() depends on its sign: if it is negative (i.e. an
      error code) it is already in native endianness; if it is positive it is in big
      endian. The sparse code checker doesn't like this kind of mixed endianness and
      special annotations are necessary to keep it quiet (e.g. casting to be16 using
      __force). Doing the conversion to native endianness in the helper function does
      not require such annotations since we are not mixing different endiannesses in
      the same variable.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      05071aa8
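
      Typical use in a slave driver then looks like this (sketch; FOO_REG_TEMP
      is a hypothetical register address):

      	u16 value;
      	ssize_t ret;

      	ret = spi_w8r16be(spi, FOO_REG_TEMP);
      	if (ret < 0)
      		return ret;	/* negative errno, nothing to byte-swap */
      	value = ret;		/* already converted from big endian */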
  7. 17 Sep 2013, 1 commit
  8. 22 Aug 2013, 1 commit
    • spi: DUAL and QUAD support · f477b7fb
      wangyuhang authored
      Fix some mistakes in the previous patch, as listed below:
      1. In the DT slave node, use "spi-tx-nbits = <1/2/4>" instead of using
         "spi-tx-dual" / "spi-tx-quad" directly; the same applies to rx. Correct
         the way the property is read in @of_register_spi_devices() accordingly.
      2. Change the values of the transfer-width macros (SPI_NBITS_SINGLE,
         SPI_NBITS_DUAL, SPI_NBITS_QUAD) to 0x01, 0x02 and 0x04 to match the
         actual number of wires.
      3. Add the following checks:
         (1) ensure that tx_nbits and rx_nbits in spi_transfer do not go beyond
             single, dual and quad.
         (2) ensure that tx_nbits and rx_nbits are allowed by @spi_device->mode;
             for example, if @spi_device->mode = DUAL, then tx/rx_nbits cannot be
             set to QUAD (SPI_NBITS_QUAD).
         (3) if "@spi_device->mode & SPI_3WIRE", then tx/rx_nbits must be single
             (SPI_NBITS_SINGLE).
      Signed-off-by: wangyuhang <wangyuhang2014@gmail.com>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      f477b7fb
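
      For example, a driver that has negotiated quad-width transmit support
      could set up a transfer like this (sketch; spi, buf, len and ret come
      from the surrounding driver code):

      	struct spi_message msg;
      	struct spi_transfer xfer = {
      		.tx_buf   = buf,
      		.len      = len,
      		.tx_nbits = SPI_NBITS_QUAD,	/* rejected unless spi->mode has SPI_TX_QUAD */
      	};

      	spi_message_init(&msg);
      	spi_message_add_tail(&xfer, &msg);
      	ret = spi_sync(spi, &msg);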
  9. 02 Aug 2013, 1 commit
  10. 30 Jul 2013, 1 commit
  11. 18 Jul 2013, 1 commit
  12. 15 Jul 2013, 1 commit
  13. 02 Jun 2013, 2 commits
    • spi: fix incorrect handling of min param in SPI_BPW_RANGE_MASK · eca8960a
      Stephen Warren authored
      SPI_BPW_RANGE_MASK is intended to work by calculating two masks; one
      representing support for all bits up-to-and-including the "max" supported
      value, and one representing support for all bits up-to-but-not-including
      the "min" supported value, and then taking the difference between the
      two, resulting in a mask representing support for all bits between
      (inclusive) the min and max values.
      
      However, the second mask ended up representing all bits up-to-and-
      including rather than up-to-but-not-including. Fix this bug.
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      eca8960a
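
      A worked example of the intended arithmetic (editorial illustration) for a
      controller supporting 8 to 10 bits per word:

      	SPI_BPW_RANGE_MASK(8, 10)
      		= mask(bits <= 10) - mask(bits <= 7)
      		= 0x3ff - 0x7f
      		= 0x380		/* bits 7..9, i.e. 8, 9 and 10 bits per word */

      Before the fix the second term covered bits up to and including the min
      value (0xff), giving 0x300 and wrongly dropping the 8 bits-per-word case.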
    • spi: fix undefined behaviour in SPI_BPW_RANGE_MASK · 4dd9572a
      Stephen Warren authored
      The parameters to SPI_BPW_RANGE_MASK() are in the range 1..32. If 32 is
      used as a parameter, part of the expression is "1 << 32". Since 32 is >=
      the size of the type in use, such a shift is undefined behaviour. Add
      macro SPI_BIT_MASK to implement a special case and thus avoid undefined
      behaviour. Use this new macro rather than BIT() when implementing
      SPI_BPW_RANGE_MASK().
      
      This fixes build warnings such as:
      drivers/spi/spi-gpio.c:446:2: warning: left shift count >= width of type [enabled by default]
      
      SPI_BPW_MASK() already avoids this, since its parameter is also in range
      1..32, yet it only shifts by up to one less than the input parameter.
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Brown <broonie@linaro.org>
      4dd9572a
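
      Taken together with the min-handling fix above, the definitions end up
      looking roughly like this (sketch reconstructed from the two commit
      messages, not copied verbatim from the header):

      	#define SPI_BPW_MASK(bits)	BIT((bits) - 1)
      	/* special-case 32 so that 1 << 32 is never evaluated */
      	#define SPI_BIT_MASK(bits)	(((bits) == 32) ? ~0U : (BIT(bits) - 1))
      	#define SPI_BPW_RANGE_MASK(min, max) \
      		(SPI_BIT_MASK(max) - SPI_BIT_MASK(min - 1))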
  14. 22 May 2013, 1 commit
  15. 07 Apr 2013, 1 commit
    • spi: Initialize cs_gpio and cs_gpios with -ENOENT · 446411e1
      Andreas Larsson authored
      The return value from of_get_named_gpio is -ENOENT when the given index
      matches a hole in the "cs-gpios" property phandle list. However, the
      default value of cs_gpio in struct spi_device and entries of cs_gpios in
      struct spi_master is -EINVAL, which is documented to indicate that a
      GPIO line should not be used for the given spi_device.
      
      This sets the default value of cs_gpio in struct spi_device and entries
      of cs_gpios in struct spi_master to -ENOENT. Thus, -ENOENT is the only
      value used to indicate that no GPIO line should be used.
      Signed-off-by: Andreas Larsson <andreas@gaisler.com>
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
      446411e1
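
      Since -ENOENT, like -EINVAL, is negative, controller drivers can keep
      using gpio_is_valid() to decide whether a GPIO chip select is present;
      a sketch of the chip-select path (foo_spi_native_cs() is hypothetical):

      	if (gpio_is_valid(spi->cs_gpio))
      		gpio_set_value(spi->cs_gpio, enable);
      	else
      		foo_spi_native_cs(master, spi, enable);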
  16. 01 Apr 2013, 1 commit
  17. 11 Feb 2013, 1 commit
  18. 09 Feb 2013, 1 commit
    • spi: Add helper functions for setting up transfers · 6d9eecd4
      Lars-Peter Clausen authored
      Quite often the pattern used for setting up and transferring a synchronous SPI
      transaction looks very much like the following:
      
      	struct spi_message msg;
      	struct spi_transfer xfers[] = {
      		...
      	};
      
      	spi_message_init(&msg);
      	spi_message_add_tail(&xfers[0], &msg);
      	...
      	spi_message_add_tail(&xfers[ARRAY_SIZE(xfers) - 1], &msg);
      
      	ret = spi_sync(spi, &msg);
      
      This patch adds two new helper functions for handling this case. The first
      helper function spi_message_init_with_transfers() takes a spi_message and an
      array of spi_transfers. It will initialize the message and then call
      spi_message_add_tail() for each transfer in the array. E.g. the following
      
      	spi_message_init(&msg);
      	spi_message_add_tail(&xfers[0], &msg);
      	...
      	spi_message_add_tail(&xfers[ARRAY_SIZE(xfers) - 1], &msg);
      
      can be rewritten as
      
      	spi_message_init_with_transfers(&msg, xfers, ARRAY_SIZE(xfers));
      
      The second function spi_sync_transfer() takes a SPI device and an array of
      spi_transfers. It will allocate a new spi_message (on the stack) and add all
      transfers in the array to the message. Finally it will call spi_sync() on the
      message.
      
      E.g. the following
      
      	struct spi_message msg;
      	struct spi_transfer xfers[] = {
      		...
      	};
      
      	spi_message_init(&msg);
      	spi_message_add_tail(&xfers[0], &msg);
      	...
      	spi_message_add_tail(&xfers[ARRAY_SIZE(xfers) - 1], &msg);
      
      	ret = spi_sync(spi, &msg);
      
      can be rewritten as
      
      	struct spi_transfer xfers[] = {
      		...
      	};
      
      	ret = spi_sync_transfer(spi, xfers, ARRAY_SIZE(xfers));
      
      A coccinelle script to find such instances will follow.
      Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
      Reviewed-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      6d9eecd4
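
      Both helpers can be implemented as small static inlines on top of the
      existing API; a plausible sketch based only on the description above:

      	static inline void
      	spi_message_init_with_transfers(struct spi_message *m,
      					struct spi_transfer *xfers,
      					unsigned int num_xfers)
      	{
      		unsigned int i;

      		spi_message_init(m);
      		for (i = 0; i < num_xfers; i++)
      			spi_message_add_tail(&xfers[i], m);
      	}

      	static inline int spi_sync_transfer(struct spi_device *spi,
      					    struct spi_transfer *xfers,
      					    unsigned int num_xfers)
      	{
      		struct spi_message msg;

      		spi_message_init_with_transfers(&msg, xfers, num_xfers);
      		return spi_sync(spi, &msg);
      	}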
  19. 22 Nov 2012, 1 commit
  20. 28 Apr 2012, 1 commit
  21. 10 Mar 2012, 1 commit
    • spi: Trivial warning fix · 8f53602b
      Shubhrajyoti D authored
      The loop counter i iterates up to ntrans, which is unsigned,
      so make the loop counter i unsigned as well.
      
      Fix the below warning
      In file included from drivers/spi/spi-omap2-mcspi.c:38:
      include/linux/spi/spi.h: In function 'spi_message_alloc':
      include/linux/spi/spi.h:556: warning: comparison between signed and unsigned integer expressions
      
      Cc: Vitaly Wool <vwool@ru.mvista.com>
      Signed-off-by: Shubhrajyoti D <shubhrajyoti@ti.com>
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
      8f53602b
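
      The affected inline is shaped roughly like this, with the counter now
      unsigned to match ntrans (sketch, not a verbatim copy of the header):

      	static inline struct spi_message *spi_message_alloc(unsigned ntrans, gfp_t flags)
      	{
      		struct spi_message *m;

      		m = kzalloc(sizeof(struct spi_message)
      				+ ntrans * sizeof(struct spi_transfer), flags);
      		if (m) {
      			unsigned i;	/* was 'int i', hence the signed/unsigned warning */
      			struct spi_transfer *t = (struct spi_transfer *)(m + 1);

      			INIT_LIST_HEAD(&m->transfers);
      			for (i = 0; i < ntrans; i++, t++)
      				spi_message_add_tail(t, m);
      		}
      		return m;
      	}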
  22. 08 Mar 2012, 1 commit
    • spi: create a message queueing infrastructure · ffbbdd21
      Linus Walleij authored
      This rips the message queue in the PL022 driver out and pushes
      it into (optional) common infrastructure. Drivers that want to
      use the message pumping thread will need to define the new
      per-messags transfer methods and leave the deprecated transfer()
      method as NULL.
      
      Most of the design is described in the documentation changes that
      are included in this patch.
      
      Since there is a queue that needs to be stopped when the system
      is suspending/resuming, two new calls are implemented for the
      device drivers to call in their suspend()/resume() functions:
      spi_master_suspend() and spi_master_resume().
      
      ChangeLog v1->v2:
      - Remove Kconfig entry and do not make the queue support optional
        at all, instead be more aggressive and have it as part of the
        compulsory infrastructure.
      - If the .transfer() method is implemented, print a small
        deprecation notice and do not start the transfer pump.
      - Fix a bitrotted comment.
      ChangeLog v2->v3:
      - Fix up a problematic sequence courtesy of Chris Blair.
      - Stop rather than destroy the queue on suspend() courtesy of
        Chris Blair.
      Signed-off-by: Chris Blair <chris.blair@stericsson.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Tested-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Reviewed-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
      ffbbdd21
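
      A driver adopting the queue typically wires the new calls into its PM
      hooks like this (sketch; the foo_* names are hypothetical):

      	static int foo_spi_suspend(struct device *dev)
      	{
      		struct spi_master *master = dev_get_drvdata(dev);

      		/* stop the message pump so no new transfers are started */
      		return spi_master_suspend(master);
      	}

      	static int foo_spi_resume(struct device *dev)
      	{
      		struct spi_master *master = dev_get_drvdata(dev);

      		return spi_master_resume(master);
      	}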
  23. 18 Nov 2011, 1 commit
  24. 20 May 2011, 1 commit
  25. 22 Oct 2010, 1 commit
  26. 18 Aug 2010, 1 commit
  27. 29 Jun 2010, 1 commit
    • spi/mmc_spi: SPI bus locking API, using mutex · cf32b71e
      Ernst Schwab authored
      Add an SPI bus locking API to allow exclusive access to the SPI bus, especially,
      but not limited to, for the mmc_spi driver.
      
      Coded according to an outline from Grant Likely; here is his
      specification (accidentally swapped function names corrected):
      
      It requires 3 things to be added to struct spi_master.
      - 1 Mutex
      - 1 spin lock
      - 1 flag.
      
      The mutex protects spi_sync, and provides sleeping "for free"
      The spinlock protects the atomic spi_async call.
      The flag is set when the lock is obtained, and checked while holding
      the spinlock in spi_async().  If the flag is set, then spi_async()
      must fail immediately.
      
      The current runtime API looks like this:
      spi_async(struct spi_device*, struct spi_message*);
      spi_sync(struct spi_device*, struct spi_message*);
      
      The API needs to be extended to this:
      spi_async(struct spi_device*, struct spi_message*)
      spi_sync(struct spi_device*, struct spi_message*)
      spi_bus_lock(struct spi_master*)  /* although struct spi_device* might
      be easier */
      spi_bus_unlock(struct spi_master*)
      spi_async_locked(struct spi_device*, struct spi_message*)
      spi_sync_locked(struct spi_device*, struct spi_message*)
      
      Drivers can only call the last two if they already hold the spi_master_lock().
      
      spi_bus_lock() obtains the mutex, obtains the spin lock, sets the
      flag, and releases the spin lock before returning.  It doesn't even
      need to sleep while waiting for "in-flight" spi_transactions to
      complete because its purpose is to guarantee no additional
      transactions are added.  It does not guarantee that the bus is idle.
      
      spi_bus_unlock() clears the flag and releases the mutex, which will
      wake up any waiters.
      
      The difference between spi_async() and spi_async_locked() is that the
      locked version bypasses the check of the lock flag.  Both versions
      need to obtain the spinlock.
      
      The difference between spi_sync() and spi_sync_locked() is that
      spi_sync() must hold the mutex while enqueuing a new transfer.
      spi_sync_locked() doesn't because the mutex is already held.  Note
      however that spi_sync must *not* continue to hold the mutex while
      waiting for the transfer to complete, otherwise only one transfer
      could be queued up at a time!
      
      Almost no code needs to be written.  The current spi_async() and
      spi_sync() can probably be renamed to __spi_async() and __spi_sync()
      so that spi_async(), spi_sync(), spi_async_locked() and
      spi_sync_locked() can just become wrappers around the common code.
      
      spi_sync() is protected by a mutex because it can sleep.
      spi_async() needs to be protected with a flag and a spinlock because
      it can be called atomically and must not sleep.
      Signed-off-by: Ernst Schwab <eschwab@online.de>
      [grant.likely@secretlab.ca: use spin_lock_irqsave()]
      Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
      Tested-by: Matt Fleming <matt@console-pimps.org>
      Tested-by: Antonio Ospite <ospite@studenti.unina.it>
      cf32b71e
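
      Usage from a protocol driver such as mmc_spi then looks roughly like
      this (sketch; cmd_msg and data_msg are hypothetical spi_messages that
      must not be interleaved with other clients' traffic):

      	spi_bus_lock(spi->master);

      	err = spi_sync_locked(spi, &cmd_msg);
      	if (!err)
      		err = spi_sync_locked(spi, &data_msg);

      	spi_bus_unlock(spi->master);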
  28. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo authored
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  ie. if only gfp is used,
        gfp.h, if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
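
      A typical instance of the conversion in an affected .c file (editorial
      illustration): a file calling kzalloc()/kfree() that previously relied
      on the implicit percpu.h -> slab.h chain now includes slab.h directly:

      	#include <linux/module.h>
      	#include <linux/slab.h>	/* added: kzalloc()/kfree() no longer come in implicitly */
      	#include <linux/spi/spi.h>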
  29. 23 Sep 2009, 3 commits
  30. 01 Jul 2009, 2 commits
    • spi: add spi_master flag word · 70d6027f
      David Brownell authored
      Add a new spi_master.flags word listing constraints relevant to that
      controller.  Define the first constraint bit: a half duplex restriction.
      Include that constraint in the OMAP1 MicroWire controller driver.
      
      Have the mmc_spi host be the first customer of this flag.  Its coding
      relies heavily on full duplex transfers, so it must fail when the
      underlying controller driver won't perform them.
      
      (The spi_write_then_read routine could use it too: use the
      temporarily-withdrawn full-duplex speedup unless this flag is set, in
      which case the existing code applies.  Similarly, any spi_master
      implementing only SPI_3WIRE should set the flag.)
      Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      70d6027f
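
      A sketch of both sides of the contract, using the flag introduced here:

      	/* controller driver: advertise the restriction */
      	master->flags = SPI_MASTER_HALF_DUPLEX;

      	/* protocol driver such as mmc_spi: refuse full-duplex-dependent use */
      	if (spi->master->flags & SPI_MASTER_HALF_DUPLEX)
      		return -EINVAL;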
    • spi: new spi->mode bits · b55f627f
      David Brownell authored
      Add two new spi_device.mode bits to accommodate more protocol options, and
      pass them through to usermode drivers:
      
       * SPI_NO_CS ... a second 3-wire variant, where the chipselect
         line is removed instead of a data line; transfers are still
         full duplex.
      
         This obviously has STRONG protocol implications since the
         chipselect transitions can't be used to synchronize state
         transitions with the SPI master.
      
       * SPI_READY ... defines an open-drain signal that's pulled low
         to pause the clock.  This defines a 5-wire variant (normal
         4-wire SPI plus READY) and two 4-wire variants (READY plus
         each of the 3-wire flavors).
      
         Such hardware flow control can be a big win.  There are ADC
         converters and flash chips that expose READY signals, but not
         many host controllers support it today.
      
      The spi_bitbang code should be changed to use SPI_NO_CS instead of its
      current nonportable hack.  That's a mode most hardware can easily support
      (unlike SPI_READY).
      Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
      Cc: "Paulraj, Sandeep" <s-paulraj@ti.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b55f627f
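
      A protocol driver opts in to one of the new bits through spi_setup(),
      e.g. (sketch):

      	spi->mode |= SPI_NO_CS;		/* no chipselect line wired to this device */
      	/* or: spi->mode |= SPI_READY;	slave may pause the clock via a READY line */
      	ret = spi_setup(spi);
      	if (ret < 0)
      		return ret;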
  31. 19 Jun 2009, 2 commits
  32. 22 Apr 2009, 1 commit