1. 12 January 2012, 5 commits
    • mmc: core: Use delayed work in clock gating framework · 597dd9d7
      Authored by Sujit Reddy Thumma
      The current clock gating framework disables the MCI clock as soon as a
      request completes and enables it when a new request arrives. This
      aggressive clock gating, when enabled, causes the following issues:
      
      When there are back-to-back requests from the queue layer, we unnecessarily
      end up disabling and enabling the clocks between these requests, since 8 MCLK
      clock cycles is a very short duration compared to the time delay between
      back-to-back requests reaching the MMC layer. This overhead can affect
      overall performance, depending on how long the clock enable and disable
      calls take, which is platform dependent. For example, on some platforms the
      clock control lives not on the local processor but on a different
      subsystem, and the time taken to perform the clock enable/disable adds
      significant overhead.
      
      Also, if the host controller driver decides to disable the host clock as
      well when mmc_set_ios() is called with ios.clock=0, that adds further
      delay, and it is quite likely that the next request has already arrived
      and is needlessly blocked waiting for the clocks to be re-enabled. This is
      seen frequently when the processor runs at high speeds and on multi-core
      platforms, and it reduces overall throughput compared to leaving clock
      gating disabled.
      
      Fix this by deferring the clock-off operation, posting it to a delayed
      workqueue. Also cancel any pending, not-yet-run work when the card is
      accessed again.
      
      A sysfs entry is provided to tune the delay as needed; the default
      value is 200ms.
      Signed-off-by: Sujit Reddy Thumma <sthumma@codeaurora.org>
      Acked-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      597dd9d7
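      A minimal sketch (not the actual patch) of the delayed-workqueue idea
      described above, using standard kernel delayed-work helpers; the struct
      and field names here are illustrative:

          #include <linux/workqueue.h>
          #include <linux/jiffies.h>
          #include <linux/spinlock.h>

          /* Illustrative host state; the real fields live in struct mmc_host. */
          struct gating_host {
                  struct delayed_work clk_gate_work; /* deferred clock-off */
                  unsigned long clkgate_delay_ms;    /* sysfs-tunable, default 200 */
                  spinlock_t clk_lock;
                  bool clk_gated;
          };

          /* Runs clkgate_delay_ms after the last request completed. */
          static void clk_gate_worker(struct work_struct *work)
          {
                  struct gating_host *host =
                          container_of(work, struct gating_host, clk_gate_work.work);

                  spin_lock_bh(&host->clk_lock);
                  if (!host->clk_gated) {
                          /* platform-specific clock-off (e.g. set_ios with clock=0) */
                          host->clk_gated = true;
                  }
                  spin_unlock_bh(&host->clk_lock);
          }

          /* Request completed: defer the clock-off instead of gating at once. */
          static void clk_gate_delayed(struct gating_host *host)
          {
                  schedule_delayed_work(&host->clk_gate_work,
                                        msecs_to_jiffies(host->clkgate_delay_ms));
          }

          /* New request arrived: cancel pending gate work and ungate if needed. */
          static void clk_ungate(struct gating_host *host)
          {
                  cancel_delayed_work_sync(&host->clk_gate_work);
                  spin_lock_bh(&host->clk_lock);
                  if (host->clk_gated) {
                          /* platform-specific clock-on */
                          host->clk_gated = false;
                  }
                  spin_unlock_bh(&host->clk_lock);
          }

          static void clk_gate_init(struct gating_host *host)
          {
                  spin_lock_init(&host->clk_lock);
                  host->clkgate_delay_ms = 200; /* matches the 200ms default above */
                  host->clk_gated = false;
                  INIT_DELAYED_WORK(&host->clk_gate_work, clk_gate_worker);
          }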
    • mmc: card: Use manufacturer ID symbols in card quirks. · c59d4473
      Authored by Chris Ball
      No functional change; adds macros for card manufacturer IDs.
      Signed-off-by: Chris Ball <cjb@laptop.org>
      Cc: Andrei E. Warkentin <andrey.warkentin@gmail.com>
      Cc: Stefan Nilsson XK <stefan.xk.nilsson@stericsson.com>
      c59d4473
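      The shape of such a cleanup, as a sketch; the macro names, ID values and
      quirk entry below are illustrative, not the patch's exact additions:

          /* Illustrative CID manufacturer-ID symbols (values are examples). */
          #define EXAMPLE_CID_MANFID_VENDOR_A     0x02
          #define EXAMPLE_CID_MANFID_VENDOR_B     0x11

          /*
           * A quirk table entry can then name the vendor instead of using a
           * bare magic number, e.g. (hypothetical entry and quirk flag):
           *
           *   MMC_FIXUP("SOME-CARD", EXAMPLE_CID_MANFID_VENDOR_A, CID_OEMID_ANY,
           *             add_quirk, MMC_QUIRK_SOME_FLAG),
           */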
    • mmc: debugfs: expose the SDCLK frq in sys ios · df16219f
      Authored by Giuseppe CAVALLARO
      This patch exposes the actual SDCLK frequency in the
      /sys/kernel/debug/mmcX/ios entry.
      
      For example, if the maximum clock for a normal-speed card is 20MHz, that
      value is reported in /sys/kernel/debug/mmcX/ios, but the actual SDCLK
      frequency (i.e. base clock / divisor) is not reported at all: in that
      case, on the Arasan HC, it would be 48/4 = 12 MHz.
      Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Acked-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      df16219f
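      A sketch of how such a value can be printed from the ios debugfs show
      routine, assuming the host driver stores the effective SDCLK rate in a
      field (called actual_clock below; treat the field name as an assumption):

          #include <linux/seq_file.h>
          #include <linux/mmc/host.h>

          /*
           * Excerpt-style sketch: print both the requested ios clock and the
           * effective SDCLK (e.g. base clock / divisor) computed by the host.
           */
          static int example_ios_show(struct seq_file *s, void *data)
          {
                  struct mmc_host *host = s->private;
                  struct mmc_ios *ios = &host->ios;

                  seq_printf(s, "clock:\t\t%u Hz\n", ios->clock);
                  if (host->actual_clock)
                          seq_printf(s, "actual clock:\t%u Hz\n",
                                     host->actual_clock);

                  return 0;
          }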
    • mmc: sdio: Fix to support any block size optimally · 052d81da
      Authored by Stefan Nilsson XK
      This patch allows any block size to be set on the SDIO link, while still
      transferring an arbitrarily sized packet (adjusted in size using
      sdio_align_size) in an optimal way (preferably as a single transfer).
      
      Previously, if the block size was larger than the default of 512 bytes
      and the transfer size was exactly one block (possibly thanks to using
      sdio_align_size to get an optimal transfer size), the data was sent as a
      number of byte transfers instead of one block transfer. Also, if the
      number of blocks was (max_blocks * N) + 1, the transfer was conducted
      with a number of block transfers and finished off with a number of byte
      transfers.
      
      While making this change it was also possible to break out the quirk for
      broken byte mode in a much cleaner way, and to collect the logic for
      choosing byte or block transfers into one function instead of two.
      Signed-off-by: Stefan Nilsson XK <stefan.xk.nilsson@stericsson.com>
      Signed-off-by: Ulf Hansson <ulf.hansson@stericsson.com>
      Acked-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      052d81da
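      A simplified sketch of the single byte-vs-block decision point the
      message describes; the function and parameter names are illustrative,
      not the kernel helper itself:

          #include <stdbool.h>

          /*
           * Decide whether the remaining payload of 'size' bytes should be
           * sent in block mode, given the configured block size and whether
           * the card's byte mode is known to be broken.
           */
          static bool should_use_block_mode(unsigned int size, unsigned int blksz,
                                            bool blocks_supported,
                                            bool byte_mode_broken)
          {
                  if (!blocks_supported)
                          return false;   /* card/host can only do byte mode */

                  /*
                   * Use block mode whenever the payload covers at least one
                   * full block.  This includes the "exactly one block" case
                   * that the old code pushed through byte transfers when
                   * blksz > 512, and the (max_blocks * N) + 1 case above.
                   */
                  if (size >= blksz)
                          return true;

                  /* Short tail: byte mode, unless it is known to be broken. */
                  return byte_mode_broken;
          }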
    • mmc: sd: Macro name cleanup for high speed dtr · fffe5d5a
      Authored by Qiang Liu
      Add new macros for the high speed 50MHz case, rather than having
      a confusing reuse of the value for UHS SDR50, which is 100MHz.
      Reported-by: Aaron Lu <aaron.lu@amd.com>
      Signed-off-by: Chris Ball <cjb@laptop.org>
      fffe5d5a
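      The spirit of the cleanup, as a sketch; the macro names below are
      illustrative stand-ins, not necessarily the symbols the patch defines:

          /*
           * Maximum data transfer rates in Hz.  High Speed mode tops out at
           * 50 MHz, while UHS-I SDR50 clocks the interface at 100 MHz, so
           * giving each its own symbol avoids reusing one value for both.
           */
          #define EXAMPLE_HIGH_SPEED_MAX_DTR      50000000
          #define EXAMPLE_UHS_SDR50_MAX_DTR       100000000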
  2. 24 December 2011, 14 commits
  3. 23 December 2011, 18 commits
  4. 22 December 2011, 3 commits
    • VFS: Fix race between CPU hotplug and lglocks · e30e2fdf
      Authored by Srivatsa S. Bhat
      Currently, the *_global_[un]lock_online() routines are not at all synchronized
      with CPU hotplug. Soft-lockups detected as a consequence of this race were
      reported earlier at https://lkml.org/lkml/2011/8/24/185. (Thanks to Cong Meng
      for finding out that the root cause of this issue is the race condition
      between br_write_[un]lock() and CPU hotplug, which results in the lock states
      getting messed up.)
      
      Fixing this race by just adding {get,put}_online_cpus() at appropriate places
      in *_global_[un]lock_online() is not a good option, because then
      br_write_[un]lock() would suddenly become blocking, whereas it has been
      non-blocking all this time, and we want to keep it that way.
      
      So, overall, we want to ensure 3 things:
      1. br_write_lock() and br_write_unlock() must remain as non-blocking.
      2. The corresponding lock and unlock of the per-cpu spinlocks must not happen
         for different sets of CPUs.
      3. Either prevent any new CPU online operation in between this lock-unlock, or
         ensure that the newly onlined CPU does not proceed with its corresponding
         per-cpu spinlock unlocked.
      
      To achieve all this:
      (a) We introduce a new spinlock that is taken by the *_global_lock_online()
          routine and released by the *_global_unlock_online() routine.
      (b) We register a callback for CPU hotplug notifications, and this callback
          takes the same spinlock as above.
      (c) We maintain a bitmap which is close to the cpu_online_mask, and once it is
          initialized in the lock_init() code, all future updates to it are done in
          the callback, under the above spinlock.
      (d) The above bitmap is used (instead of cpu_online_mask) while locking and
          unlocking the per-cpu locks.
      
      The callback takes the spinlock upon the CPU_UP_PREPARE event. So, if the
      br_write_lock-unlock sequence is in progress, the callback keeps spinning,
      thus preventing the CPU online operation till the lock-unlock sequence is
      complete. This takes care of requirement (3).
      
      The bitmap that we maintain remains unmodified throughout the lock-unlock
      sequence, since all updates to it are managed by the callback, which takes
      the same spinlock as the one taken by the lock code and released only by the
      unlock routine. Combining this with (d) above, satisfies requirement (2).
      
      Overall, since we use a spinlock (mentioned in (a)) to prevent CPU hotplug
      operations from racing with br_write_lock-unlock, requirement (1) is also
      taken care of.
      
      Note that a CPU offline operation can actually run in parallel with our
      lock-unlock sequence, because our callback doesn't react to notifications
      earlier than CPU_DEAD (in order to maintain our bitmap properly). This
      means that, since we use our own (deliberately stale) bitmap during the
      lock-unlock sequence, we could end up unlocking the per-cpu lock of a CPU
      that has since gone offline (because we had locked it earlier, when the
      CPU was online), in order to satisfy requirement (2). But this is
      harmless, though it looks a bit awkward.
      Debugged-by: Cong Meng <mc@linux.vnet.ibm.com>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Cc: stable@vger.kernel.org
      e30e2fdf
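      A condensed sketch of the mechanism described in points (a)-(d) above,
      using the CPU-hotplug notifier API of that era; the names are
      illustrative, not the lglock macros themselves:

          #include <linux/cpu.h>
          #include <linux/cpumask.h>
          #include <linux/notifier.h>
          #include <linux/percpu.h>
          #include <linux/spinlock.h>

          static DEFINE_SPINLOCK(online_guard);          /* (a) held across lock..unlock */
          static DECLARE_BITMAP(locked_cpus, NR_CPUS);   /* (c) our own online bitmap */
          static DEFINE_PER_CPU(spinlock_t, percpu_lock);

          /*
           * (b) Hotplug callback, registered with register_hotcpu_notifier().
           * It contends on online_guard at CPU_UP_PREPARE, so a CPU cannot
           * come online in the middle of a br_write_lock/unlock pair, and it
           * is the only place that updates the bitmap.
           */
          static int example_cpu_callback(struct notifier_block *nb,
                                          unsigned long action, void *hcpu)
          {
                  unsigned int cpu = (unsigned long)hcpu;

                  switch (action & ~CPU_TASKS_FROZEN) {
                  case CPU_UP_PREPARE:
                          spin_lock(&online_guard);
                          cpumask_set_cpu(cpu, to_cpumask(locked_cpus));
                          spin_unlock(&online_guard);
                          break;
                  case CPU_DEAD:
                          spin_lock(&online_guard);
                          cpumask_clear_cpu(cpu, to_cpumask(locked_cpus));
                          spin_unlock(&online_guard);
                          break;
                  }
                  return NOTIFY_OK;
          }

          /* (d) Lock and unlock walk the same, deliberately stale, bitmap. */
          static void example_global_lock_online(void)
          {
                  int cpu;

                  spin_lock(&online_guard);      /* released in unlock_online() */
                  for_each_cpu(cpu, to_cpumask(locked_cpus))
                          spin_lock(&per_cpu(percpu_lock, cpu));
          }

          static void example_global_unlock_online(void)
          {
                  int cpu;

                  for_each_cpu(cpu, to_cpumask(locked_cpus))
                          spin_unlock(&per_cpu(percpu_lock, cpu));
                  spin_unlock(&online_guard);
          }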
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · ecefc36b
      Authored by Linus Torvalds
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net:
        net: Add a flow_cache_flush_deferred function
        ipv4: reintroduce route cache garbage collector
        net: have ipconfig not wait if no dev is available
        sctp: Do not account for sizeof(struct sk_buff) in estimated rwnd
        asix: new device id
        davinci-cpdma: fix locking issue in cpdma_chan_stop
        sctp: fix incorrect overflow check on autoclose
        r8169: fix Config2 MSIEnable bit setting.
        llc: llc_cmsg_rcv was getting called after sk_eat_skb.
        net: bpf_jit: fix an off-one bug in x86_64 cond jump target
        iwlwifi: update SCD BC table for all SCD queues
        Revert "Bluetooth: Revert: Fix L2CAP connection establishment"
        Bluetooth: Clear RFCOMM session timer when disconnecting last channel
        Bluetooth: Prevent uninitialized data access in L2CAP configuration
        iwlwifi: allow to switch to HT40 if not associated
        iwlwifi: tx_sync only on PAN context
        mwifiex: avoid double list_del in command cancel path
        ath9k: fix max phy rate at rate control init
        nfc: signedness bug in __nci_request()
        iwlwifi: do not set the sequence control bit is not needed
      ecefc36b
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound · d5ed5e48
      Authored by Linus Torvalds
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
        ALSA: atmel/ac97c: using software reset instead hardware reset if not available
      d5ed5e48