1. 16 Oct 2008, 1 commit
  2. 20 Aug 2008, 1 commit
  3. 03 Aug 2008, 1 commit
    • firewire: Preserve response data alignment bug when it is harmless · 8401d92b
      David Moore authored
      Recently, a bug having to do with the alignment of transaction response
      data was fixed.  However, some apps such as libdc1394 relied on the
      presence of that bug in order to function correctly.  In order to stay
      compatible with old versions of those apps, this patch preserves the bug
      in cases where it is harmless to normal operation (such as the single
      quadlet read) due to a simple duplication of data.  This guarantees
      maximum compatibility for those users who are using the old app with the
      fixed kernel.
      Signed-off-by: David Moore <dcm@acm.org>
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      8401d92b
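      The idea can be sketched in a standalone way (struct and helper names here are
      stand-ins, not the actual fw-cdev code): for a single-quadlet response the
      quadlet is written both at the correct offsetof-based position and at the old
      sizeof-based position, so an application built against either layout reads the
      same value.

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      /* Same layout as struct fw_cdev_event_response (illustration only). */
      struct response_event {
          uint64_t closure;
          uint32_t type;
          uint32_t rcode;
          uint32_t length;
          uint32_t data[];            /* offset 20; sizeof is 24 on 64-bit */
      };

      /*
       * Hypothetical helper: place a single read quadlet at both offsets,
       * the fixed (offsetof-based) one and the old buggy (sizeof-based)
       * one, so old and new userspace see the same value.
       */
      static void queue_single_quadlet(char *buf, const struct response_event *ev,
                                       uint32_t quadlet)
      {
          memcpy(buf, ev, sizeof(*ev));
          memcpy(buf + offsetof(struct response_event, data),
                 &quadlet, sizeof(quadlet));    /* where fixed apps look */
          memcpy(buf + sizeof(struct response_event),
                 &quadlet, sizeof(quadlet));    /* where old apps look */
      }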
  4. 27 Jul 2008, 1 commit
    • dma-mapping: add the device argument to dma_mapping_error() · 8d8bb39b
      FUJITA Tomonori authored
      Add per-device dma_mapping_ops support for CONFIG_X86_64, as the POWER
      architecture does:
      
      This enables us to cleanly fix the Calgary IOMMU issue that some devices
      are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423).
      
      I think that per-device dma_mapping_ops support would also be helpful for
      KVM people to support PCI passthrough, but Andi thinks that this makes PCI
      passthrough more difficult to support (see the above thread), so I have
      CC'ed the KVM camp.  Comments are appreciated.
      
      A pointer to dma_mapping_ops is added to struct dev_archdata.  If the
      pointer is non-NULL, the DMA operations in asm/dma-mapping.h use it; if it
      is NULL, the system-wide dma_ops pointer is used as before.
      
      If it's useful for the KVM people, I plan to implement a mechanism to
      register a hook that is called when a new PCI (or DMA-capable) device is
      created (it works with hot plugging).  This enables IOMMUs to set up an
      appropriate dma_mapping_ops per device.
      
      The major obstacle is that dma_mapping_error, unlike the other DMA
      operations, doesn't take a pointer to the device, so x86 can't have
      per-device dma_mapping_ops.  Note that all the POWER IOMMUs use the same
      dma_mapping_error function, so this is not a problem for POWER, but the
      x86 IOMMUs use different dma_mapping_error functions.
      
      The first patch adds the device argument to dma_mapping_error.  The patch
      is trivial but large since it touches lots of drivers and dma-mapping.h
      across all architectures.
      
      This patch:
      
      dma_mapping_error(), unlike the other DMA operations, doesn't take a
      pointer to the device, so we can't have per-device dma_mapping_ops.

      Note that POWER already has per-device dma_mapping_ops, but all the POWER
      IOMMUs use the same dma_mapping_error function, whereas the x86 IOMMUs use
      different dma_mapping_error functions and therefore need the device
      argument.
      
      [akpm@linux-foundation.org: fix sge]
      [akpm@linux-foundation.org: fix svc_rdma]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix bnx2x]
      [akpm@linux-foundation.org: fix s2io]
      [akpm@linux-foundation.org: fix pasemi_mac]
      [akpm@linux-foundation.org: fix sdhci]
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix sparc]
      [akpm@linux-foundation.org: fix ibmvscsi]
      Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Muli Ben-Yehuda <muli@il.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Avi Kivity <avi@qumranet.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8d8bb39b
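      The dispatch this series introduces can be sketched like this (simplified
      standalone types; get_dma_ops and the struct members shown are stand-ins for
      the real dev_archdata/dma_mapping_ops definitions): dma_mapping_error() now
      receives the device, consults per-device ops first, and falls back to the
      system-wide dma_ops otherwise.

      typedef unsigned long dma_addr_t;

      struct device;

      struct dma_mapping_ops {
          int (*mapping_error)(struct device *dev, dma_addr_t dma_addr);
          /* ...map_single, map_sg, etc. omitted... */
      };

      struct dev_archdata {
          struct dma_mapping_ops *dma_ops;    /* NULL: use the global ops */
      };

      struct device {
          struct dev_archdata archdata;
      };

      static struct dma_mapping_ops *dma_ops;     /* system-wide, as before */

      static struct dma_mapping_ops *get_dma_ops(struct device *dev)
      {
          if (dev && dev->archdata.dma_ops)
              return dev->archdata.dma_ops;
          return dma_ops;
      }

      /* The added device argument makes per-device error checks possible. */
      static int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
      {
          struct dma_mapping_ops *ops = get_dma_ops(dev);

          if (ops && ops->mapping_error)
              return ops->mapping_error(dev, dma_addr);
          return dma_addr == 0;                   /* simple fallback for the sketch */
      }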
  5. 26 Jul 2008, 2 commits
  6. 25 Jul 2008, 1 commit
  7. 20 Jul 2008, 1 commit
    • firewire: queue the right number of data · f9543d0a
      JiSheng Zhang authored
      On some platforms there are 4 padding bytes at the end of struct
      fw_cdev_event_response, and the member __u32 data points at these padding
      bytes.  When complete_transaction in fw-cdev.c queues the response and
      data, it queues them like this:
      |response (excluding padding bytes)|4 padding bytes|4 padding bytes|data,
      i.e. 4 extra bytes.  In other words it uses "&response + sizeof(response)"
      while the rest of the kernel and the userspace library use "&response +
      offsetof(typeof(response), data)", so the last 4 bytes of data are lost.
      This patch fixes that without changing the struct definition.
      Signed-off-by: JiSheng Zhang <jszhang3@mail.ustc.edu.cn>
      
      This fixes responses to outbound block read requests on 64-bit
      architectures.  Tested on i686, x86-64, and x86-64 with i686 userland,
      using firecontrol and gscanbus.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      f9543d0a
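      The size/offset mismatch is easy to demonstrate with a small standalone
      program (the struct mirrors the layout described above; the authoritative
      definition is in the firewire-cdev UAPI header):

      #include <stdio.h>
      #include <stddef.h>
      #include <stdint.h>

      struct event_response {        /* same layout as fw_cdev_event_response */
          uint64_t closure;
          uint32_t type;
          uint32_t rcode;
          uint32_t length;
          uint32_t data[];
      };

      int main(void)
      {
          /*
           * On 64-bit platforms the struct is padded to a multiple of 8:
           * sizeof() is 24 while data starts at offset 20.  Queueing the
           * payload at "&response + sizeof(response)" therefore inserts
           * 4 stray bytes and drops the last 4 bytes of data.
           */
          printf("offsetof(data) = %zu\n", offsetof(struct event_response, data));
          printf("sizeof(struct) = %zu\n", sizeof(struct event_response));
          return 0;
      }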
  8. 14 Jul 2008, 11 commits
    • firewire: warn on unfinished transactions during card removal · 1e8afea1
      Stefan Richter authored
      After card->done and card->work are completed, any remaining pending
      request would be a bug.  We cannot safely complete a transaction at
      that point anymore.
      
      IOW, card users must not drop their last fw_card reference (usually an
      indirect reference held through fw_device references) before their last
      outbound transaction through that card has finished.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      1e8afea1
    • firewire: small fw_fill_request cleanup · b9549bc6
      Stefan Richter authored
        - better name for a function argument
        - removal of a local variable which became unnecessary after
          "fully initialize fw_transaction before marking it pending"
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      b9549bc6
    • firewire: fully initialize fw_transaction before marking it pending · e9aeb46c
      Stefan Richter authored
      In theory, card->flush_timer could already access a transaction between
      fw_send_request()'s spin_unlock_irqrestore and the rest of what happens
      in fw_send_request().  This would happen if the process which sends the
      request is preempted and put to sleep right after spin_unlock_irqrestore
      for longer than 100ms.
      
      Therefore we fill in everything in struct fw_transaction which the
      flush_timer might look at before we lift the lock.
      
      To do:  Ensure that the timer does not pick up the transaction before
      the time of the AT request event plus split transaction timeout.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      e9aeb46c
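      The pattern being enforced can be sketched in a self-contained way (a
      pthread mutex stands in for card->lock, a simple singly linked list for the
      pending-transaction list, and the field names are assumptions): everything
      the timer might dereference is written before the transaction becomes
      reachable and the lock is dropped.

      #include <pthread.h>

      struct transaction {
          struct transaction *next;
          unsigned long timestamp;                   /* read by the flush timer */
          void (*callback)(struct transaction *t, int status);
          void *callback_data;
      };

      struct card {
          pthread_mutex_t lock;
          struct transaction *pending;               /* scanned by the timer */
      };

      static void send_request(struct card *card, struct transaction *t,
                               unsigned long now,
                               void (*callback)(struct transaction *t, int status),
                               void *callback_data)
      {
          pthread_mutex_lock(&card->lock);

          /* Fill in every field the timer may look at... */
          t->timestamp = now;
          t->callback = callback;
          t->callback_data = callback_data;

          /* ...and only then make the transaction reachable. */
          t->next = card->pending;
          card->pending = t;

          pthread_mutex_unlock(&card->lock);
      }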
    • firewire: fix race of bus reset with request transmission · 792a6102
      Stefan Richter authored
      Reported by Jay Fenlason:  A bus reset tasklet may call
      fw_flush_transactions and touch transactions (call their callback which
      will free them) while the context which submitted the transaction is
      still inserting it into the transmission queue.
      
      A simple solution to this problem is to _not_ "flush" the transactions on
      a bus reset (i.e. not complete them as 'cancelled' right away).  They will
      now simply time out instead (completed as 'cancelled' by the split-timeout
      timer).
      
      Jay Fenlason thought of this fix too but I was quicker to type it out.
      :-)
      
      Background:
      Contexts which access an instance of struct fw_transaction are:
       1. the submitter, until it inserted the packet which is embedded in the
          transaction into the AT req DMA,
       2. the AsReqTrContext tasklet when the request packet was acked by the
          responder node or transmission to the responder failed,
       3. the AsRspRcvContext tasklet when it found a request which matched
          an incoming response,
       4. the card->flush_timer when it picks up timed-out transactions to
          cancel them,
       5. the bus reset tasklet when it cancels transactions (this access is
          eliminated by this patch),
       6. a process which shuts down an fw_card (unregisters it from fw-core
          when the controller is unbound from fw-ohci) --- although in this
          case there shouldn't really be any transactions anymore because we
          wait until all card users finished their business with the card.
      
      All of these contexts run concurrently (except for the 6th, presumably).
      The 1st is safe against the 2nd and 3rd because of the way a request
      packet is carefully submitted to the hardware.  A race between 2nd and
      3rd has been fixed a while ago (bug 9617).  The 4th is almost safe
      against 1st, 2nd, 3rd;  there are issues with it if huge scheduling
      latencies occur, to be fixed separately.  The 5th looks safe against
      2nd, 3rd, and 4th but is unsafe against 1st.  Maybe this could be fixed
      with an explicit state variable in struct fw_transaction.  But that would
      require fw_transaction to be rewritten as a dynamically allocated,
      reference-counted object --- not a good solution if we can instead simply
      kill this 5th accessing context (replace it by the 4th).
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      792a6102
    • firewire: don't respond to broadcast write requests · a7ea6782
      Stefan Richter authored
      Contrary to a comment in the source, request->ack of a broadcast write
      request can be ACK_PENDING.  Hence the existing check is insufficient.
      
      Debug dmesg before:
      AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff
      AT spd 0 tl 00, ffff -> ffc0, ack_complete, W resp
      And the requesting node (linux1394) reports an unsolicited response.
      
      Debug dmesg after:
      AR spd 0 tl 00, ffc0 -> ffff, ack_pending , QW req, fffff0000234 = ffffffff
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      a7ea6782
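      A sketch of the corrected condition (helper and constant names assumed; the
      real check lives in fw-transaction.c): whether to respond has to be decided
      from the request's destination ID, not from the ack code alone, because a
      broadcast write can legitimately be acked as ack_pending.

      #include <stdbool.h>
      #include <stdint.h>

      #define NODE_NUMBER_MASK   0x3f    /* low 6 bits of the destination ID */
      #define BROADCAST_NODE     0x3f    /* IEEE 1394 broadcast node number */

      static bool write_request_needs_response(uint16_t destination_id)
      {
          /* Never respond to a request addressed to the broadcast node,
           * regardless of whether it was acked pending or complete. */
          return (destination_id & NODE_NUMBER_MASK) != BROADCAST_NODE;
      }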
    • firewire: clean up fw_card reference counting · 459f7923
      Stefan Richter authored
      This is a functionally equivalent replacement of the current reference
      counting of struct fw_card instances.  It only converts it to common
      idioms as suggested by Kristian Høgsberg:
        - struct kref replaces atomic_t as the counter.
        - wait_for_completion is used to wait for all card users to complete.
      
      BTW, it may make sense to count card->flush_timer and card->work as
      card users too.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      459f7923
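      The resulting idiom looks roughly like this (kernel-style sketch with a
      simplified stand-in for struct fw_card; the real code carries more state):
      kref counts the users, and the release callback fires the completion that
      the shutdown path waits on.

      #include <linux/kernel.h>
      #include <linux/kref.h>
      #include <linux/completion.h>

      struct card {                        /* stand-in for struct fw_card */
          struct kref kref;
          struct completion done;
      };

      static void card_release(struct kref *kref)
      {
          struct card *card = container_of(kref, struct card, kref);

          complete(&card->done);           /* the last user just went away */
      }

      static void card_init(struct card *card)
      {
          kref_init(&card->kref);          /* initial reference, held by the core */
          init_completion(&card->done);
      }

      static void card_get(struct card *card)
      {
          kref_get(&card->kref);
      }

      static void card_put(struct card *card)
      {
          kref_put(&card->kref, card_release);
      }

      /* Card removal: drop the core's reference, then wait for all users. */
      static void card_shutdown(struct card *card)
      {
          card_put(card);
          wait_for_completion(&card->done);
      }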
    • firewire: clean up some includes · 2147ef20
      Stefan Richter authored
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      2147ef20
    • firewire: remove unused struct members · bbf094cf
      Stefan Richter authored
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      bbf094cf
    • firewire: implement broadcast_channel CSR for 1394a compliance · e534fe16
      Stefan Richter authored
      See IEEE 1394a clause 8.3.2.3.11.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      e534fe16
    • firewire: fw-sbp2: spin disks down on suspend and shutdown · 2635f96f
      Stefan Richter authored
      This instructs sd_mod to send START STOP UNIT on suspend and resume,
      and on driver unbinding or unloading (including when the system is shut
      down).
      
      We don't do this, though, if multiple initiators may log in to the target.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Tested-by: Tino Keitel <tino.keitel@gmx.de>
      2635f96f
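      The mechanism can be sketched as follows (a hypothetical slave_configure-style
      hook; whether the flag is called manage_start_stop on struct scsi_device in
      this kernel version is an assumption): the driver opts the logical unit in,
      and sd_mod then issues START STOP UNIT on suspend/resume and shutdown only for
      opted-in devices.

      #include <scsi/scsi_device.h>

      /* Hypothetical sketch, not the actual fw-sbp2 code. */
      static int sketch_slave_configure(struct scsi_device *sdev, int exclusive_login)
      {
          if (exclusive_login)                 /* single-initiator case only */
              sdev->manage_start_stop = 1;
          return 0;
      }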
    • firewire: fw-sbp2: fix spindown for PL-3507 and TSB42AA9 firmwares · ffcaade3
      Stefan Richter authored
      Reported by Tino Keitel:  PL-3507 with firmware from Prolific does not
      spin down the disk on START STOP UNIT with power condition = 0 and start
      = 0.  It does however work with power condition = 2 or 3.
      
      Also found while investigating this:  DViCO Momobay CX-1 and FX-3A (TI
      TSB42AA9/A based) become unresponsive after START STOP UNIT with power
      condition = 0 and start = 0.  They stay responsive if power condition is
      set when stopping the motor.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Tested-by: Tino Keitel <tino.keitel@gmx.de>
      ffcaade3
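      For reference, the CDB difference described above (byte layout per the SCSI
      START STOP UNIT definition; this is an illustration, not the sd_mod code):

      #include <stdint.h>
      #include <string.h>

      #define START_STOP   0x1b               /* SCSI START STOP UNIT opcode */

      static void build_stop_cdb(uint8_t cdb[6], int use_power_condition)
      {
          memset(cdb, 0, 6);
          cdb[0] = START_STOP;
          if (use_power_condition)
              cdb[4] = 3 << 4;                /* POWER CONDITION = 3 (standby) */
          else
              cdb[4] = 0;                     /* power condition = 0, start = 0:
                                                 ignored or mishandled by the
                                                 firmwares named above */
      }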
  9. 28 Jun 2008, 1 commit
  10. 19 Jun 2008, 8 commits
  11. 21 May 2008, 1 commit
    • firewire: prevent userspace from accessing shut down devices · 551f4cb9
      Jay Fenlason authored
      If userspace ignores the POLLERR bit from poll(), and only attempts to
      read() the device when POLLIN is set, it can still make ioctl() calls on
      a device that has been removed from the system.  The node_id and
      generation returned by GET_INFO will be outdated, but INITIATE_BUS_RESET
      would still cause a bus reset, and GET_CYCLE_TIMER will return data.
      And if you guess the correct generation to use, you can send requests to
      a different device on the bus, and get responses back.
      
      This patch prevents open, ioctl, compat_ioctl, and mmap against shutdown
      devices.
      Signed-off-by: Jay Fenlason <fenlason@redhat.com>
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      551f4cb9
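      A sketch of the added guard (types and names simplified; the real check in
      fw-cdev tests the fw_device state): every file operation first checks whether
      the device has been shut down and fails with -ENODEV if so.

      #include <errno.h>

      enum device_state { DEVICE_RUNNING, DEVICE_GONE };

      struct client {
          enum device_state device_state;
      };

      static int check_device(const struct client *client)
      {
          return client->device_state == DEVICE_GONE ? -ENODEV : 0;
      }

      /* open, ioctl, compat_ioctl and mmap all start with the same check. */
      static long sketch_ioctl(struct client *client, unsigned int cmd, void *arg)
      {
          int ret = check_device(client);

          if (ret)
              return ret;
          /* ...dispatch cmd/arg as before... */
          return 0;
      }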
  12. 02 May 2008, 2 commits
    • [SCSI] Let scsi_cmnd->cmnd use request->cmd buffer · 64a87b24
      Boaz Harrosh authored
       - struct scsi_cmnd had a 16-byte command buffer of its own.
         This is an unnecessary duplication and copy of the request's
         cmd.  It is probably a leftover from the time when scsi_cmnd
         could function without a request attached.  So clean that up.
      
       - Once the above is done, a few places, apart from scsi-ml, needed
         adjustments due to the changed data type of scsi_cmnd->cmnd.
      
       - Lots of drivers still use MAX_COMMAND_SIZE, so I have left
         that #define but equated it to BLK_MAX_CDB.  The way I see it,
         and as reflected in the patch below, is:
         MAX_COMMAND_SIZE - the longest fixed-length (*) SCSI CDB
                            as per the SCSI standard; not related
                            to the implementation.
         BLK_MAX_CDB      - the allocated space at the request level.
      
       - I have audited all ISA drivers and made sure none use ->cmnd in a DMA
         operation.  The same audit was done by Andi Kleen.
      
      (*) Fixed-length here means commands whose size can be determined from
         their opcode and whose CDB does not carry a length specifier (unlike
         the VARIABLE_LENGTH_CMD (0x7f) command).  This is actually not exactly
         true: the SCSI standard also defines extended commands and
         vendor-specific commands that can be bigger than 16 bytes.  The kernel
         will support these using the same infrastructure used for VARLEN CDBs.
         So in effect MAX_COMMAND_SIZE means the maximum command size that
         scsi-ml supports without the ULD specifying a cmd_len.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
      Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
      64a87b24
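      The data-structure change amounts to roughly this (simplified sketch; see
      scsi_cmnd.h and blkdev.h for the real definitions and the full set of
      members):

      #define BLK_MAX_CDB        16
      #define MAX_COMMAND_SIZE   BLK_MAX_CDB   /* kept for existing drivers */

      struct request {
          unsigned char cmd[BLK_MAX_CDB];      /* the one and only CDB buffer */
          unsigned short cmd_len;
          /* ... */
      };

      struct scsi_cmnd {
          struct request *request;
          unsigned char *cmnd;                 /* was: unsigned char cmnd[16]; */
          unsigned short cmd_len;
          /* ... */
      };

      /* Setting up a command now just shares the request's buffer. */
      static void setup_cmnd(struct scsi_cmnd *cmd, struct request *rq)
      {
          cmd->request = rq;
          cmd->cmnd = rq->cmd;
          cmd->cmd_len = rq->cmd_len;
      }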
    • firewire: fw-sbp2: log scsi_target ID at release · f32ddadd
      Stefan Richter authored
      Makes the good-bye message more informative.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Signed-off-by: Jarod Wilson <jwilson@redhat.com>
      f32ddadd
  13. 19 Apr 2008, 2 commits
  14. 18 Apr 2008, 7 commits