1. 09 May 2009 (2 commits)
  2. 08 May 2009 (7 commits)
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6 · d7a59269
      Committed by Linus Torvalds
      * git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6: (32 commits)
        [CIFS] Fix double list addition in cifs posix open code
        [CIFS] Allow raw ntlmssp code to be enabled with sec=ntlmssp
        [CIFS] Fix SMB uid in NTLMSSP authenticate request
        [CIFS] NTLMSSP reenabled after move from connect.c to sess.c
        [CIFS] Remove sparse warning
        [CIFS] remove checkpatch warning
        [CIFS] Fix final user of old string conversion code
        [CIFS] remove cifs_strfromUCS_le
        [CIFS] NTLMSSP support moving into new file, old dead code removed
        [CIFS] Fix endian conversion of vcnum field
        [CIFS] Remove trailing whitespace
        [CIFS] Remove sparse endian warnings
        [CIFS] Add remaining ntlmssp flags and standardize field names
        [CIFS] Fix build warning
        cifs: fix length handling in cifs_get_name_from_search_buf
        [CIFS] Remove unneeded QuerySymlink call and fix mapping for unmapped status
        [CIFS] rename cifs_strndup to cifs_strndup_from_ucs
        Added loop check when mounting DFS tree.
        Enable dfs submounts to handle remote referrals.
        [CIFS] Remove older session setup implementation
        ...
    • [CIFS] Fix double list addition in cifs posix open code · 90e4ee5d
      Committed by Steve French
      Remove adding the open file entry twice to the lists in the file, and do
      not fill in the file info twice in the case of posix opens and creates.
      Signed-off-by: Shirish Pargaonkar <shirishp@us.ibm.com>
      Signed-off-by: Steve French <sfrench@us.ibm.com>
    • NOMMU: Don't check vm_region::vm_start is page aligned in add_nommu_region() · 8c9ed899
      Committed by David Howells
      Don't check that vm_region::vm_start is page aligned in add_nommu_region(),
      because the region may reflect a non-page-aligned mapped file, such as one
      obtained from RomFS XIP.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Greg Ungerer <gerg@uclinux.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'for-linus' of git://neil.brown.name/md · ee7fee0b
      Committed by Linus Torvalds
      * 'for-linus' of git://neil.brown.name/md:
        md: remove rd%d links immediately after stopping an array.
        md: remove ability to explicitly set an inactive array to 'clean'.
        md: constify VFTs
        md: tidy up status_resync to handle large arrays.
        md: fix some (more) errors with bitmaps on devices larger than 2TB.
        md/raid10: don't clear bitmap during recovery if array will still be degraded.
        md: fix loading of out-of-date bitmap.
    • random: make get_random_int() more random · 8a0a9bd4
      Committed by Linus Torvalds
      It's a really simple patch that basically just open-codes the current
      "secure_ip_id()" call, but when open-coding it we now use a _static_
      hashing area, so that it gets updated every time.
      
      And to make sure somebody can't just start from the same original seed of
      all-zeroes, and then do the "half_md4_transform()" over and over until
      they get the same sequence as the kernel has, each iteration also mixes in
      the same old "current->pid + jiffies" we used - so we should now have a
      regular strong pseudo-random number generator, but we also have one that doesn't
      have a single seed.
      
      Note: the "pid + jiffies" is just meant to be a tiny tiny bit of noise. It
      has no real meaning. It could be anything. I just picked the previous
      seed, it's just that now we keep the state in between calls and that will
      feed into the next result, and that should make all the difference.
      
      I made that hash be a per-cpu data just to avoid cache-line ping-pong:
      having multiple CPU's write to the same data would be fine for randomness,
      and add yet another layer of chaos to it, but since get_random_int() is
      supposed to be a fast interface I did it that way instead. I considered
      using "__raw_get_cpu_var()" to avoid any preemption overhead while still
      getting the hash be _mostly_ ping-pong free, but in the end good taste won
      out. (A userspace sketch of the scheme follows below.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
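      The scheme described in the message can be pictured with a short userspace
      sketch: persistent hash state that survives between calls, with a little
      per-call noise folded in. This is not the kernel code; mix() is only a
      stand-in for half_md4_transform(), get_random_int_sketch() is an invented
      name, and getpid()/time() stand in for current->pid + jiffies.

      #include <stdint.h>
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      static uint32_t hash_state[4];   /* persists across calls (kernel: per-cpu) */

      /* Stand-in mixer, NOT the kernel's half_md4_transform(). */
      static void mix(uint32_t s[4], uint32_t noise)
      {
              for (int i = 0; i < 4; i++) {
                      s[i] += noise + (s[(i + 1) & 3] << 7) + (s[(i + 3) & 3] >> 3);
                      s[i] ^= s[i] >> 16;
                      s[i] *= 0x9e3779b1u;
              }
      }

      static uint32_t get_random_int_sketch(void)
      {
              /* the "pid + jiffies"-style per-call noise the message mentions */
              uint32_t noise = (uint32_t)getpid() + (uint32_t)time(NULL);

              mix(hash_state, noise);
              return hash_state[0];    /* state carries over into the next call */
      }

      int main(void)
      {
              for (int i = 0; i < 4; i++)
                      printf("%08x\n", (unsigned)get_random_int_sketch());
              return 0;
      }

      Keeping hash_state between calls, plus the small per-call noise, is what the
      message means by having a generator that no longer has a single fixed seed.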
    • Merge master.kernel.org:/home/rmk/linux-2.6-arm · 2c66fa7e
      Committed by Linus Torvalds
      * master.kernel.org:/home/rmk/linux-2.6-arm:
        [ARM] 5507/1: support R_ARM_MOVW_ABS_NC and MOVT_ABS relocation types
        [ARM] 5506/1: davinci: DMA_32BIT_MASK --> DMA_BIT_MASK(32)
        i.MX31: Disable CPU_32v6K in mx3_defconfig.
        mx3fb: Fix compilation with CONFIG_PM
        mx27ads: move PBC mapping out of vmalloc space
        MXC: remove BUG_ON in interrupt handler
        mx31: remove mx31moboard_defconfig
        ARM: ARCH_MXC should select HAVE_CLK
        mxc : BUG in imx_dma_request
        mxc : Clean up properly when imx_dma_free() used without imx_dma_disable()
        [ARM] mv78xx0: update defconfig
        [ARM] orion5x: update defconfig
        [ARM] Kirkwood: update defconfig
        [ARM] Kconfig typo fix:  "PXA930" -> "CPU_PXA930".
        [ARM] S3C2412: Add missing cache flush in suspend code
        [ARM] S3C: Add UDIVSLOT support for newer UARTS
        [ARM] S3C64XX: Add S3C64XX_PA_IIS{0,1} to <mach/map.h>
    • [ARM] 5507/1: support R_ARM_MOVW_ABS_NC and MOVT_ABS relocation types · ae51e609
      Committed by Paul Gortmaker
      From: Bruce Ashfield <bruce.ashfield@windriver.com>
      
      To fully support the armv7-a instruction set/optimizations, support
      for the R_ARM_MOVW_ABS_NC and R_ARM_MOVT_ABS relocation types is
      required.
      
      MOVW and MOVT are both load-immediate instructions: MOVW loads 16 bits
      into the bottom half of a register, and MOVT loads 16 bits into the top
      half of a register.
      
      The relocation information for these instructions carries a full 32-bit
      value, plus an addend which is stored in the 16 immediate bits of the
      instruction itself.  The immediate bits in the instruction are not
      contiguous (the register number splits them into a 4-bit and a 12-bit
      field), so the addend has to be extracted accordingly and added to the
      value.  The value is then split and put back into the instruction; a MOVW
      uses the bottom 16 bits of the value, and a MOVT uses the top 16 bits
      (see the sketch below).
      Signed-off-by: David Borman <david.borman@windriver.com>
      Signed-off-by: Bruce Ashfield <bruce.ashfield@windriver.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
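      A small userspace sketch of the fix-up described above, assuming the usual
      ARM encoding where the 16-bit immediate is split into imm4 (bits 19:16)
      and imm12 (bits 11:0); the helper names and test values are invented here,
      not taken from the kernel:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Pull the split 16-bit immediate (the addend) out of the instruction. */
      static uint32_t extract_imm16(uint32_t insn)
      {
              return ((insn & 0x000f0000) >> 4) | (insn & 0x00000fff);
      }

      /* Write a 16-bit value back into the imm4/imm12 fields. */
      static uint32_t insert_imm16(uint32_t insn, uint32_t val)
      {
              insn &= 0xfff0f000;                    /* clear imm4 and imm12 */
              return insn | ((val & 0xf000) << 4) | (val & 0x0fff);
      }

      /* R_ARM_MOVW_ABS_NC / R_ARM_MOVT_ABS style fix-up. */
      static uint32_t relocate_mov(uint32_t insn, uint32_t sym_value, bool is_movt)
      {
              uint32_t addend = extract_imm16(insn);

              addend = (addend ^ 0x8000) - 0x8000;   /* sign-extend 16 -> 32 bits */
              uint32_t value = addend + sym_value;

              if (is_movt)
                      value >>= 16;                  /* MOVT takes the top half */
              return insert_imm16(insn, value & 0xffff);
      }

      int main(void)
      {
              uint32_t movw = 0xe3000000;            /* MOVW r0, #0 (zero addend) */
              uint32_t movt = 0xe3400000;            /* MOVT r0, #0 */
              uint32_t sym  = 0x12345678;            /* hypothetical symbol value */

              printf("movw -> %08x\n", (unsigned)relocate_mov(movw, sym, false));
              printf("movt -> %08x\n", (unsigned)relocate_mov(movt, sym, true));
              return 0;
      }

      For sym = 0x12345678 this yields MOVW r0, #0x5678 and MOVT r0, #0x1234,
      i.e. the bottom and top 16 bits of the value, as the message describes.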
  3. 07 May 2009 (25 commits)
  4. 06 May 2009 (6 commits)